Merge "ceph: remove deprecated glance_store options"
diff --git a/MAINTAINERS.rst b/MAINTAINERS.rst
index eeb1f21..d4968a6 100644
--- a/MAINTAINERS.rst
+++ b/MAINTAINERS.rst
@@ -63,11 +63,6 @@
* YAMAMOTO Takashi <yamamoto@valinux.co.jp>
* Fumihiko Kakuma <kakuma@valinux.co.jp>
-Sahara
-~~~~~~
-
-* Sergey Lukjanov <slukjanov@mirantis.com>
-
Swift
~~~~~
diff --git a/README.md b/README.md
index 455e1c6..750190b 100644
--- a/README.md
+++ b/README.md
@@ -117,19 +117,13 @@
# RPC Backend
-Multiple RPC backends are available. Currently, this
-includes RabbitMQ (default), Qpid, and ZeroMQ. Your backend of
-choice may be selected via the `localrc` section.
+Support for a RabbitMQ RPC backend is included. Additional RPC backends may
+be available via external plugins. Enabling or disabling RabbitMQ is handled
+via the usual service functions and ``ENABLED_SERVICES``.
-Note that selecting more than one RPC backend will result in a failure.
+Example of disabling RabbitMQ in ``local.conf``:
-Example (ZeroMQ):
-
- ENABLED_SERVICES="$ENABLED_SERVICES,-rabbit,-qpid,zeromq"
-
-Example (Qpid):
-
- ENABLED_SERVICES="$ENABLED_SERVICES,-rabbit,-zeromq,qpid"
+ disable_service rabbit
# Apache Frontend
diff --git a/doc/source/conf.py b/doc/source/conf.py
index 3e9aa45..6e3ec02 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -26,7 +26,7 @@
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
-extensions = [ 'oslosphinx' ]
+extensions = [ 'oslosphinx', 'sphinxcontrib.blockdiag', 'sphinxcontrib.nwdiag' ]
todo_include_todos = True
diff --git a/doc/source/configuration.rst b/doc/source/configuration.rst
index 8e2e7ff..e91012f 100644
--- a/doc/source/configuration.rst
+++ b/doc/source/configuration.rst
@@ -201,7 +201,7 @@
| *Defaults: ``LOGFILE="" LOGDAYS=7 LOG_COLOR=True``*
| By default ``stack.sh`` output is only written to the console
- where is runs. It can be sent to a file in addition to the console
+ where it runs. It can be sent to a file in addition to the console
by setting ``LOGFILE`` to the fully-qualified name of the
destination log file. A timestamp will be appended to the given
filename for each run of ``stack.sh``.
diff --git a/doc/source/faq.rst b/doc/source/faq.rst
index d3b491f..f61002b 100644
--- a/doc/source/faq.rst
+++ b/doc/source/faq.rst
@@ -2,151 +2,157 @@
FAQ
===
-- `General Questions <#general>`__
-- `Operation and Configuration <#ops_conf>`__
-- `Miscellaneous <#misc>`__
+.. contents::
+ :local:
General Questions
=================
-Q: Can I use DevStack for production?
- A: No. We mean it. Really. DevStack makes some implementation
- choices that are not appropriate for production deployments. We
- warned you!
-Q: Then why selinux in enforcing mode?
- A: That is the default on current Fedora and RHEL releases. DevStack
- has (rightly so) a bad reputation for its security practices; it has
- always been meant as a development tool first and system integration
- later. This is changing as the security issues around OpenStack's
- use of root (for example) have been tightened and developers need to
- be better equipped to work in these environments. ``stack.sh``'s use
- of root is primarily to support the activities that would be handled
- by packaging in "real" deployments. To remove additional protections
- that will be desired/required in production would be a step
- backward.
-Q: But selinux is disabled in RHEL!
- A: Today it is, yes. That is a specific exception that certain
- DevStack contributors fought strongly against. The primary reason it
- was allowed was to support using RHEL6 as the Python 2.6 test
- platform and that took priority time-wise. This will not be the case
- with RHEL 7.
-Q: Why a shell script, why not chef/puppet/...
- A: The script is meant to be read by humans (as well as ran by
- computers); it is the primary documentation after all. Using a
- recipe system requires everyone to agree and understand chef or
- puppet.
-Q: Why not use Crowbar?
- A: DevStack is optimized for documentation & developers. As some of
- us use `Crowbar <https://github.com/dellcloudedge/crowbar>`__ for
- production deployments, we hope developers documenting how they
- setup systems for new features supports projects like Crowbar.
-Q: I'd like to help!
- A: That isn't a question, but please do! The source for DevStack is
- at
- `git.openstack.org <https://git.openstack.org/cgit/openstack-dev/devstack>`__
- and bug reports go to
- `LaunchPad <http://bugs.launchpad.net/devstack/>`__. Contributions
- follow the usual process as described in the `developer
- guide <http://docs.openstack.org/infra/manual/developers.html>`__. This Sphinx
- documentation is housed in the doc directory.
-Q: Why not use packages?
- A: Unlike packages, DevStack leaves your cloud ready to develop -
- checkouts of the code and services running in screen. However, many
- people are doing the hard work of packaging and recipes for
- production deployments. We hope this script serves as a way to
- communicate configuration changes between developers and packagers.
-Q: Why isn't $MY\_FAVORITE\_DISTRO supported?
- A: DevStack is meant for developers and those who want to see how
- OpenStack really works. DevStack is known to run on the
- distro/release combinations listed in ``README.md``. DevStack is
- only supported on releases other than those documented in
- ``README.md`` on a best-effort basis.
-Q: What about Fedora/RHEL/CentOS?
- A: Fedora and CentOS/RHEL are supported via rpm dependency files and
- specific checks in ``stack.sh``. Support will follow the pattern set
- with the Ubuntu testing, i.e. only a single release of the distro
- will receive regular testing, others will be handled on a
- best-effort basis.
-Q: Are there any differences between Ubuntu and Fedora support?
- A: Neutron is not fully supported prior to Fedora 18 due lack of
- OpenVSwitch packages.
-Q: Why can't I use another shell?
- A: DevStack now uses some specific bash-ism that require Bash 4, such
- as associative arrays. Simple compatibility patches have been accepted
- in the past when they are not complex, at this point no additional
- compatibility patches will be considered except for shells matching
- the array functionality as it is very ingrained in the repo and project
- management.
-Q: But, but, can't I test on OS/X?
- A: Yes, even you, core developer who complained about this, needs to
- install bash 4 via homebrew to keep running tests on OS/X. Get a Real
- Operating System. (For most of you who don't know, I am referring to
- myself.)
+Can I use DevStack for production?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+DevStack is targeted at developers and CI systems to use the raw
+upstream code. It makes many choices that are not appropriate for
+production systems.
+
+Your best option is probably to choose a `distribution of OpenStack
+<https://www.openstack.org/marketplace/distros/distribution>`__.
+
+Why a shell script, why not chef/puppet/...
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The script is meant to be read by humans (as well as run by
+computers); it is the primary documentation after all. Using a recipe
+system requires everyone to agree and understand chef or puppet.
+
+I'd like to help!
+~~~~~~~~~~~~~~~~~
+
+That isn't a question, but please do! The source for DevStack is at
+`git.openstack.org
+<https://git.openstack.org/cgit/openstack-dev/devstack>`__ and bug
+reports go to `LaunchPad
+<http://bugs.launchpad.net/devstack/>`__. Contributions follow the
+usual process as described in the `developer guide
+<http://docs.openstack.org/infra/manual/developers.html>`__. This
+Sphinx documentation is housed in the doc directory.
+
+Why not use packages?
+~~~~~~~~~~~~~~~~~~~~~
+
+Unlike packages, DevStack leaves your cloud ready to develop -
+checkouts of the code and services running in screen. However, many
+people are doing the hard work of packaging and recipes for production
+deployments.
+
+Why isn't $MY\_FAVORITE\_DISTRO supported?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+DevStack is meant for developers and those who want to see how
+OpenStack really works. DevStack is known to run on the distro/release
+combinations listed in ``README.md``. DevStack is only supported on
+releases other than those documented in ``README.md`` on a best-effort
+basis.
+
+Are there any differences between Ubuntu and CentOS/Fedora support?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Both should work well and are tested by DevStack CI.
+
+Why can't I use another shell?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+DevStack now uses some specific bash-isms that require Bash 4, such
+as associative arrays. Simple compatibility patches have been
+accepted in the past when they were not complex; at this point no
+additional compatibility patches will be considered except for shells
+matching the array functionality, as it is very ingrained in the repo
+and project management.
+
+Can I test on OS/X?
+~~~~~~~~~~~~~~~~~~~
+
+Some people have success with bash 4 installed via homebrew to keep
+running tests on OS/X.
+
+Can I at least source ``openrc`` with ``zsh``?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+People have reported success with a special function to run ``openrc``
+through bash for this:
+
+.. code-block:: bash
+
+ function sourceopenrc {
+ pushd ~/devstack >/dev/null
+ eval $(bash -c ". openrc $1 $2;env|sed -n '/OS_/ { s/^/export /;p}'")
+ popd >/dev/null
+ }
+
Operation and Configuration
===========================
-Q: Can DevStack handle a multi-node installation?
- A: Indirectly, yes. You run DevStack on each node with the
- appropriate configuration in ``local.conf``. The primary
- considerations are turning off the services not required on the
- secondary nodes, making sure the passwords match and setting the
- various API URLs to the right place.
-Q: How can I document the environment that DevStack is using?
- A: DevStack includes a script (``tools/info.sh``) that gathers the
- versions of the relevant installed apt packages, pip packages and
- git repos. This is a good way to verify what Python modules are
- installed.
-Q: How do I turn off a service that is enabled by default?
- A: Services can be turned off by adding ``disable_service xxx`` to
- ``local.conf`` (using ``n-vol`` in this example):
+Can DevStack handle a multi-node installation?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Yes, see the :doc:`multinode lab guide <guides/multinode-lab>`.
+
+How can I document the environment that DevStack is using?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+DevStack includes a script (``tools/info.sh``) that gathers the
+versions of the relevant installed apt packages, pip packages and git
+repos. This is a good way to verify what Python modules are
+installed.
+
+How do I turn off a service that is enabled by default?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Services can be turned off by adding ``disable_service xxx`` to
+``local.conf`` (using ``n-vol`` in this example):
::
disable_service n-vol
-Q: Is enabling a service that defaults to off done with the reverse of the above?
- A: Of course!
+Is enabling a service that defaults to off done with the reverse of the above?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Of course!
::
- enable_service qpid
+ enable_service q-svc
-Q: How do I run a specific OpenStack milestone?
- A: OpenStack milestones have tags set in the git repo. Set the appropriate tag in the ``*_BRANCH`` variables in ``local.conf``. Swift is on its own release schedule so pick a tag in the Swift repo that is just before the milestone release. For example:
+How do I run a specific OpenStack milestone?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+OpenStack milestones have tags set in the git repo. Set the
+appropriate tag in the ``*_BRANCH`` variables in ``local.conf``.
+Swift is on its own release schedule so pick a tag in the Swift repo
+that is just before the milestone release. For example:
::
[[local|localrc]]
- GLANCE_BRANCH=stable/juno
- HORIZON_BRANCH=stable/juno
- KEYSTONE_BRANCH=stable/juno
- NOVA_BRANCH=stable/juno
- GLANCE_BRANCH=stable/juno
- NEUTRON_BRANCH=stable/juno
- SWIFT_BRANCH=2.2.1
+ GLANCE_BRANCH=stable/kilo
+ HORIZON_BRANCH=stable/kilo
+ KEYSTONE_BRANCH=stable/kilo
+ NOVA_BRANCH=stable/kilo
+ GLANCE_BRANCH=stable/kilo
+ NEUTRON_BRANCH=stable/kilo
+ SWIFT_BRANCH=2.3.0
-Q: Why not use [STRIKEOUT:``tools/pip-requires``]\ ``requirements.txt`` to grab project dependencies?
- [STRIKEOUT:The majority of deployments will use packages to install
- OpenStack that will have distro-based packages as dependencies.
- DevStack installs as many of these Python packages as possible to
- mimic the expected production environment.] Certain Linux
- distributions have a 'lack of workaround' in their Python
- configurations that installs vendor packaged Python modules and
- pip-installed modules to the SAME DIRECTORY TREE. This is causing
- heartache and moving us in the direction of installing more modules
- from PyPI than vendor packages. However, that is only being done as
- necessary as the packaging needs to catch up to the development
- cycle anyway so this is kept to a minimum.
-Q: What can I do about RabbitMQ not wanting to start on my fresh new VM?
- A: This is often caused by ``erlang`` not being happy with the
- hostname resolving to a reachable IP address. Make sure your
- hostname resolves to a working IP address; setting it to 127.0.0.1
- in ``/etc/hosts`` is often good enough for a single-node
- installation. And in an extreme case, use ``clean.sh`` to eradicate
- it and try again.
-Q: How can I set up Heat in stand-alone configuration?
- A: Configure ``local.conf`` thusly:
+What can I do about RabbitMQ not wanting to start on my fresh new VM?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This is often caused by ``erlang`` not being happy with the hostname
+resolving to a reachable IP address. Make sure your hostname resolves
+to a working IP address; setting it to 127.0.0.1 in ``/etc/hosts`` is
+often good enough for a single-node installation. And in an extreme
+case, use ``clean.sh`` to eradicate it and try again.
+
+How can I set up Heat in stand-alone configuration?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure ``local.conf`` thusly:
::
@@ -156,22 +162,25 @@
KEYSTONE_SERVICE_HOST=<keystone-host>
KEYSTONE_AUTH_HOST=<keystone-host>
-Q: Why are my configuration changes ignored?
- A: You may have run into the package prerequisite installation
- timeout. ``tools/install_prereqs.sh`` has a timer that skips the
- package installation checks if it was run within the last
- ``PREREQ_RERUN_HOURS`` hours (default is 2). To override this, set
- ``FORCE_PREREQ=1`` and the package checks will never be skipped.
+Why are my configuration changes ignored?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You may have run into the package prerequisite installation
+timeout. ``tools/install_prereqs.sh`` has a timer that skips the
+package installation checks if it was run within the last
+``PREREQ_RERUN_HOURS`` hours (default is 2). To override this, set
+``FORCE_PREREQ=1`` and the package checks will never be skipped.
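A sketch of that override in the localrc section of ``local.conf``:

```
[[local|localrc]]
FORCE_PREREQ=1
```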
Miscellaneous
=============
-Q: ``tools/fixup_stuff.sh`` is broken and shouldn't 'fix' just one version of packages.
- A: [Another not-a-question] No it isn't. Stuff in there is to
- correct problems in an environment that need to be fixed elsewhere
- or may/will be fixed in a future release. In the case of
- ``httplib2`` and ``prettytable`` specific problems with specific
- versions are being worked around. If later releases have those
- problems than we'll add them to the script. Knowing about the broken
- future releases is valuable rather than polling to see if it has
- been fixed.
+``tools/fixup_stuff.sh`` is broken and shouldn't 'fix' just one version of packages.
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Stuff in there corrects problems in an environment that need to be
+fixed elsewhere, or that may/will be fixed in a future release. In
+the case of ``httplib2`` and ``prettytable``, specific problems with
+specific versions are being worked around. If later releases have
+those problems then we'll add them to the script. Knowing about the
+broken future releases is valuable rather than polling to see if it
+has been fixed.
diff --git a/doc/source/guides/multinode-lab.rst b/doc/source/guides/multinode-lab.rst
index b2617c9..27d71f1 100644
--- a/doc/source/guides/multinode-lab.rst
+++ b/doc/source/guides/multinode-lab.rst
@@ -178,7 +178,7 @@
MYSQL_HOST=192.168.42.11
RABBIT_HOST=192.168.42.11
GLANCE_HOSTPORT=192.168.42.11:9292
- ENABLED_SERVICES=n-cpu,n-net,n-api,c-sch,c-api,c-vol
+ ENABLED_SERVICES=n-cpu,n-net,n-api,c-vol
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL="http://192.168.42.11:6080/vnc_auto.html"
VNCSERVER_LISTEN=$HOST_IP
diff --git a/doc/source/guides/neutron.rst b/doc/source/guides/neutron.rst
index b0a8907..40a5632 100644
--- a/doc/source/guides/neutron.rst
+++ b/doc/source/guides/neutron.rst
@@ -5,11 +5,77 @@
This guide will walk you through using OpenStack neutron with the ML2
plugin and the Open vSwitch mechanism driver.
-Network Interface Configuration
-===============================
-To use neutron, it is suggested that two network interfaces be present
-in the host operating system.
+Using Neutron with a Single Interface
+=====================================
+
+In some instances, like on a developer laptop, there is only one
+network interface that is available. In this scenario, the physical
+interface is added to the Open vSwitch bridge, and the IP address of
+the laptop is migrated onto the bridge interface. That way, the
+physical interface can be used to transmit tenant network traffic,
+the OpenStack API traffic, and management traffic.
+
+
+Physical Network Setup
+----------------------
+
+In most cases where DevStack is being deployed with a single
+interface, there is a hardware router that is being used for external
+connectivity and DHCP. The developer machine is connected to this
+network and is on a shared subnet with other machines.
+
+.. nwdiag::
+
+ nwdiag {
+ inet [ shape = cloud ];
+ router;
+ inet -- router;
+
+ network hardware_network {
+ address = "172.18.161.0/24"
+ router [ address = "172.18.161.1" ];
+ devstack_laptop [ address = "172.18.161.6" ];
+ }
+ }
+
+
+DevStack Configuration
+----------------------
+
+
+::
+
+ HOST_IP=172.18.161.6
+ SERVICE_HOST=172.18.161.6
+ MYSQL_HOST=172.18.161.6
+ RABBIT_HOST=172.18.161.6
+ GLANCE_HOSTPORT=172.18.161.6:9292
+ ADMIN_PASSWORD=secrete
+ MYSQL_PASSWORD=secrete
+ RABBIT_PASSWORD=secrete
+ SERVICE_PASSWORD=secrete
+ SERVICE_TOKEN=secrete
+
+ ## Neutron options
+ Q_USE_SECGROUP=True
+ FLOATING_RANGE="172.18.161.1/24"
+ FIXED_RANGE="10.0.0.0/24"
+ Q_FLOATING_ALLOCATION_POOL=start=172.18.161.250,end=172.18.161.254
+ PUBLIC_NETWORK_GATEWAY="172.18.161.1"
+ Q_L3_ENABLED=True
+ PUBLIC_INTERFACE=eth0
+ Q_USE_PROVIDERNET_FOR_PUBLIC=True
+ OVS_PHYSICAL_BRIDGE=br-ex
+ PUBLIC_BRIDGE=br-ex
+ OVS_BRIDGE_MAPPINGS=public:br-ex
+
+
+
+
+
+Using Neutron with Multiple Interfaces
+======================================
The first interface, eth0 is used for the OpenStack management (API,
message bus, etc) as well as for ssh for an administrator to access
@@ -195,15 +261,18 @@
## Neutron Networking options used to create Neutron Subnets
- FIXED_RANGE="10.1.1.0/24"
+ FIXED_RANGE="203.0.113.0/24"
PROVIDER_SUBNET_NAME="provider_net"
PROVIDER_NETWORK_TYPE="vlan"
SEGMENTATION_ID=2010
In this configuration we are defining FIXED_RANGE to be a
-subnet that exists in the private RFC1918 address space - however
-in a real setup FIXED_RANGE would be a public IP address range, so
-that you could access your instances from the public internet.
+publicly routed IPv4 subnet. In this specific instance we are using
+the special TEST-NET-3 subnet defined in `RFC 5737 <http://tools.ietf.org/html/rfc5737>`_,
+which is used for documentation. In your DevStack setup, FIXED_RANGE
+would be a public IP address range that you or your organization has
+allocated to you, so that you could access your instances from the
+public internet.
The following is a snippet of the DevStack configuration on the
compute node.
diff --git a/doc/source/index.rst b/doc/source/index.rst
index e0c3f3a..2dd0241 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -10,6 +10,7 @@
overview
configuration
plugins
+ plugin-registry
faq
changes
hacking
@@ -19,9 +20,9 @@
#. Select a Linux Distribution
- Only Ubuntu 14.04 (Trusty), Fedora 20 and CentOS/RHEL 7 are
- documented here. OpenStack also runs and is packaged on other flavors
- of Linux such as OpenSUSE and Debian.
+ Only Ubuntu 14.04 (Trusty), Fedora 21 (or Fedora 22) and CentOS/RHEL
+ 7 are documented here. OpenStack also runs and is packaged on other
+ flavors of Linux such as OpenSUSE and Debian.
#. Install Selected OS
@@ -169,7 +170,6 @@
* `lib/nova <lib/nova.html>`__
* `lib/oslo <lib/oslo.html>`__
* `lib/rpc\_backend <lib/rpc_backend.html>`__
-* `lib/sahara <lib/sahara.html>`__
* `lib/swift <lib/swift.html>`__
* `lib/tempest <lib/tempest.html>`__
* `lib/tls <lib/tls.html>`__
@@ -180,7 +180,6 @@
* `extras.d/50-ironic.sh <extras.d/50-ironic.sh.html>`__
* `extras.d/60-ceph.sh <extras.d/60-ceph.sh.html>`__
-* `extras.d/70-sahara.sh <extras.d/70-sahara.sh.html>`__
* `extras.d/70-tuskar.sh <extras.d/70-tuskar.sh.html>`__
* `extras.d/70-zaqar.sh <extras.d/70-zaqar.sh.html>`__
* `extras.d/80-tempest.sh <extras.d/80-tempest.sh.html>`__
@@ -237,7 +236,6 @@
* `exercises/floating\_ips.sh <exercises/floating_ips.sh.html>`__
* `exercises/horizon.sh <exercises/horizon.sh.html>`__
* `exercises/neutron-adv-test.sh <exercises/neutron-adv-test.sh.html>`__
-* `exercises/sahara.sh <exercises/sahara.sh.html>`__
* `exercises/sec\_groups.sh <exercises/sec_groups.sh.html>`__
* `exercises/swift.sh <exercises/swift.sh.html>`__
* `exercises/volumes.sh <exercises/volumes.sh.html>`__
diff --git a/doc/source/plugin-registry.rst b/doc/source/plugin-registry.rst
new file mode 100644
index 0000000..c5c4e1e
--- /dev/null
+++ b/doc/source/plugin-registry.rst
@@ -0,0 +1,75 @@
+..
+ Note to reviewers: the intent of this file is to be easy for
+ community members to update. As such fast approving (single core +2)
+ is fine as long as you've identified that the plugin listed actually exists.
+
+==========================
+ DevStack Plugin Registry
+==========================
+
+Since we've created the external plugin mechanism, it's gotten used by
+a lot of projects. The following is a list of plugins that currently
+exist. Any project that wishes to list their plugin here is welcome
+to.
+
+Official OpenStack Projects
+===========================
+
+The following are plugins that exist for official OpenStack projects.
+
++--------------------+-------------------------------------------+--------------------+
+|Plugin Name |URL |Comments |
++--------------------+-------------------------------------------+--------------------+
+|magnum |git://git.openstack.org/openstack/magnum | |
++--------------------+-------------------------------------------+--------------------+
+|sahara |git://git.openstack.org/openstack/sahara | |
++--------------------+-------------------------------------------+--------------------+
+|trove |git://git.openstack.org/openstack/trove | |
++--------------------+-------------------------------------------+--------------------+
+|zaqar |git://git.openstack.org/openstack/zarar | |
++--------------------+-------------------------------------------+--------------------+
+
+
+
+Drivers
+=======
+
++--------------------+-------------------------------------------------+------------------+
+|Plugin Name |URL |Comments |
++--------------------+-------------------------------------------------+------------------+
+|dragonflow |git://git.openstack.org/openstack/dragonflow |[d1]_ |
++--------------------+-------------------------------------------------+------------------+
+|odl |git://git.openstack.org/openstack/networking-odl |[d2]_ |
++--------------------+-------------------------------------------------+------------------+
+
+.. [d1] demonstrates example of installing 3rd party SDN controller
+.. [d2] demonstrates a pretty advanced set of modes that allow one to
+   run OpenDaylight either from a pre-existing install, or also from
+   source
+
+Alternate Configs
+=================
+
++-------------+------------------------------------------------------------+------------+
+| Plugin Name | URL | Comments |
+| | | |
++-------------+------------------------------------------------------------+------------+
+|glusterfs |git://git.openstack.org/stackforge/devstack-plugin-glusterfs| |
++-------------+------------------------------------------------------------+------------+
+| | | |
++-------------+------------------------------------------------------------+------------+
+
+Additional Services
+===================
+
++-------------+------------------------------------------+------------+
+| Plugin Name | URL | Comments |
+| | | |
++-------------+------------------------------------------+------------+
+|ec2-api |git://git.openstack.org/stackforge/ec2api |[as1]_ |
++-------------+------------------------------------------+------------+
+| | | |
++-------------+------------------------------------------+------------+
+
+.. [as1] the first functional devstack plugin, which is why it is
+   used in most of the examples.
diff --git a/doc/source/plugins.rst b/doc/source/plugins.rst
index c4ed228..b166936 100644
--- a/doc/source/plugins.rst
+++ b/doc/source/plugins.rst
@@ -2,103 +2,21 @@
Plugins
=======
-DevStack has a couple of plugin mechanisms to allow easily adding
-support for additional projects and features.
+The OpenStack ecosystem is wide and deep, and only growing more so
+every day. The value of DevStack is that it's simple enough that what
+it's doing can be clearly understood. And yet we'd like to support as
+much of the OpenStack ecosystem as possible. We do that with plugins.
-Extras.d Hooks
-==============
+DevStack plugins are bits of bash code that live outside the DevStack
+tree. They are called through a strong contract, so these plugins can
+be sure that they will continue to work in the future as DevStack
+evolves.
-These hooks are an extension of the service calls in
-``stack.sh`` at specific points in its run, plus ``unstack.sh`` and
-``clean.sh``. A number of the higher-layer projects are implemented in
-DevStack using this mechanism.
+Plugin Interface
+================
-The script in ``extras.d`` is expected to be mostly a dispatcher to
-functions in a ``lib/*`` script. The scripts are named with a
-zero-padded two digits sequence number prefix to control the order that
-the scripts are called, and with a suffix of ``.sh``. DevStack reserves
-for itself the sequence numbers 00 through 09 and 90 through 99.
-
-Below is a template that shows handlers for the possible command-line
-arguments:
-
-::
-
- # template.sh - DevStack extras.d dispatch script template
-
- # check for service enabled
- if is_service_enabled template; then
-
- if [[ "$1" == "source" ]]; then
- # Initial source of lib script
- source $TOP_DIR/lib/template
- fi
-
- if [[ "$1" == "stack" && "$2" == "pre-install" ]]; then
- # Set up system services
- echo_summary "Configuring system services Template"
- install_package cowsay
-
- elif [[ "$1" == "stack" && "$2" == "install" ]]; then
- # Perform installation of service source
- echo_summary "Installing Template"
- install_template
-
- elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
- # Configure after the other layer 1 and 2 services have been configured
- echo_summary "Configuring Template"
- configure_template
-
- elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
- # Initialize and start the template service
- echo_summary "Initializing Template"
- ##init_template
- fi
-
- if [[ "$1" == "unstack" ]]; then
- # Shut down template services
- # no-op
- :
- fi
-
- if [[ "$1" == "clean" ]]; then
- # Remove state and transient data
- # Remember clean.sh first calls unstack.sh
- # no-op
- :
- fi
- fi
-
-The arguments are:
-
-- **source** - Called by each script that utilizes ``extras.d`` hooks;
- this replaces directly sourcing the ``lib/*`` script.
-- **stack** - Called by ``stack.sh`` three times for different phases
- of its run:
-
- - **pre-install** - Called after system (OS) setup is complete and
- before project source is installed.
- - **install** - Called after the layer 1 and 2 projects source and
- their dependencies have been installed.
- - **post-config** - Called after the layer 1 and 2 services have
- been configured. All configuration files for enabled services
- should exist at this point.
- - **extra** - Called near the end after layer 1 and 2 services have
- been started. This is the existing hook and has not otherwise
- changed.
-
-- **unstack** - Called by ``unstack.sh`` before other services are shut
- down.
-- **clean** - Called by ``clean.sh`` before other services are cleaned,
- but after ``unstack.sh`` has been called.
-
-
-Externally Hosted Plugins
-=========================
-
-Based on the extras.d hooks, DevStack supports a standard mechansim
-for including plugins from external repositories. The plugin interface
-assumes the following:
+DevStack supports a standard mechanism for including plugins from
+external repositories. The plugin interface assumes the following:
An external git repository that includes a ``devstack/`` top level
directory. Inside this directory there can be 2 files.
@@ -118,11 +36,10 @@
default value only if the variable is unset or empty; e.g. in bash
syntax ``FOO=${FOO:-default}``.
-- ``plugin.sh`` - the actual plugin. It will be executed by devstack
- during it's run. The run order will be done in the registration
- order for these plugins, and will occur immediately after all in
- tree extras.d dispatch at the phase in question. The plugin.sh
- looks like the extras.d dispatcher above.
+- ``plugin.sh`` - the actual plugin. It is executed by devstack at
+  well-defined points during a ``stack.sh`` run. The plugin.sh
+  internal structure is discussed below.
+
Plugins are registered by adding the following to the localrc section
of ``local.conf``.
@@ -141,49 +58,121 @@
enable_plugin ec2api git://git.openstack.org/stackforge/ec2api
-Plugins for gate jobs
----------------------
+plugin.sh contract
+==================
-All OpenStack plugins that wish to be used as gate jobs need to exist
-in OpenStack's gerrit. Both ``openstack`` namespace and ``stackforge``
-namespace are fine. This allows testing of the plugin as well as
-provides network isolation against upstream git repository failures
-(which we see often enough to be an issue).
+``plugin.sh`` is a bash script that will be called at specific points
+during ``stack.sh``, ``unstack.sh``, and ``clean.sh``. It will be
+called in the following way::
-Ideally plugins will be implemented as ``devstack`` directory inside
-the project they are testing. For example, the stackforge/ec2-api
-project has it's pluggin support in it's tree.
+ source $PATH/TO/plugin.sh <mode> [phase]
-In the cases where there is no "project tree" per say (like
-integrating a backend storage configuration such as ceph or glusterfs)
-it's also allowed to build a dedicated
-``stackforge/devstack-plugin-FOO`` project to house the plugin.
+``mode`` can be thought of as the major mode being called, currently
+one of: ``stack``, ``unstack``, ``clean``. ``phase`` is used by modes
+which have multiple points during their run where it's necessary to
+be able to execute code. All existing ``mode`` and ``phase`` points
+are considered **strong contracts** and won't be removed without a
+reasonable deprecation period. Additional new ``mode`` or ``phase``
+points may be added at any time if we discover we need them to support
+additional kinds of plugins in devstack.
-Note jobs must not require cloning of repositories during tests.
-Tests must list their repository in the ``PROJECTS`` variable for
-`devstack-gate
-<https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate-wrap.sh>`_
-for the repository to be available to the test. Further information
-is provided in the project creator's guide.
+The current full list of ``mode`` and ``phase`` are:
-Hypervisor
-==========
+- **stack** - Called by ``stack.sh`` four times for different phases
+ of its run:
-Hypervisor plugins are fairly new and condense most hypervisor
-configuration into one place.
+ - **pre-install** - Called after system (OS) setup is complete and
+ before project source is installed.
+ - **install** - Called after the layer 1 and 2 projects source and
+ their dependencies have been installed.
+ - **post-config** - Called after the layer 1 and 2 services have
+ been configured. All configuration files for enabled services
+ should exist at this point.
+ - **extra** - Called near the end after layer 1 and 2 services have
+ been started.
-The initial plugin implemented was for Docker support and is a useful
-template for the required support. Plugins are placed in
-``lib/nova_plugins`` and named ``hypervisor-<name>`` where ``<name>`` is
-the value of ``VIRT_DRIVER``. Plugins must define the following
-functions:
+- **unstack** - Called by ``unstack.sh`` before other services are shut
+ down.
+- **clean** - Called by ``clean.sh`` before other services are cleaned,
+ but after ``unstack.sh`` has been called.
-- ``install_nova_hypervisor`` - install any external requirements
-- ``configure_nova_hypervisor`` - make configuration changes, including
- those to other services
-- ``start_nova_hypervisor`` - start any external services
-- ``stop_nova_hypervisor`` - stop any external services
-- ``cleanup_nova_hypervisor`` - remove transient data and cache
+Example plugin
+==============
+
+An example plugin would look something like the following.
+
+``devstack/settings``::
+
+ # settings file for template
+ enable_service template
+
+
+``devstack/plugin.sh``::
+
+ # plugin.sh - DevStack plugin.sh dispatch script template
+
+ function install_template {
+ ...
+ }
+
+ function init_template {
+ ...
+ }
+
+ function configure_template {
+ ...
+ }
+
+ # check for service enabled
+ if is_service_enabled template; then
+
+ if [[ "$1" == "stack" && "$2" == "pre-install" ]]; then
+ # Set up system services
+ echo_summary "Configuring system services Template"
+ install_package cowsay
+
+ elif [[ "$1" == "stack" && "$2" == "install" ]]; then
+ # Perform installation of service source
+ echo_summary "Installing Template"
+ install_template
+
+ elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
+ # Configure after the other layer 1 and 2 services have been configured
+ echo_summary "Configuring Template"
+ configure_template
+
+ elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
+ # Initialize and start the template service
+ echo_summary "Initializing Template"
+ init_template
+ fi
+
+ if [[ "$1" == "unstack" ]]; then
+ # Shut down template services
+ # no-op
+ :
+ fi
+
+ if [[ "$1" == "clean" ]]; then
+ # Remove state and transient data
+ # Remember clean.sh first calls unstack.sh
+ # no-op
+ :
+ fi
+ fi
+
+Plugin Execution Order
+======================
+
+Plugins are run after in-tree services at each of the stages
+above. For example, if you need something to happen before Keystone
+starts, you should do that at the ``post-config`` phase.
+
+Multiple plugins can be specified in your ``local.conf``. When that
+happens the plugins will be executed **in order** at each phase. This
+allows plugins to conceptually depend on each other by documenting
+to the user the order in which they must be declared. A formal
+dependency mechanism is beyond the scope of the current work.
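+
+For example, two hypothetical plugins ``foo`` and ``bar`` (placeholder
+names), where ``bar`` documents that it must come after ``foo``, would
+be declared in that order in the localrc section of ``local.conf``::
+
+    enable_plugin foo git://git.openstack.org/stackforge/devstack-plugin-foo
+    enable_plugin bar git://git.openstack.org/stackforge/devstack-plugin-bar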
System Packages
===============
@@ -205,3 +194,47 @@
- ``./devstack/files/rpms-suse/$plugin_name`` - Packages to install when
running on SUSE Linux or openSUSE.
+
+
+Using Plugins in the OpenStack Gate
+===================================
+
+For everyday use, DevStack plugins can exist in any git tree that's
+accessible on the internet. However, when using DevStack plugins in
+the OpenStack gate, they must live in projects in OpenStack's
+gerrit. Both ``openstack`` namespace and ``stackforge`` namespace are
+fine. This allows testing of the plugin as well as provides network
+isolation against upstream git repository failures (which we see often
+enough to be an issue).
+
+Ideally a plugin will be included within the ``devstack`` directory of
+the project it is testing. For example, the stackforge/ec2-api
+project has its plugin support in its own tree.
+
+However, sometimes a DevStack plugin might be used solely to
+configure a backend service that will be used by the rest of
+OpenStack, so there is no "project tree" per se. Good examples
+include: integration of backend storage (e.g. ceph or glusterfs),
+integration of SDN controllers (e.g. ovn, OpenDaylight), or
+integration of alternate RPC systems (e.g. zmq, qpid). In these cases
+the best practice is to build a dedicated
+``stackforge/devstack-plugin-FOO`` project.
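+
+Such a dedicated plugin (``FOO`` is a placeholder here, not a real
+project) is then enabled just like any other plugin::
+
+    enable_plugin devstack-plugin-FOO git://git.openstack.org/stackforge/devstack-plugin-FOO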
+
+To enable a plugin to be used in a gate job, the following lines will
+be needed in your project.yaml definition::
+
+    # Because we are testing a non-standard project, add our
+    # project repository. This makes zuul do the right
+ # reference magic for testing changes.
+ export PROJECTS="stackforge/ec2-api $PROJECTS"
+
+    # note the actual url here is somewhat irrelevant because it
+    # is cached in nodepool; however, make it a valid url for
+ # documentation purposes.
+ export DEVSTACK_LOCAL_CONFIG="enable_plugin ec2-api git://git.openstack.org/stackforge/ec2-api"
+
+See Also
+========
+
+For additional inspiration on devstack plugins you can check out the
+`Plugin Registry <plugin-registry.html>`_.
diff --git a/exercises/sahara.sh b/exercises/sahara.sh
deleted file mode 100755
index 2589e28..0000000
--- a/exercises/sahara.sh
+++ /dev/null
@@ -1,43 +0,0 @@
-#!/usr/bin/env bash
-
-# **sahara.sh**
-
-# Sanity check that Sahara started if enabled
-
-echo "*********************************************************************"
-echo "Begin DevStack Exercise: $0"
-echo "*********************************************************************"
-
-# This script exits on an error so that errors don't compound and you see
-# only the first error that occurred.
-set -o errexit
-
-# Print the commands being run so that we can see the command that triggers
-# an error. It is also useful for following allowing as the install occurs.
-set -o xtrace
-
-
-# Settings
-# ========
-
-# Keep track of the current directory
-EXERCISE_DIR=$(cd $(dirname "$0") && pwd)
-TOP_DIR=$(cd $EXERCISE_DIR/..; pwd)
-
-# Import common functions
-source $TOP_DIR/functions
-
-# Import configuration
-source $TOP_DIR/openrc
-
-# Import exercise configuration
-source $TOP_DIR/exerciserc
-
-is_service_enabled sahara || exit 55
-
-$CURL_GET http://$SERVICE_HOST:8386/ 2>/dev/null | grep -q 'Auth' || die $LINENO "Sahara API isn't functioning!"
-
-set +o xtrace
-echo "*********************************************************************"
-echo "SUCCESS: End DevStack Exercise: $0"
-echo "*********************************************************************"
diff --git a/extras.d/70-sahara.sh b/extras.d/70-sahara.sh
deleted file mode 100644
index f177766..0000000
--- a/extras.d/70-sahara.sh
+++ /dev/null
@@ -1,29 +0,0 @@
-# sahara.sh - DevStack extras script to install Sahara
-
-if is_service_enabled sahara; then
- if [[ "$1" == "source" ]]; then
- # Initial source
- source $TOP_DIR/lib/sahara
- elif [[ "$1" == "stack" && "$2" == "install" ]]; then
- echo_summary "Installing sahara"
- install_sahara
- install_python_saharaclient
- cleanup_sahara
- elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
- echo_summary "Configuring sahara"
- configure_sahara
- create_sahara_accounts
- elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
- echo_summary "Initializing sahara"
- sahara_register_images
- start_sahara
- fi
-
- if [[ "$1" == "unstack" ]]; then
- stop_sahara
- fi
-
- if [[ "$1" == "clean" ]]; then
- cleanup_sahara
- fi
-fi
diff --git a/files/apache-ceilometer.template b/files/apache-ceilometer.template
index 1c57b32..79f14c3 100644
--- a/files/apache-ceilometer.template
+++ b/files/apache-ceilometer.template
@@ -1,7 +1,7 @@
Listen %PORT%
<VirtualHost *:%PORT%>
- WSGIDaemonProcess ceilometer-api processes=2 threads=10 user=%USER% display-name=%{GROUP}
+ WSGIDaemonProcess ceilometer-api processes=2 threads=10 user=%USER% display-name=%{GROUP} %VIRTUALENV%
WSGIProcessGroup ceilometer-api
WSGIScriptAlias / %WSGIAPP%
WSGIApplicationGroup %{GLOBAL}
diff --git a/files/apache-nova-api.template b/files/apache-nova-api.template
index 70ccedd..301a3bd 100644
--- a/files/apache-nova-api.template
+++ b/files/apache-nova-api.template
@@ -1,7 +1,7 @@
Listen %PUBLICPORT%
<VirtualHost *:%PUBLICPORT%>
- WSGIDaemonProcess nova-api processes=5 threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
+ WSGIDaemonProcess nova-api processes=%APIWORKERS% threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
WSGIProcessGroup nova-api
WSGIScriptAlias / %PUBLICWSGI%
WSGIApplicationGroup %{GLOBAL}
@@ -13,4 +13,4 @@
%SSLENGINE%
%SSLCERTFILE%
%SSLKEYFILE%
-</VirtualHost>
\ No newline at end of file
+</VirtualHost>
diff --git a/files/apache-nova-ec2-api.template b/files/apache-nova-ec2-api.template
index ae4cf94..235d958 100644
--- a/files/apache-nova-ec2-api.template
+++ b/files/apache-nova-ec2-api.template
@@ -1,7 +1,7 @@
Listen %PUBLICPORT%
<VirtualHost *:%PUBLICPORT%>
- WSGIDaemonProcess nova-ec2-api processes=5 threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
+ WSGIDaemonProcess nova-ec2-api processes=%APIWORKERS% threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
WSGIProcessGroup nova-ec2-api
WSGIScriptAlias / %PUBLICWSGI%
WSGIApplicationGroup %{GLOBAL}
@@ -13,4 +13,4 @@
%SSLENGINE%
%SSLCERTFILE%
%SSLKEYFILE%
-</VirtualHost>
\ No newline at end of file
+</VirtualHost>
diff --git a/files/debs/neutron b/files/debs/neutron
index 2d69a71..b5a457e 100644
--- a/files/debs/neutron
+++ b/files/debs/neutron
@@ -9,11 +9,9 @@
postgresql-server-dev-all
python-mysqldb
python-mysql.connector
-python-qpid # NOPRIME
dnsmasq-base
dnsmasq-utils # for dhcp_release only available in dist:precise
rabbitmq-server # NOPRIME
-qpidd # NOPRIME
sqlite3
vlan
radvd # NOPRIME
diff --git a/files/debs/nova b/files/debs/nova
index 9d9acde..346b8b3 100644
--- a/files/debs/nova
+++ b/files/debs/nova
@@ -24,10 +24,8 @@
curl
genisoimage # required for config_drive
rabbitmq-server # NOPRIME
-qpidd # NOPRIME
socat # used by ajaxterm
python-libvirt # NOPRIME
python-libxml2
python-numpy # used by websockify for spice console
python-m2crypto
-python-qpid # NOPRIME
diff --git a/files/debs/qpid b/files/debs/qpid
deleted file mode 100644
index e3bbf09..0000000
--- a/files/debs/qpid
+++ /dev/null
@@ -1 +0,0 @@
-sasl2-bin # NOPRIME
diff --git a/files/rpms-suse/devlibs b/files/rpms-suse/devlibs
index bdb630a..54d13a3 100644
--- a/files/rpms-suse/devlibs
+++ b/files/rpms-suse/devlibs
@@ -1,6 +1,5 @@
libffi-devel # pyOpenSSL
libopenssl-devel # pyOpenSSL
-libxml2-devel # lxml
libxslt-devel # lxml
postgresql-devel # psycopg2
libmysqlclient-devel # MySQL-python
diff --git a/files/rpms-suse/glance b/files/rpms-suse/glance
index 0e58425..bf512de 100644
--- a/files/rpms-suse/glance
+++ b/files/rpms-suse/glance
@@ -1,2 +1 @@
-libxml2-devel
python-devel
diff --git a/files/rpms-suse/neutron b/files/rpms-suse/neutron
index e75db89..1339799 100644
--- a/files/rpms-suse/neutron
+++ b/files/rpms-suse/neutron
@@ -11,6 +11,3 @@
sudo
vlan
radvd # NOPRIME
-
-# FIXME: qpid is not part of openSUSE, those names are tentative
-qpidd # NOPRIME
diff --git a/files/rpms-suse/nova b/files/rpms-suse/nova
index 6f8aef1..039456f 100644
--- a/files/rpms-suse/nova
+++ b/files/rpms-suse/nova
@@ -22,7 +22,3 @@
sqlite3
sudo
vlan
-
-# FIXME: qpid is not part of openSUSE, those names are tentative
-python-qpid # NOPRIME
-qpidd # NOPRIME
diff --git a/files/rpms-suse/q-l3 b/files/rpms-suse/q-l3
new file mode 100644
index 0000000..a7a190c
--- /dev/null
+++ b/files/rpms-suse/q-l3
@@ -0,0 +1,2 @@
+conntrack-tools
+keepalived
diff --git a/files/rpms-suse/trove b/files/rpms-suse/trove
deleted file mode 100644
index 96f8f29..0000000
--- a/files/rpms-suse/trove
+++ /dev/null
@@ -1 +0,0 @@
-libxslt1-dev
diff --git a/files/rpms/general b/files/rpms/general
index 7b2c00a..c3f3de8 100644
--- a/files/rpms/general
+++ b/files/rpms/general
@@ -25,6 +25,7 @@
libyaml-devel
gettext # used for compiling message catalogs
net-tools
-java-1.7.0-openjdk-headless # NOPRIME rhel7,f20
+java-1.7.0-openjdk-headless # NOPRIME rhel7
java-1.8.0-openjdk-headless # NOPRIME f21,f22
pyOpenSSL # version in pip uses too much memory
+iptables-services # NOPRIME f21,f22
diff --git a/files/rpms/neutron b/files/rpms/neutron
index 8292e7b..29851be 100644
--- a/files/rpms/neutron
+++ b/files/rpms/neutron
@@ -11,7 +11,6 @@
openvswitch # NOPRIME
postgresql-devel
rabbitmq-server # NOPRIME
-qpid-cpp-server # NOPRIME
sqlite
sudo
radvd # NOPRIME
diff --git a/files/rpms/nova b/files/rpms/nova
index ebd6674..6eeb623 100644
--- a/files/rpms/nova
+++ b/files/rpms/nova
@@ -10,6 +10,7 @@
iputils
kpartx
kvm # NOPRIME
+qemu-kvm # NOPRIME
libvirt-bin # NOPRIME
libvirt-devel # NOPRIME
libvirt-python # NOPRIME
@@ -22,6 +23,5 @@
parted
polkit
rabbitmq-server # NOPRIME
-qpid-cpp-server # NOPRIME
sqlite
sudo
diff --git a/files/rpms/qpid b/files/rpms/qpid
deleted file mode 100644
index 41dd2f6..0000000
--- a/files/rpms/qpid
+++ /dev/null
@@ -1,3 +0,0 @@
-qpid-proton-c-devel # NOPRIME
-cyrus-sasl-lib # NOPRIME
-cyrus-sasl-plain # NOPRIME
diff --git a/functions-common b/functions-common
index 3a2f5f7..483b1fa 100644
--- a/functions-common
+++ b/functions-common
@@ -43,6 +43,25 @@
TRACK_DEPENDS=${TRACK_DEPENDS:-False}
+# Save these variables to .stackenv
+STACK_ENV_VARS="BASE_SQL_CONN DATA_DIR DEST ENABLED_SERVICES HOST_IP \
+ KEYSTONE_AUTH_PROTOCOL KEYSTONE_AUTH_URI KEYSTONE_SERVICE_URI \
+ LOGFILE OS_CACERT SERVICE_HOST SERVICE_PROTOCOL STACK_USER TLS_IP"
+
+
+# Saves significant environment variables to .stackenv for later use.
+# Refers to a lot of globals; only TOP_DIR and STACK_ENV_VARS are required
+# for it to function, the rest are simply saved and cause no problems if undefined.
+# save_stackenv [tag]
+function save_stackenv {
+ local tag=${1:-""}
+ # Save some values we generated for later use
+ time_stamp=$(date "+$TIMESTAMP_FORMAT")
+ echo "# $time_stamp $tag" >$TOP_DIR/.stackenv
+ for i in $STACK_ENV_VARS; do
+ echo $i=${!i} >>$TOP_DIR/.stackenv
+ done
+}
# Normalize config values to True or False
# Accepts as False: 0 no No NO false False FALSE
@@ -68,6 +87,7 @@
[[ -v "$1" ]]
}
+
# Control Functions
# =================
@@ -675,9 +695,8 @@
}
# Gets or creates group
-# Usage: get_or_create_group <groupname> [<domain> <description>]
+# Usage: get_or_create_group <groupname> <domain> [<description>]
function get_or_create_group {
- local domain=${2:+--domain ${2}}
local desc="${3:-}"
local os_url="$KEYSTONE_SERVICE_URI_V3"
# Gets group id
@@ -685,34 +704,30 @@
# Creates new group with --or-show
openstack --os-token=$OS_TOKEN --os-url=$os_url \
--os-identity-api-version=3 group create $1 \
- $domain --description "$desc" --or-show \
+ --domain $2 --description "$desc" --or-show \
-f value -c id
)
echo $group_id
}
# Gets or creates user
-# Usage: get_or_create_user <username> <password> [<email> [<domain>]]
+# Usage: get_or_create_user <username> <password> <domain> [<email>]
function get_or_create_user {
- if [[ ! -z "$3" ]]; then
- local email="--email=$3"
+ if [[ ! -z "$4" ]]; then
+ local email="--email=$4"
else
local email=""
fi
- local os_cmd="openstack"
- local domain=""
- if [[ ! -z "$4" ]]; then
- domain="--domain=$4"
- os_cmd="$os_cmd --os-url=$KEYSTONE_SERVICE_URI_V3 --os-identity-api-version=3"
- fi
# Gets user id
local user_id=$(
# Creates new user with --or-show
- $os_cmd user create \
+ openstack user create \
$1 \
--password "$2" \
+ --os-url=$KEYSTONE_SERVICE_URI_V3 \
+ --os-identity-api-version=3 \
+ --domain=$3 \
$email \
- $domain \
--or-show \
-f value -c id
)
@@ -720,18 +735,15 @@
}
# Gets or creates project
-# Usage: get_or_create_project <name> [<domain>]
+# Usage: get_or_create_project <name> <domain>
function get_or_create_project {
- # Gets project id
- local os_cmd="openstack"
- local domain=""
- if [[ ! -z "$2" ]]; then
- domain="--domain=$2"
- os_cmd="$os_cmd --os-url=$KEYSTONE_SERVICE_URI_V3 --os-identity-api-version=3"
- fi
local project_id=$(
# Creates new project with --or-show
- $os_cmd project create $1 $domain --or-show -f value -c id
+ openstack --os-url=$KEYSTONE_SERVICE_URI_V3 \
+ --os-identity-api-version=3 \
+ project create $1 \
+ --domain=$2 \
+ --or-show -f value -c id
)
echo $project_id
}
@@ -1330,7 +1342,7 @@
if is_service_enabled $service; then
# Clean up the screen window
- screen -S $SCREEN_NAME -p $service -X kill
+ screen -S $SCREEN_NAME -p $service -X kill || true
fi
}
@@ -1671,7 +1683,7 @@
# ``ENABLED_SERVICES`` list, if they are not already present.
#
# For example:
-# enable_service qpid
+# enable_service q-svc
#
# This function does not know about the special cases
# for nova, glance, and neutron built into is_service_enabled().
@@ -1734,7 +1746,6 @@
[[ ${service} == n-cell-* && ${ENABLED_SERVICES} =~ "n-cell" ]] && enabled=0
[[ ${service} == n-cpu-* && ${ENABLED_SERVICES} =~ "n-cpu" ]] && enabled=0
[[ ${service} == "nova" && ${ENABLED_SERVICES} =~ "n-" ]] && enabled=0
- [[ ${service} == "cinder" && ${ENABLED_SERVICES} =~ "c-" ]] && enabled=0
[[ ${service} == "ceilometer" && ${ENABLED_SERVICES} =~ "ceilometer-" ]] && enabled=0
[[ ${service} == "glance" && ${ENABLED_SERVICES} =~ "g-" ]] && enabled=0
[[ ${service} == "ironic" && ${ENABLED_SERVICES} =~ "ir-" ]] && enabled=0
@@ -1947,6 +1958,19 @@
fi
}
+# Test with a finite retry loop: repeatedly run a command until it
+# succeeds, dying with the given message if the timeout is exceeded.
+# test_with_retry testcmd failmsg [timeout=10] [sleep=0.5]
+#
+function test_with_retry {
+ local testcmd=$1
+ local failmsg=$2
+ local until=${3:-10}
+ local sleep=${4:-0.5}
+
+ if ! timeout $until sh -c "while ! $testcmd; do sleep $sleep; done"; then
+ die $LINENO "$failmsg"
+ fi
+}
+
# Restore xtrace
$XTRACE
diff --git a/inc/python b/inc/python
index 3d329b5..ca185f0 100644
--- a/inc/python
+++ b/inc/python
@@ -66,7 +66,8 @@
# Wrapper for ``pip install`` to set cache and proxy environment variables
# Uses globals ``OFFLINE``, ``PIP_VIRTUAL_ENV``,
-# ``PIP_UPGRADE``, ``TRACK_DEPENDS``, ``*_proxy``
+# ``PIP_UPGRADE``, ``TRACK_DEPENDS``, ``*_proxy``,
+# ``USE_CONSTRAINTS``
# pip_install package [package ...]
function pip_install {
local xtrace=$(set +o | grep xtrace)
@@ -103,6 +104,13 @@
fi
fi
+ cmd_pip="$cmd_pip install"
+
+ # Handle a constraints file, if needed.
+ if [[ "$USE_CONSTRAINTS" == "True" ]]; then
+ cmd_pip="$cmd_pip -c $REQUIREMENTS_DIR/upper-constraints.txt"
+ fi
+
local pip_version=$(python -c "import pip; \
print(pip.__version__.strip('.')[0])")
if (( pip_version<6 )); then
@@ -116,7 +124,7 @@
https_proxy="${https_proxy:-}" \
no_proxy="${no_proxy:-}" \
PIP_FIND_LINKS=$PIP_FIND_LINKS \
- $cmd_pip install $upgrade \
+ $cmd_pip $upgrade \
$@
# Also install test requirements
@@ -128,7 +136,7 @@
https_proxy=${https_proxy:-} \
no_proxy=${no_proxy:-} \
PIP_FIND_LINKS=$PIP_FIND_LINKS \
- $cmd_pip install $upgrade \
+ $cmd_pip $upgrade \
-r $test_req
fi
}
@@ -215,22 +223,24 @@
# ``errexit`` requires us to trap the exit code when the repo is changed
local update_requirements=$(cd $project_dir && git diff --exit-code >/dev/null || echo "changed")
- if [[ $update_requirements != "changed" ]]; then
- if [[ "$REQUIREMENTS_MODE" == "soft" ]]; then
- if is_in_projects_txt $project_dir; then
- (cd $REQUIREMENTS_DIR; \
- python update.py $project_dir)
- else
- # soft update projects not found in requirements project.txt
- (cd $REQUIREMENTS_DIR; \
- python update.py -s $project_dir)
- fi
- else
+ if [[ $update_requirements != "changed" && "$USE_CONSTRAINTS" == "False" ]]; then
+ if is_in_projects_txt $project_dir; then
(cd $REQUIREMENTS_DIR; \
- python update.py $project_dir)
+ ./.venv/bin/python update.py $project_dir)
+ else
+ # soft update projects not found in requirements project.txt
+ echo "$project_dir not a constrained repository, soft enforcing requirements"
+ (cd $REQUIREMENTS_DIR; \
+ ./.venv/bin/python update.py -s $project_dir)
fi
fi
+ if [ -n "$REQUIREMENTS_DIR" ]; then
+ # Constrain this package to this project directory from here on out.
+ local name=$(awk '/^name.*=/ {print $3}' $project_dir/setup.cfg)
+ $REQUIREMENTS_DIR/.venv/bin/edit-constraints $REQUIREMENTS_DIR/upper-constraints.txt -- $name "$flags $project_dir"
+ fi
+
setup_package $project_dir $flags
# We've just gone and possibly modified the user's source tree in an
diff --git a/lib/ceilometer b/lib/ceilometer
index 1f72187..ed9b933 100644
--- a/lib/ceilometer
+++ b/lib/ceilometer
@@ -78,8 +78,13 @@
CEILOMETER_AUTH_CACHE_DIR=${CEILOMETER_AUTH_CACHE_DIR:-/var/cache/ceilometer}
CEILOMETER_WSGI_DIR=${CEILOMETER_WSGI_DIR:-/var/www/ceilometer}
-# Support potential entry-points console scripts
-CEILOMETER_BIN_DIR=$(get_python_exec_prefix)
+# Support potential entry-points console scripts in VENV or not
+if [[ ${USE_VENV} = True ]]; then
+ PROJECT_VENV["ceilometer"]=${CEILOMETER_DIR}.venv
+ CEILOMETER_BIN_DIR=${PROJECT_VENV["ceilometer"]}/bin
+else
+ CEILOMETER_BIN_DIR=$(get_python_exec_prefix)
+fi
# Set up database backend
CEILOMETER_BACKEND=${CEILOMETER_BACKEND:-mysql}
@@ -165,16 +170,22 @@
local ceilometer_apache_conf=$(apache_site_config_for ceilometer)
local apache_version=$(get_apache_version)
+ local venv_path=""
# Copy proxy vhost and wsgi file
sudo cp $CEILOMETER_DIR/ceilometer/api/app.wsgi $CEILOMETER_WSGI_DIR/app
+ if [[ ${USE_VENV} = True ]]; then
+ venv_path="python-path=${PROJECT_VENV["ceilometer"]}/lib/$(python_version)/site-packages"
+ fi
+
sudo cp $FILES/apache-ceilometer.template $ceilometer_apache_conf
sudo sed -e "
s|%PORT%|$CEILOMETER_SERVICE_PORT|g;
s|%APACHE_NAME%|$APACHE_NAME|g;
s|%WSGIAPP%|$CEILOMETER_WSGI_DIR/app|g;
- s|%USER%|$STACK_USER|g
+ s|%USER%|$STACK_USER|g;
+ s|%VIRTUALENV%|$venv_path|g
" -i $ceilometer_apache_conf
}
@@ -232,12 +243,14 @@
iniset $CEILOMETER_CONF DEFAULT collector_workers $API_WORKERS
${TOP_DIR}/pkg/elasticsearch.sh start
cleanup_ceilometer
- else
+ elif [ "$CEILOMETER_BACKEND" = 'mongodb' ] ; then
iniset $CEILOMETER_CONF database alarm_connection mongodb://localhost:27017/ceilometer
iniset $CEILOMETER_CONF database event_connection mongodb://localhost:27017/ceilometer
iniset $CEILOMETER_CONF database metering_connection mongodb://localhost:27017/ceilometer
configure_mongodb
cleanup_ceilometer
+ else
+ die $LINENO "Unable to configure unknown CEILOMETER_BACKEND $CEILOMETER_BACKEND"
fi
if [[ "$VIRT_DRIVER" = 'vsphere' ]]; then
@@ -263,10 +276,8 @@
local packages=mongodb-server
if is_fedora; then
- # mongodb client + python bindings
- packages="${packages} mongodb pymongo"
- else
- packages="${packages} python-pymongo"
+ # mongodb client
+ packages="${packages} mongodb"
fi
install_package ${packages}
@@ -319,6 +330,21 @@
install_redis
fi
+ if [ "$CEILOMETER_BACKEND" = 'mongodb' ] ; then
+ pip_install_gr pymongo
+ fi
+
+ # Only install virt drivers if we're running nova compute
+ if is_service_enabled n-cpu ; then
+ if [[ "$VIRT_DRIVER" = 'libvirt' ]]; then
+ pip_install_gr libvirt-python
+ fi
+
+ if [[ "$VIRT_DRIVER" = 'vsphere' ]]; then
+ pip_install_gr oslo.vmware
+ fi
+ fi
+
if [ "$CEILOMETER_BACKEND" = 'es' ] ; then
${TOP_DIR}/pkg/elasticsearch.sh download
${TOP_DIR}/pkg/elasticsearch.sh install
@@ -340,22 +366,19 @@
git_clone_by_name "ceilometermiddleware"
setup_dev_lib "ceilometermiddleware"
else
- # BUG: this should be a pip_install_gr except it was never
- # included in global-requirements. Needs to be fixed by
- # https://bugs.launchpad.net/ceilometer/+bug/1441655
- pip_install ceilometermiddleware
+ pip_install_gr ceilometermiddleware
fi
}
# start_ceilometer() - Start running processes, including screen
function start_ceilometer {
- run_process ceilometer-acentral "ceilometer-agent-central --config-file $CEILOMETER_CONF"
- run_process ceilometer-anotification "ceilometer-agent-notification --config-file $CEILOMETER_CONF"
- run_process ceilometer-collector "ceilometer-collector --config-file $CEILOMETER_CONF"
- run_process ceilometer-aipmi "ceilometer-agent-ipmi --config-file $CEILOMETER_CONF"
+ run_process ceilometer-acentral "$CEILOMETER_BIN_DIR/ceilometer-agent-central --config-file $CEILOMETER_CONF"
+ run_process ceilometer-anotification "$CEILOMETER_BIN_DIR/ceilometer-agent-notification --config-file $CEILOMETER_CONF"
+ run_process ceilometer-collector "$CEILOMETER_BIN_DIR/ceilometer-collector --config-file $CEILOMETER_CONF"
+ run_process ceilometer-aipmi "$CEILOMETER_BIN_DIR/ceilometer-agent-ipmi --config-file $CEILOMETER_CONF"
if [[ "$CEILOMETER_USE_MOD_WSGI" == "False" ]]; then
- run_process ceilometer-api "ceilometer-api -d -v --log-dir=$CEILOMETER_API_LOG_DIR --config-file $CEILOMETER_CONF"
+ run_process ceilometer-api "$CEILOMETER_BIN_DIR/ceilometer-api -d -v --log-dir=$CEILOMETER_API_LOG_DIR --config-file $CEILOMETER_CONF"
else
enable_apache_site ceilometer
restart_apache_server
@@ -367,10 +390,10 @@
# Start the compute agent last to allow time for the collector to
# fully wake up and connect to the message bus. See bug #1355809
if [[ "$VIRT_DRIVER" = 'libvirt' ]]; then
- run_process ceilometer-acompute "ceilometer-agent-compute --config-file $CEILOMETER_CONF" $LIBVIRT_GROUP
+ run_process ceilometer-acompute "$CEILOMETER_BIN_DIR/ceilometer-agent-compute --config-file $CEILOMETER_CONF" $LIBVIRT_GROUP
fi
if [[ "$VIRT_DRIVER" = 'vsphere' ]]; then
- run_process ceilometer-acompute "ceilometer-agent-compute --config-file $CEILOMETER_CONF"
+ run_process ceilometer-acompute "$CEILOMETER_BIN_DIR/ceilometer-agent-compute --config-file $CEILOMETER_CONF"
fi
# Only die on API if it was actually intended to be turned on
@@ -381,8 +404,8 @@
fi
fi
- run_process ceilometer-alarm-notifier "ceilometer-alarm-notifier --config-file $CEILOMETER_CONF"
- run_process ceilometer-alarm-evaluator "ceilometer-alarm-evaluator --config-file $CEILOMETER_CONF"
+ run_process ceilometer-alarm-notifier "$CEILOMETER_BIN_DIR/ceilometer-alarm-notifier --config-file $CEILOMETER_CONF"
+ run_process ceilometer-alarm-evaluator "$CEILOMETER_BIN_DIR/ceilometer-alarm-evaluator --config-file $CEILOMETER_CONF"
}
# stop_ceilometer() - Stop running processes
diff --git a/lib/ceph b/lib/ceph
index 25afb6c..16dcda2 100644
--- a/lib/ceph
+++ b/lib/ceph
@@ -110,7 +110,7 @@
# check_os_support_ceph() - Check if the operating system provides a decent version of Ceph
function check_os_support_ceph {
- if [[ ! ${DISTRO} =~ (trusty|f20|f21|f22) ]]; then
+ if [[ ! ${DISTRO} =~ (trusty|f21|f22) ]]; then
echo "WARNING: your distro $DISTRO does not provide (at least) the Firefly release. Please use Ubuntu Trusty or Fedora 20 (and higher)"
if [[ "$FORCE_CEPH_INSTALL" != "yes" ]]; then
die $LINENO "If you wish to install Ceph on this distribution anyway run with FORCE_CEPH_INSTALL=yes"
diff --git a/lib/cinder b/lib/cinder
index b8cf809..8117447 100644
--- a/lib/cinder
+++ b/lib/cinder
@@ -66,6 +66,10 @@
CINDER_SERVICE_PORT_INT=${CINDER_SERVICE_PORT_INT:-18776}
CINDER_SERVICE_PROTOCOL=${CINDER_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}
+# What type of LVM device should Cinder use for the LVM backend.
+# Defaults to "default", which is thick provisioning; the other valid
+# choice is "thin", which as the name implies uses LVM thin provisioning.
+CINDER_LVM_TYPE=${CINDER_LVM_TYPE:-default}
# Default backends
# The backend format is type:name where type is one of the supported backend
@@ -432,12 +436,13 @@
_configure_tgt_for_config_d
if is_ubuntu; then
sudo service tgt restart
- elif is_fedora || is_suse; then
- restart_service tgtd
+ elif is_suse; then
+ # NOTE(dmllr): workaround restart bug
+ # https://bugzilla.suse.com/show_bug.cgi?id=934642
+ stop_service tgtd
+ start_service tgtd
else
- # note for other distros: unstack.sh also uses the tgt/tgtd service
- # name, and would need to be adjusted too
- exit_distro_not_supported "restarting tgt"
+ restart_service tgtd
fi
# NOTE(gfidente): ensure tgtd is running in debug mode
sudo tgtadm --mode system --op update --name debug --value on
diff --git a/lib/cinder_backends/lvm b/lib/cinder_backends/lvm
index 35ad209..411b82c 100644
--- a/lib/cinder_backends/lvm
+++ b/lib/cinder_backends/lvm
@@ -51,6 +51,7 @@
iniset $CINDER_CONF $be_name volume_driver "cinder.volume.drivers.lvm.LVMVolumeDriver"
iniset $CINDER_CONF $be_name volume_group $VOLUME_GROUP_NAME-$be_name
iniset $CINDER_CONF $be_name iscsi_helper "$CINDER_ISCSI_HELPER"
+ iniset $CINDER_CONF $be_name lvm_type "$CINDER_LVM_TYPE"
if [[ "$CINDER_SECURE_DELETE" == "False" ]]; then
iniset $CINDER_CONF $be_name volume_clear none
diff --git a/lib/databases/mysql b/lib/databases/mysql
index 7cd2856..0e477ca 100644
--- a/lib/databases/mysql
+++ b/lib/databases/mysql
@@ -11,7 +11,7 @@
MY_XTRACE=$(set +o | grep xtrace)
set +o xtrace
-MYSQL_DRIVER=${MYSQL_DRIVER:-MySQL-python}
+MYSQL_DRIVER=${MYSQL_DRIVER:-PyMySQL}
# Force over to pymysql driver by default if we are using it.
if is_service_enabled mysql; then
if [[ "$MYSQL_DRIVER" == "PyMySQL" ]]; then
@@ -95,7 +95,10 @@
sudo bash -c "source $TOP_DIR/functions && \
iniset $my_conf mysqld bind-address 0.0.0.0 && \
iniset $my_conf mysqld sql_mode STRICT_ALL_TABLES && \
- iniset $my_conf mysqld default-storage-engine InnoDB"
+          iniset $my_conf mysqld default-storage-engine InnoDB && \
+          iniset $my_conf mysqld max_connections 1024 && \
+          iniset $my_conf mysqld query_cache_type OFF && \
+ iniset $my_conf mysqld query_cache_size 0"
if [[ "$DATABASE_QUERY_LOGGING" == "True" ]]; then
@@ -165,6 +168,8 @@
pip_install_gr $MYSQL_DRIVER
if [[ "$MYSQL_DRIVER" == "MySQL-python" ]]; then
ADDITIONAL_VENV_PACKAGES+=",MySQL-python"
+ elif [[ "$MYSQL_DRIVER" == "PyMySQL" ]]; then
+ ADDITIONAL_VENV_PACKAGES+=",PyMySQL"
fi
}
diff --git a/lib/glance b/lib/glance
index 4e1bd24..4dbce9f 100644
--- a/lib/glance
+++ b/lib/glance
@@ -56,6 +56,7 @@
GLANCE_CACHE_CONF=$GLANCE_CONF_DIR/glance-cache.conf
GLANCE_POLICY_JSON=$GLANCE_CONF_DIR/policy.json
GLANCE_SCHEMA_JSON=$GLANCE_CONF_DIR/schema-image.json
+GLANCE_SWIFT_STORE_CONF=$GLANCE_CONF_DIR/glance-swift-store.conf
if is_ssl_enabled_service "glance" || is_service_enabled tls-proxy; then
GLANCE_SERVICE_PROTOCOL="https"
@@ -112,9 +113,7 @@
iniset $GLANCE_REGISTRY_CONF DEFAULT workers "$API_WORKERS"
iniset $GLANCE_REGISTRY_CONF paste_deploy flavor keystone
configure_auth_token_middleware $GLANCE_REGISTRY_CONF glance $GLANCE_AUTH_CACHE_DIR/registry
- if is_service_enabled qpid || [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; then
- iniset $GLANCE_REGISTRY_CONF DEFAULT notification_driver messaging
- fi
+ iniset $GLANCE_REGISTRY_CONF DEFAULT notification_driver messaging
iniset_rpc_backend glance $GLANCE_REGISTRY_CONF
cp $GLANCE_DIR/etc/glance-api.conf $GLANCE_API_CONF
@@ -125,9 +124,7 @@
iniset $GLANCE_API_CONF DEFAULT image_cache_dir $GLANCE_CACHE_DIR/
iniset $GLANCE_API_CONF paste_deploy flavor keystone+cachemanagement
configure_auth_token_middleware $GLANCE_API_CONF glance $GLANCE_AUTH_CACHE_DIR/api
- if is_service_enabled qpid || [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; then
- iniset $GLANCE_API_CONF DEFAULT notification_driver messaging
- fi
+ iniset $GLANCE_API_CONF DEFAULT notification_driver messaging
iniset_rpc_backend glance $GLANCE_API_CONF
if [ "$VIRT_DRIVER" = 'xenserver' ]; then
iniset $GLANCE_API_CONF DEFAULT container_formats "ami,ari,aki,bare,ovf,tgz"
@@ -145,15 +142,25 @@
# Store the images in swift if enabled.
if is_service_enabled s-proxy; then
iniset $GLANCE_API_CONF glance_store default_store swift
- iniset $GLANCE_API_CONF glance_store swift_store_auth_address $KEYSTONE_SERVICE_URI/v2.0/
- iniset $GLANCE_API_CONF glance_store swift_store_user $SERVICE_TENANT_NAME:glance-swift
- iniset $GLANCE_API_CONF glance_store swift_store_key $SERVICE_PASSWORD
iniset $GLANCE_API_CONF glance_store swift_store_create_container_on_put True
+
+ iniset $GLANCE_API_CONF glance_store swift_store_config_file $GLANCE_SWIFT_STORE_CONF
+ iniset $GLANCE_API_CONF glance_store default_swift_reference ref1
iniset $GLANCE_API_CONF glance_store stores "file, http, swift"
+
+ iniset $GLANCE_SWIFT_STORE_CONF ref1 user $SERVICE_TENANT_NAME:glance-swift
+ iniset $GLANCE_SWIFT_STORE_CONF ref1 key $SERVICE_PASSWORD
+ iniset $GLANCE_SWIFT_STORE_CONF ref1 auth_address $KEYSTONE_SERVICE_URI/v2.0/
+
+ # Commenting these out is not strictly necessary, but it is confusing to leave stale values in the conf
+ inicomment $GLANCE_API_CONF glance_store swift_store_user
+ inicomment $GLANCE_API_CONF glance_store swift_store_key
+ inicomment $GLANCE_API_CONF glance_store swift_store_auth_address
fi
if is_service_enabled tls-proxy; then
iniset $GLANCE_API_CONF DEFAULT bind_port $GLANCE_SERVICE_PORT_INT
+ iniset $GLANCE_API_CONF DEFAULT public_endpoint $GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT
iniset $GLANCE_REGISTRY_CONF DEFAULT bind_port $GLANCE_REGISTRY_PORT_INT
fi
@@ -253,7 +260,7 @@
if is_service_enabled s-proxy; then
local glance_swift_user=$(get_or_create_user "glance-swift" \
- "$SERVICE_PASSWORD" "glance-swift@example.com")
+ "$SERVICE_PASSWORD" "default" "glance-swift@example.com")
get_or_add_user_project_role "ResellerAdmin" $glance_swift_user $SERVICE_TENANT_NAME
fi
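The lib/glance hunks above move the swift credentials out of glance-api.conf into a separate reference file named by `default_swift_reference`. A sketch of what the resulting glance-swift-store.conf looks like; the `ref1` section name comes from the diff, while the user, key, and auth address values below are fabricated stand-ins for `$SERVICE_TENANT_NAME`, `$SERVICE_PASSWORD`, and `$KEYSTONE_SERVICE_URI`:

```shell
# Write an illustrative glance-swift-store.conf; the values are
# made-up stand-ins, not real credentials.
GLANCE_SWIFT_STORE_CONF=./glance-swift-store.conf
cat > "$GLANCE_SWIFT_STORE_CONF" <<'EOF'
[ref1]
user = service:glance-swift
key = secretservice
auth_address = http://127.0.0.1:5000/v2.0/
EOF

# glance-api.conf then points at this file instead of carrying the
# credentials inline:
#   [glance_store]
#   swift_store_config_file = /etc/glance/glance-swift-store.conf
#   default_swift_reference = ref1
cat "$GLANCE_SWIFT_STORE_CONF"
```

This is why the old `swift_store_user`/`swift_store_key`/`swift_store_auth_address` options can be commented out of glance-api.conf: the store resolves them through the reference file instead.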
diff --git a/lib/infra b/lib/infra
index c825b4e..3d68e45 100644
--- a/lib/infra
+++ b/lib/infra
@@ -29,8 +29,17 @@
# install_infra() - Collect source and prepare
function install_infra {
+ local PIP_VIRTUAL_ENV="$REQUIREMENTS_DIR/.venv"
# bring down global requirements
git_clone $REQUIREMENTS_REPO $REQUIREMENTS_DIR $REQUIREMENTS_BRANCH
+ [ ! -d $PIP_VIRTUAL_ENV ] && virtualenv $PIP_VIRTUAL_ENV
+ # We don't care about testing git pbr in the requirements venv.
+ PIP_VIRTUAL_ENV=$PIP_VIRTUAL_ENV pip_install -U pbr
+ PIP_VIRTUAL_ENV=$PIP_VIRTUAL_ENV pip_install $REQUIREMENTS_DIR
+
+ # Unset the PIP_VIRTUAL_ENV so that PBR does not end up trapped
+ # down the VENV well
+ unset PIP_VIRTUAL_ENV
# Install pbr
if use_library_from_git "pbr"; then
diff --git a/lib/ironic b/lib/ironic
index 4984be1..cff20c9 100644
--- a/lib/ironic
+++ b/lib/ironic
@@ -366,7 +366,7 @@
fi
iniset $IRONIC_CONF_FILE glance swift_endpoint_url http://${HOST_IP}:${SWIFT_DEFAULT_BIND_PORT:-8080}
iniset $IRONIC_CONF_FILE glance swift_api_version v1
- local tenant_id=$(get_or_create_project $SERVICE_TENANT_NAME)
+ local tenant_id=$(get_or_create_project $SERVICE_TENANT_NAME default)
iniset $IRONIC_CONF_FILE glance swift_account AUTH_${tenant_id}
iniset $IRONIC_CONF_FILE glance swift_container glance
iniset $IRONIC_CONF_FILE glance swift_temp_url_duration 3600
@@ -658,6 +658,10 @@
# agent ramdisk gets instance image from swift
sudo iptables -I INPUT -d $HOST_IP -p tcp --dport ${SWIFT_DEFAULT_BIND_PORT:-8080} -j ACCEPT || true
fi
+
+ if [[ "$IRONIC_IPXE_ENABLED" == "True" ]] ; then
+ sudo iptables -I INPUT -d $HOST_IP -p tcp --dport $IRONIC_HTTP_PORT -j ACCEPT || true
+ fi
}
function configure_tftpd {
diff --git a/lib/keystone b/lib/keystone
index 7a949cf..c33d466 100644
--- a/lib/keystone
+++ b/lib/keystone
@@ -357,13 +357,13 @@
function create_keystone_accounts {
# admin
- local admin_tenant=$(get_or_create_project "admin")
- local admin_user=$(get_or_create_user "admin" "$ADMIN_PASSWORD")
+ local admin_tenant=$(get_or_create_project "admin" default)
+ local admin_user=$(get_or_create_user "admin" "$ADMIN_PASSWORD" default)
local admin_role=$(get_or_create_role "admin")
get_or_add_user_project_role $admin_role $admin_user $admin_tenant
# Create service project/role
- get_or_create_project "$SERVICE_TENANT_NAME"
+ get_or_create_project "$SERVICE_TENANT_NAME" default
# Service role, so service users do not have to be admins
get_or_create_role service
@@ -382,12 +382,12 @@
local another_role=$(get_or_create_role "anotherrole")
# invisible tenant - admin can't see this one
- local invis_tenant=$(get_or_create_project "invisible_to_admin")
+ local invis_tenant=$(get_or_create_project "invisible_to_admin" default)
# demo
- local demo_tenant=$(get_or_create_project "demo")
+ local demo_tenant=$(get_or_create_project "demo" default)
local demo_user=$(get_or_create_user "demo" \
- "$ADMIN_PASSWORD" "demo@example.com")
+ "$ADMIN_PASSWORD" "default" "demo@example.com")
get_or_add_user_project_role $member_role $demo_user $demo_tenant
get_or_add_user_project_role $admin_role $admin_user $demo_tenant
@@ -426,7 +426,7 @@
function create_service_user {
local role=${2:-service}
- local user=$(get_or_create_user "$1" "$SERVICE_PASSWORD")
+ local user=$(get_or_create_user "$1" "$SERVICE_PASSWORD" default)
get_or_add_user_project_role "$role" "$user" "$SERVICE_TENANT_NAME"
}
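The lib/keystone hunks above all follow one pattern: a `default` domain argument is threaded through the `get_or_create_*` helpers as an extra positional parameter. Those helpers wrap the openstack CLI; the sketch below is a hedged guess at the shape of a domain-aware `get_or_create_user` (the real helper lives in DevStack's functions-common and differs in detail), with the `openstack` command stubbed out so the argument handling can be inspected without a cloud:

```shell
# Stub the openstack CLI: print the command instead of running it.
function openstack { echo "openstack $@"; }

# Hypothetical domain-aware helper: $1=name, $2=password, $3=domain,
# $4=optional email. Not DevStack's exact implementation.
function get_or_create_user {
    local email_opt=""
    if [ -n "$4" ]; then
        email_opt="--email $4"
    fi
    openstack user create "$1" --password "$2" --domain "$3" \
        $email_opt --or-show -f value -c id
}

get_or_create_user "glance-swift" "pass" "default" "glance-swift@example.com"
```

The point of the diff is simply that callers must now pass the domain explicitly, which is why every call site gains a `default` argument.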
diff --git a/lib/lvm b/lib/lvm
index 1fe2683..8afd543 100644
--- a/lib/lvm
+++ b/lib/lvm
@@ -78,7 +78,7 @@
}
-# _create_volume_group creates default volume group
+# _create_lvm_volume_group creates default volume group
#
# Usage: _create_lvm_volume_group() $vg $size
function _create_lvm_volume_group {
diff --git a/lib/neutron-legacy b/lib/neutron-legacy
index 5681743..acc2851 100644
--- a/lib/neutron-legacy
+++ b/lib/neutron-legacy
@@ -463,6 +463,8 @@
fi
_configure_neutron_debug_command
+
+ iniset $NEUTRON_CONF DEFAULT api_workers "$API_WORKERS"
}
function create_nova_conf_neutron {
@@ -694,9 +696,10 @@
if is_ssl_enabled_service "neutron"; then
ssl_ca="--ca-certificate=${SSL_BUNDLE_FILE}"
fi
- if ! timeout $SERVICE_TIMEOUT sh -c "while ! wget ${ssl_ca} --no-proxy -q -O- $service_protocol://$Q_HOST:$service_port; do sleep 1; done"; then
- die $LINENO "Neutron did not start"
- fi
+
+ local testcmd="wget ${ssl_ca} --no-proxy -q -O- $service_protocol://$Q_HOST:$service_port"
+ test_with_retry "$testcmd" "Neutron did not start" $SERVICE_TIMEOUT
+
# Start proxy if enabled
if is_service_enabled tls-proxy; then
start_tls_proxy '*' $Q_PORT $Q_HOST $Q_PORT_INT &
@@ -719,7 +722,7 @@
sudo ip addr del $IP dev $PUBLIC_INTERFACE
sudo ip addr add $IP dev $OVS_PHYSICAL_BRIDGE
done
- sudo route add -net $FIXED_RANGE gw $NETWORK_GATEWAY dev $OVS_PHYSICAL_BRIDGE
+ sudo ip route replace $FIXED_RANGE via $NETWORK_GATEWAY dev $OVS_PHYSICAL_BRIDGE
fi
fi
@@ -824,6 +827,10 @@
neutron_ovs_base_cleanup
fi
+ if [[ $Q_AGENT == "linuxbridge" ]]; then
+ neutron_lb_cleanup
+ fi
+
# delete all namespaces created by neutron
for ns in $(sudo ip netns list | grep -o -E '(qdhcp|qrouter|qlbaas|fip|snat)-[0-9a-f-]*'); do
sudo ip netns delete ${ns}
@@ -1260,16 +1267,26 @@
# This logic is specific to using the l3-agent for layer 3
if is_service_enabled q-l3; then
# Configure and enable public bridge
+ local ext_gw_interface="none"
if is_neutron_ovs_base_plugin && [[ "$Q_USE_NAMESPACE" = "True" ]]; then
- local ext_gw_interface=$(_neutron_get_ext_gw_interface)
+ ext_gw_interface=$(_neutron_get_ext_gw_interface)
+ elif [[ "$Q_AGENT" = "linuxbridge" ]]; then
+ # Search for the brq device that the neutron router and the network
+ # for $FIXED_RANGE will be using.
+ # e.g. brq3592e767-da for NET_ID 3592e767-da66-4bcb-9bec-cdb03cd96102
+ ext_gw_interface=brq${EXT_NET_ID:0:11}
+ fi
+ if [[ "$ext_gw_interface" != "none" ]]; then
local cidr_len=${FLOATING_RANGE#*/}
+ local testcmd="ip -o link | grep -q $ext_gw_interface"
+ test_with_retry "$testcmd" "$ext_gw_interface creation failed"
if [[ $(ip addr show dev $ext_gw_interface | grep -c $ext_gw_ip) == 0 && ( $Q_USE_PROVIDERNET_FOR_PUBLIC == "False" || $Q_USE_PUBLIC_VETH == "True" ) ]]; then
sudo ip addr add $ext_gw_ip/$cidr_len dev $ext_gw_interface
sudo ip link set $ext_gw_interface up
fi
ROUTER_GW_IP=`neutron port-list -c fixed_ips -c device_owner | grep router_gateway | awk -F '"' -v subnet_id=$PUB_SUBNET_ID '$4 == subnet_id { print $8; }'`
die_if_not_set $LINENO ROUTER_GW_IP "Failure retrieving ROUTER_GW_IP"
- sudo route add -net $FIXED_RANGE gw $ROUTER_GW_IP
+ sudo ip route replace $FIXED_RANGE via $ROUTER_GW_IP
fi
_neutron_set_router_id
fi
@@ -1304,7 +1321,7 @@
# Configure interface for public bridge
sudo ip -6 addr add $ipv6_ext_gw_ip/$ipv6_cidr_len dev $ext_gw_interface
- sudo ip -6 route add $FIXED_RANGE_V6 via $IPV6_ROUTER_GW_IP dev $ext_gw_interface
+ sudo ip -6 route replace $FIXED_RANGE_V6 via $IPV6_ROUTER_GW_IP dev $ext_gw_interface
fi
_neutron_set_router_id
fi
@@ -1374,9 +1391,8 @@
local timeout_sec=$5
local probe_cmd = ""
probe_cmd=`_get_probe_cmd_prefix $from_net`
- if ! timeout $timeout_sec sh -c "while ! $probe_cmd ssh -o StrictHostKeyChecking=no -i $key_file ${user}@$ip echo success; do sleep 1; done"; then
- die $LINENO "server didn't become ssh-able!"
- fi
+ local testcmd="$probe_cmd ssh -o StrictHostKeyChecking=no -i $key_file ${user}@$ip echo success"
+ test_with_retry "$testcmd" "server $ip didn't become ssh-able" $timeout_sec
}
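The hunks above replace open-coded `timeout ... while` loops with `test_with_retry`. DevStack defines that helper elsewhere; the following is a minimal sketch inferred from the call sites (command, failure message, optional timeout), not the project's actual implementation:

```shell
# Minimal retry helper matching the call sites above: run the command
# once a second until it succeeds or the timeout (third argument,
# defaulting to 30s in this sketch) expires, then report failure.
function test_with_retry {
    local testcmd=$1
    local failmsg=$2
    local timeout_sec=${3:-30}
    if ! timeout "$timeout_sec" sh -c "while ! $testcmd; do sleep 1; done"; then
        echo "ERROR: $failmsg" >&2
        return 1
    fi
}

test_with_retry "true" "should not happen" 5 && echo "probe succeeded"
```

Factoring the loop out this way keeps each caller down to building a probe command string and picking an error message.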
# Neutron 3rd party programs
diff --git a/lib/neutron_plugins/linuxbridge_agent b/lib/neutron_plugins/linuxbridge_agent
old mode 100644
new mode 100755
index b348af9..fefc1c3
--- a/lib/neutron_plugins/linuxbridge_agent
+++ b/lib/neutron_plugins/linuxbridge_agent
@@ -9,6 +9,20 @@
function neutron_lb_cleanup {
sudo brctl delbr $PUBLIC_BRIDGE
+
+ if [[ "$Q_ML2_TENANT_NETWORK_TYPE" = "vxlan" ]]; then
+ for port in $(sudo brctl show | grep -o -e [a-zA-Z\-]*tap[0-9a-f\-]* -e vxlan-[0-9a-f\-]*); do
+ sudo ip link delete $port
+ done
+ elif [[ "$Q_ML2_TENANT_NETWORK_TYPE" = "vlan" ]]; then
+ for port in $(sudo brctl show | grep -o -e [a-zA-Z\-]*tap[0-9a-f\-]* -e ${LB_PHYSICAL_INTERFACE}\.[0-9a-f\-]*); do
+ sudo ip link delete $port
+ done
+ fi
+ for bridge in $(sudo brctl show |grep -o -e brq[0-9a-f\-]*); do
+ sudo ip link set $bridge down
+ sudo brctl delbr $bridge
+ done
}
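The cleanup loops added above scrape device names out of `brctl show` output with `grep -o`. Against a fabricated listing, the brq-bridge pattern behaves like this (the bridge and interface names below are invented for illustration):

```shell
# Fabricated `brctl show` output; only the brq* bridge created by the
# linuxbridge agent should match the extraction pattern.
sample='bridge name     bridge id               STP enabled     interfaces
brq3592e767-da  8000.02429a9bcb1a       no              tap1234abcd-ef
br-int          8000.02429a9bcb1b       no'

# Same pattern as the neutron_lb_cleanup loop above: pull out brq
# bridge names, one per line.
echo "$sample" | grep -o -e 'brq[0-9a-f\-]*'
```

Each extracted name is then fed to `ip link set ... down` and `brctl delbr`, so stray agent-created bridges do not survive a re-stack.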
function is_neutron_ovs_base_plugin {
diff --git a/lib/neutron_plugins/openvswitch_agent b/lib/neutron_plugins/openvswitch_agent
index 1d24f3b..2a05e2d 100644
--- a/lib/neutron_plugins/openvswitch_agent
+++ b/lib/neutron_plugins/openvswitch_agent
@@ -59,7 +59,7 @@
OVS_BRIDGE_MAPPINGS=$PHYSICAL_NETWORK:$OVS_PHYSICAL_BRIDGE
# Configure bridge manually with physical interface as port for multi-node
- sudo ovs-vsctl --no-wait -- --may-exist add-br $OVS_PHYSICAL_BRIDGE
+ _neutron_ovs_base_add_bridge $OVS_PHYSICAL_BRIDGE
fi
if [[ "$OVS_BRIDGE_MAPPINGS" != "" ]]; then
iniset /$Q_PLUGIN_CONF_FILE ovs bridge_mappings $OVS_BRIDGE_MAPPINGS
@@ -92,7 +92,7 @@
# Set up domU's L2 agent:
# Create a bridge "br-$GUEST_INTERFACE_DEFAULT"
- sudo ovs-vsctl --no-wait -- --may-exist add-br "br-$GUEST_INTERFACE_DEFAULT"
+ _neutron_ovs_base_add_bridge "br-$GUEST_INTERFACE_DEFAULT"
# Add $GUEST_INTERFACE_DEFAULT to that bridge
sudo ovs-vsctl add-port "br-$GUEST_INTERFACE_DEFAULT" $GUEST_INTERFACE_DEFAULT
diff --git a/lib/neutron_plugins/ovs_base b/lib/neutron_plugins/ovs_base
index 81561d3..4e750f0 100644
--- a/lib/neutron_plugins/ovs_base
+++ b/lib/neutron_plugins/ovs_base
@@ -15,13 +15,21 @@
return 0
}
+function _neutron_ovs_base_add_bridge {
+ local bridge=$1
+ local addbr_cmd="sudo ovs-vsctl --no-wait -- --may-exist add-br $bridge"
+
+ if [ "$OVS_DATAPATH_TYPE" != "" ] ; then
+ addbr_cmd="$addbr_cmd -- set Bridge $bridge datapath_type=${OVS_DATAPATH_TYPE}"
+ fi
+
+ $addbr_cmd
+}
+
function _neutron_ovs_base_setup_bridge {
local bridge=$1
neutron-ovs-cleanup
- sudo ovs-vsctl --no-wait -- --may-exist add-br $bridge
- if [[ $OVS_DATAPATH_TYPE != "" ]]; then
- sudo ovs-vsctl set Bridge $bridge datapath_type=${OVS_DATAPATH_TYPE}
- fi
+ _neutron_ovs_base_add_bridge $bridge
sudo ovs-vsctl --no-wait br-set-external-id $bridge bridge-id $bridge
}
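The new `_neutron_ovs_base_add_bridge` above folds the datapath-type setting into a single `ovs-vsctl` invocation instead of a separate `set Bridge` call. A dry-run of the expansion, with `sudo` stubbed to print the composed command rather than execute it (no running OVS needed; the `netdev` value is just an example):

```shell
# Stub sudo so the composed command is printed instead of executed.
function sudo { echo "$@"; }

OVS_DATAPATH_TYPE=netdev   # e.g. a userspace datapath

function _neutron_ovs_base_add_bridge {
    local bridge=$1
    local addbr_cmd="sudo ovs-vsctl --no-wait -- --may-exist add-br $bridge"
    if [ "$OVS_DATAPATH_TYPE" != "" ] ; then
        addbr_cmd="$addbr_cmd -- set Bridge $bridge datapath_type=${OVS_DATAPATH_TYPE}"
    fi
    $addbr_cmd
}

_neutron_ovs_base_add_bridge br-ex
```

Doing it in one transaction (`--`-separated sub-commands) avoids a window where the bridge exists with the default datapath type.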
@@ -92,7 +100,7 @@
sudo ip link set $Q_PUBLIC_VETH_EX up
sudo ip addr flush dev $Q_PUBLIC_VETH_EX
else
- sudo ovs-vsctl -- --may-exist add-br $PUBLIC_BRIDGE
+ _neutron_ovs_base_add_bridge $PUBLIC_BRIDGE
sudo ovs-vsctl br-set-external-id $PUBLIC_BRIDGE bridge-id $PUBLIC_BRIDGE
fi
}
diff --git a/lib/neutron_plugins/services/loadbalancer b/lib/neutron_plugins/services/loadbalancer
index f465cc9..34190f9 100644
--- a/lib/neutron_plugins/services/loadbalancer
+++ b/lib/neutron_plugins/services/loadbalancer
@@ -42,7 +42,7 @@
function neutron_lbaas_stop {
pids=$(ps aux | awk '/haproxy/ { print $2 }')
- [ ! -z "$pids" ] && sudo kill $pids
+ [ ! -z "$pids" ] && sudo kill $pids || true
}
# Restore xtrace
diff --git a/lib/neutron_plugins/vmware_dvs b/lib/neutron_plugins/vmware_dvs
new file mode 100644
index 0000000..587d5a6
--- /dev/null
+++ b/lib/neutron_plugins/vmware_dvs
@@ -0,0 +1,10 @@
+#!/bin/bash
+
+# This file is needed so Q_PLUGIN=vmware_dvs will work.
+
+# FIXME(salv-orlando): This function should not be here, but unfortunately
+# devstack calls it before the external plugins are fetched
+function has_neutron_plugin_security_group {
+ # 0 means True here
+ return 0
+}
diff --git a/lib/neutron_plugins/vmware_nsx_v3 b/lib/neutron_plugins/vmware_nsx_v3
new file mode 100644
index 0000000..6d8a6e6
--- /dev/null
+++ b/lib/neutron_plugins/vmware_nsx_v3
@@ -0,0 +1,10 @@
+#!/bin/bash
+
+# This file is needed so Q_PLUGIN=vmware_nsx_v3 will work.
+
+# FIXME(salv-orlando): This function should not be here, but unfortunately
+# devstack calls it before the external plugins are fetched
+function has_neutron_plugin_security_group {
+ # 0 means True here
+ return 0
+}
diff --git a/lib/nova b/lib/nova
index da288d3..88b336a 100644
--- a/lib/nova
+++ b/lib/nova
@@ -53,6 +53,7 @@
NOVA_CELLS_CONF=$NOVA_CONF_DIR/nova-cells.conf
NOVA_FAKE_CONF=$NOVA_CONF_DIR/nova-fake.conf
NOVA_CELLS_DB=${NOVA_CELLS_DB:-nova_cell}
+NOVA_API_DB=${NOVA_API_DB:-nova_api}
NOVA_API_PASTE_INI=${NOVA_API_PASTE_INI:-$NOVA_CONF_DIR/api-paste.ini}
# NOVA_API_VERSION valid options
@@ -231,6 +232,10 @@
#if is_service_enabled n-cpu && [[ -r $NOVA_PLUGINS/hypervisor-$VIRT_DRIVER ]]; then
# cleanup_nova_hypervisor
#fi
+
+ if [ "$NOVA_USE_MOD_WSGI" == "True" ]; then
+ _cleanup_nova_apache_wsgi
+ fi
}
# _cleanup_nova_apache_wsgi() - Remove wsgi files, disable and remove apache vhost file
@@ -276,6 +281,7 @@
s|%SSLKEYFILE%|$nova_keyfile|g;
s|%USER%|$STACK_USER|g;
s|%VIRTUALENV%|$venv_path|g
+ s|%APIWORKERS%|$API_WORKERS|g
" -i $nova_apache_conf
sudo cp $FILES/apache-nova-ec2-api.template $nova_ec2_apache_conf
@@ -288,6 +294,7 @@
s|%SSLKEYFILE%|$nova_keyfile|g;
s|%USER%|$STACK_USER|g;
s|%VIRTUALENV%|$venv_path|g
+ s|%APIWORKERS%|$API_WORKERS|g
" -i $nova_ec2_apache_conf
}
@@ -471,6 +478,7 @@
iniset $NOVA_CONF DEFAULT s3_port "$S3_SERVICE_PORT"
iniset $NOVA_CONF DEFAULT my_ip "$HOST_IP"
iniset $NOVA_CONF database connection `database_connection_url nova`
+ iniset $NOVA_CONF api_database connection `database_connection_url nova_api`
iniset $NOVA_CONF DEFAULT instance_name_template "${INSTANCE_NAME_PREFIX}%08x"
iniset $NOVA_CONF osapi_v3 enabled "True"
@@ -489,6 +497,7 @@
if is_service_enabled tls-proxy; then
# Set the service port for a proxy to take the original
iniset $NOVA_CONF DEFAULT osapi_compute_listen_port "$NOVA_SERVICE_PORT_INT"
+ iniset $NOVA_CONF DEFAULT osapi_compute_link_prefix $NOVA_SERVICE_PROTOCOL://$NOVA_SERVICE_HOST:$NOVA_SERVICE_PORT
fi
configure_auth_token_middleware $NOVA_CONF nova $NOVA_AUTH_CACHE_DIR
@@ -674,6 +683,9 @@
if is_service_enabled n-cell; then
recreate_database $NOVA_CELLS_DB
fi
+
+ recreate_database $NOVA_API_DB
+ $NOVA_BIN_DIR/nova-manage api_db sync
fi
create_nova_cache_dir
@@ -755,8 +767,8 @@
enable_apache_site nova-api
enable_apache_site nova-ec2-api
restart_apache_server
- tail_log nova /var/log/$APACHE_NAME/nova-api.log
- tail_log nova /var/log/$APACHE_NAME/nova-ec2-api.log
+ tail_log nova-api /var/log/$APACHE_NAME/nova-api.log
+ tail_log nova-ec2-api /var/log/$APACHE_NAME/nova-ec2-api.log
else
run_process n-api "$NOVA_BIN_DIR/nova-api"
fi
diff --git a/lib/nova_plugins/functions-libvirt b/lib/nova_plugins/functions-libvirt
index 96d8a44..5525cfd 100755
--- a/lib/nova_plugins/functions-libvirt
+++ b/lib/nova_plugins/functions-libvirt
@@ -28,16 +28,21 @@
else
install_package qemu-kvm
install_package libguestfs0
- install_package python-guestfs
fi
install_package libvirt-bin libvirt-dev
pip_install_gr libvirt-python
#pip_install_gr <there-si-no-guestfs-in-pypi>
elif is_fedora || is_suse; then
install_package kvm
+ # there is a dependency issue with kvm (which is really just a
+ # wrapper to qemu-system-x86) that leaves some bios files out,
+ # so install qemu-kvm (which shouldn't strictly be needed, as
+ # everything has been merged into qemu-system-x86) to bring in
+ # the right packages. see
+ # https://bugzilla.redhat.com/show_bug.cgi?id=1235890
+ install_package qemu-kvm
install_package libvirt libvirt-devel
pip_install_gr libvirt-python
- install_package python-libguestfs
fi
}
diff --git a/lib/nova_plugins/hypervisor-libvirt b/lib/nova_plugins/hypervisor-libvirt
index a6a87f9..f70b21a 100644
--- a/lib/nova_plugins/hypervisor-libvirt
+++ b/lib/nova_plugins/hypervisor-libvirt
@@ -26,7 +26,7 @@
# --------
# File injection is disabled by default in Nova. This will turn it back on.
-ENABLE_FILE_INJECTION=${ENABLE_FILE_INJECTION:-False}
+ENABLE_FILE_INJECTION=$(trueorfalse False ENABLE_FILE_INJECTION)
# Entry Points
@@ -60,7 +60,6 @@
iniset $NOVA_CONF DEFAULT vnc_enabled "false"
fi
- ENABLE_FILE_INJECTION=$(trueorfalse False ENABLE_FILE_INJECTION)
if [[ "$ENABLE_FILE_INJECTION" = "True" ]] ; then
# When libguestfs is available for file injection, enable using
# libguestfs to inspect the image and figure out the proper
@@ -97,6 +96,14 @@
yum_install libcgroup-tools
fi
fi
+
+ if [[ "$ENABLE_FILE_INJECTION" = "True" ]] ; then
+ if is_ubuntu; then
+ install_package python-guestfs
+ elif is_fedora || is_suse; then
+ install_package python-libguestfs
+ fi
+ fi
}
# start_nova_hypervisor - Start any required external services
diff --git a/lib/oslo b/lib/oslo
index d9688a0..123572c 100644
--- a/lib/oslo
+++ b/lib/oslo
@@ -22,8 +22,11 @@
# Defaults
# --------
+GITDIR["automaton"]=$DEST/automaton
GITDIR["cliff"]=$DEST/cliff
GITDIR["debtcollector"]=$DEST/debtcollector
+GITDIR["futurist"]=$DEST/futurist
+GITDIR["oslo.cache"]=$DEST/oslo.cache
GITDIR["oslo.concurrency"]=$DEST/oslo.concurrency
GITDIR["oslo.config"]=$DEST/oslo.config
GITDIR["oslo.context"]=$DEST/oslo.context
@@ -33,8 +36,10 @@
GITDIR["oslo.messaging"]=$DEST/oslo.messaging
GITDIR["oslo.middleware"]=$DEST/oslo.middleware
GITDIR["oslo.policy"]=$DEST/oslo.policy
+GITDIR["oslo.reports"]=$DEST/oslo.reports
GITDIR["oslo.rootwrap"]=$DEST/oslo.rootwrap
GITDIR["oslo.serialization"]=$DEST/oslo.serialization
+GITDIR["oslo.service"]=$DEST/oslo.service
GITDIR["oslo.utils"]=$DEST/oslo.utils
GITDIR["oslo.versionedobjects"]=$DEST/oslo.versionedobjects
GITDIR["oslo.vmware"]=$DEST/oslo.vmware
@@ -60,8 +65,11 @@
# install_oslo() - Collect source and prepare
function install_oslo {
+ _do_install_oslo_lib "automaton"
_do_install_oslo_lib "cliff"
_do_install_oslo_lib "debtcollector"
+ _do_install_oslo_lib "futurist"
+ _do_install_oslo_lib "oslo.cache"
_do_install_oslo_lib "oslo.concurrency"
_do_install_oslo_lib "oslo.config"
_do_install_oslo_lib "oslo.context"
@@ -71,8 +79,10 @@
_do_install_oslo_lib "oslo.messaging"
_do_install_oslo_lib "oslo.middleware"
_do_install_oslo_lib "oslo.policy"
+ _do_install_oslo_lib "oslo.reports"
_do_install_oslo_lib "oslo.rootwrap"
_do_install_oslo_lib "oslo.serialization"
+ _do_install_oslo_lib "oslo.service"
_do_install_oslo_lib "oslo.utils"
_do_install_oslo_lib "oslo.versionedobjects"
_do_install_oslo_lib "oslo.vmware"
diff --git a/lib/rpc_backend b/lib/rpc_backend
index 33ab03d..03eacd8 100644
--- a/lib/rpc_backend
+++ b/lib/rpc_backend
@@ -1,72 +1,32 @@
#!/bin/bash
#
# lib/rpc_backend
-# Interface for interactig with different RPC backends
+# Interface for installing RabbitMQ on the system
# Dependencies:
#
# - ``functions`` file
# - ``RABBIT_{HOST|PASSWORD|USERID}`` must be defined when RabbitMQ is used
-# - ``RPC_MESSAGING_PROTOCOL`` option for configuring the messaging protocol
# ``stack.sh`` calls the entry points in this order:
#
# - check_rpc_backend
# - install_rpc_backend
# - restart_rpc_backend
-# - iniset_rpc_backend
+# - iniset_rpc_backend (stable interface)
+#
+# Note: if implementing an out-of-tree plugin for an RPC backend, you
+# should install all services through normal plugin methods, then
+# redefine ``iniset_rpc_backend`` in your code. That's the one portion
+# of this file which is a standard interface.
# Save trace setting
XTRACE=$(set +o | grep xtrace)
set +o xtrace
-RPC_MESSAGING_PROTOCOL=${RPC_MESSAGING_PROTOCOL:-0.9}
-
-# TODO(sdague): RPC backend selection is super wonky because we treat
-# messaging server as a service, which it really isn't for multi host
-QPID_HOST=${QPID_HOST:-}
-
-
# Functions
# ---------
-# Make sure we only have one rpc backend enabled.
-# Also check the specified rpc backend is available on your platform.
-function check_rpc_backend {
- local c svc
-
- local rpc_needed=1
- # We rely on the fact that filenames in lib/* match the service names
- # that can be passed as arguments to is_service_enabled.
- # We check for a call to iniset_rpc_backend in these files, meaning
- # the service needs a backend.
- rpc_candidates=$(grep -rl iniset_rpc_backend $TOP_DIR/lib/ | awk -F/ '{print $NF}')
- for c in ${rpc_candidates}; do
- if is_service_enabled $c; then
- rpc_needed=0
- break
- fi
- done
- local rpc_backend_cnt=0
- for svc in qpid zeromq rabbit; do
- is_service_enabled $svc &&
- (( rpc_backend_cnt++ )) || true
- done
- if [ "$rpc_backend_cnt" -gt 1 ]; then
- echo "ERROR: only one rpc backend may be enabled,"
- echo " set only one of 'rabbit', 'qpid', 'zeromq'"
- echo " via ENABLED_SERVICES."
- elif [ "$rpc_backend_cnt" == 0 ] && [ "$rpc_needed" == 0 ]; then
- echo "ERROR: at least one rpc backend must be enabled,"
- echo " set one of 'rabbit', 'qpid', 'zeromq'"
- echo " via ENABLED_SERVICES."
- fi
-
- if is_service_enabled qpid && ! qpid_is_supported; then
- die $LINENO "Qpid support is not available for this version of your distribution."
- fi
-}
-
# clean up after rpc backend - eradicate all traces so changing backends
# produces a clean switch
function cleanup_rpc_backend {
@@ -79,110 +39,14 @@
# And the Erlang runtime too
apt_get purge -y erlang*
fi
- elif is_service_enabled qpid; then
- if is_fedora; then
- uninstall_package qpid-cpp-server
- elif is_ubuntu; then
- uninstall_package qpidd
- else
- exit_distro_not_supported "qpid installation"
- fi
- elif is_service_enabled zeromq; then
- if is_fedora; then
- uninstall_package zeromq python-zmq
- if [ "$ZEROMQ_MATCHMAKER" == "redis" ]; then
- uninstall_package redis python-redis
- fi
- elif is_ubuntu; then
- uninstall_package libzmq1 python-zmq
- if [ "$ZEROMQ_MATCHMAKER" == "redis" ]; then
- uninstall_package redis-server python-redis
- fi
- elif is_suse; then
- uninstall_package libzmq1 python-pyzmq
- if [ "$ZEROMQ_MATCHMAKER" == "redis" ]; then
- uninstall_package redis python-redis
- fi
- else
- exit_distro_not_supported "zeromq installation"
- fi
- fi
-
- # Remove the AMQP 1.0 messaging libraries
- if [ "$RPC_MESSAGING_PROTOCOL" == "AMQP1" ]; then
- if is_fedora; then
- uninstall_package qpid-proton-c-devel
- uninstall_package python-qpid-proton
- fi
- # TODO(kgiusti) ubuntu cleanup
fi
}
# install rpc backend
function install_rpc_backend {
- # Regardless of the broker used, if AMQP 1.0 is configured load
- # the necessary messaging client libraries for oslo.messaging
- if [ "$RPC_MESSAGING_PROTOCOL" == "AMQP1" ]; then
- if is_fedora; then
- install_package qpid-proton-c-devel
- install_package python-qpid-proton
- elif is_ubuntu; then
- # TODO(kgiusti) The QPID AMQP 1.0 protocol libraries
- # are not yet in the ubuntu repos. Enable these installs
- # once they are present:
- #install_package libqpid-proton2-dev
- #install_package python-qpid-proton
- # Also add 'uninstall' directives in cleanup_rpc_backend()!
- exit_distro_not_supported "QPID AMQP 1.0 Proton libraries"
- else
- exit_distro_not_supported "QPID AMQP 1.0 Proton libraries"
- fi
- # Install pyngus client API
- # TODO(kgiusti) can remove once python qpid bindings are
- # available on all supported platforms _and_ pyngus is added
- # to the requirements.txt file in oslo.messaging
- pip_install_gr pyngus
- fi
-
if is_service_enabled rabbit; then
# Install rabbitmq-server
install_package rabbitmq-server
- elif is_service_enabled qpid; then
- if is_fedora; then
- install_package qpid-cpp-server
- elif is_ubuntu; then
- install_package qpidd
- else
- exit_distro_not_supported "qpid installation"
- fi
- _configure_qpid
- elif is_service_enabled zeromq; then
- if is_fedora; then
- install_package zeromq python-zmq
- if [ "$ZEROMQ_MATCHMAKER" == "redis" ]; then
- install_package redis python-redis
- fi
- elif is_ubuntu; then
- install_package libzmq1 python-zmq
- if [ "$ZEROMQ_MATCHMAKER" == "redis" ]; then
- install_package redis-server python-redis
- fi
- elif is_suse; then
- install_package libzmq1 python-pyzmq
- if [ "$ZEROMQ_MATCHMAKER" == "redis" ]; then
- install_package redis python-redis
- fi
- else
- exit_distro_not_supported "zeromq installation"
- fi
- # Necessary directory for socket location.
- sudo mkdir -p /var/run/openstack
- sudo chown $STACK_USER /var/run/openstack
- fi
-
- # If using the QPID broker, install the QPID python client API
- if is_service_enabled qpid || [ -n "$QPID_HOST" ]; then
- install_package python-qpid
fi
}
@@ -232,17 +96,12 @@
sudo rabbitmqctl set_permissions -p child_cell $RABBIT_USERID ".*" ".*" ".*"
fi
fi
- elif is_service_enabled qpid; then
- echo_summary "Starting qpid"
- restart_service qpidd
fi
}
# builds transport url string
function get_transport_url {
- if is_service_enabled qpid || [ -n "$QPID_HOST" ]; then
- echo "qpid://$QPID_USERNAME:$QPID_PASSWORD@$QPID_HOST:5672/"
- elif is_service_enabled rabbit || { [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; }; then
+ if is_service_enabled rabbit || { [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; }; then
echo "rabbit://$RABBIT_USERID:$RABBIT_PASSWORD@$RABBIT_HOST:5672/"
fi
}
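With the qpid branch removed, `get_transport_url` above reduces to the rabbit case. A self-contained sketch of the simplified function, with `is_service_enabled` stubbed and fabricated credentials in place of the real `RABBIT_*` settings:

```shell
# Stub plus fabricated settings so the URL construction runs
# standalone; real DevStack supplies all of these.
function is_service_enabled { [ "$1" = "rabbit" ]; }
RABBIT_USERID=stackrabbit
RABBIT_PASSWORD=secretrabbit
RABBIT_HOST=10.0.0.1

function get_transport_url {
    if is_service_enabled rabbit || { [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; }; then
        echo "rabbit://$RABBIT_USERID:$RABBIT_PASSWORD@$RABBIT_HOST:5672/"
    fi
}

get_transport_url
```

The `RABBIT_HOST`/`RABBIT_PASSWORD` fallback keeps multi-node setups working, where the rabbit service itself is enabled only on the controller.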
@@ -252,29 +111,7 @@
local package=$1
local file=$2
local section=${3:-DEFAULT}
- if is_service_enabled zeromq; then
- iniset $file $section rpc_backend "zmq"
- iniset $file $section rpc_zmq_host `hostname`
- if [ "$ZEROMQ_MATCHMAKER" == "redis" ]; then
- iniset $file $section rpc_zmq_matchmaker "redis"
- MATCHMAKER_REDIS_HOST=${MATCHMAKER_REDIS_HOST:-127.0.0.1}
- iniset $file matchmaker_redis host $MATCHMAKER_REDIS_HOST
- else
- die $LINENO "Other matchmaker drivers not supported"
- fi
- elif is_service_enabled qpid || [ -n "$QPID_HOST" ]; then
- # For Qpid use the 'amqp' oslo.messaging transport when AMQP 1.0 is used
- if [ "$RPC_MESSAGING_PROTOCOL" == "AMQP1" ]; then
- iniset $file $section rpc_backend "amqp"
- else
- iniset $file $section rpc_backend "qpid"
- fi
- iniset $file $section qpid_hostname ${QPID_HOST:-$SERVICE_HOST}
- if [ -n "$QPID_USERNAME" ]; then
- iniset $file $section qpid_username $QPID_USERNAME
- iniset $file $section qpid_password $QPID_PASSWORD
- fi
- elif is_service_enabled rabbit || { [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; }; then
+ if is_service_enabled rabbit || { [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; }; then
iniset $file $section rpc_backend "rabbit"
iniset $file oslo_messaging_rabbit rabbit_hosts $RABBIT_HOST
iniset $file oslo_messaging_rabbit rabbit_password $RABBIT_PASSWORD
@@ -288,17 +125,6 @@
fi
}
-# Check if qpid can be used on the current distro.
-# qpid_is_supported
-function qpid_is_supported {
- if [[ -z "$DISTRO" ]]; then
- GetDistro
- fi
-
- # Qpid is not in openSUSE
- ( ! is_suse )
-}
-
function rabbit_setuser {
local user="$1" pass="$2" found="" out=""
out=$(sudo rabbitmqctl list_users) ||
@@ -314,85 +140,6 @@
sudo rabbitmqctl set_permissions "$user" ".*" ".*" ".*"
}
-# Set up the various configuration files used by the qpidd broker
-function _configure_qpid {
-
- # the location of the configuration files have changed since qpidd 0.14
- local qpid_conf_file
- if [ -e /etc/qpid/qpidd.conf ]; then
- qpid_conf_file=/etc/qpid/qpidd.conf
- elif [ -e /etc/qpidd.conf ]; then
- qpid_conf_file=/etc/qpidd.conf
- else
- exit_distro_not_supported "qpidd.conf file not found!"
- fi
-
- # force the ACL file to a known location
- local qpid_acl_file=/etc/qpid/qpidd.acl
- if [ ! -e $qpid_acl_file ]; then
- sudo mkdir -p -m 755 `dirname $qpid_acl_file`
- sudo touch $qpid_acl_file
- sudo chmod o+r $qpid_acl_file
- fi
- sudo sed -i.bak '/^acl-file=/d' $qpid_conf_file
- echo "acl-file=$qpid_acl_file" | sudo tee --append $qpid_conf_file
-
- sudo sed -i '/^auth=/d' $qpid_conf_file
- if [ -z "$QPID_USERNAME" ]; then
- # no QPID user configured, so disable authentication
- # and access control
- echo "auth=no" | sudo tee --append $qpid_conf_file
- cat <<EOF | sudo tee $qpid_acl_file
-acl allow all all
-EOF
- else
- # Configure qpidd to use PLAIN authentication, and add
- # QPID_USERNAME to the ACL:
- echo "auth=yes" | sudo tee --append $qpid_conf_file
- if [ -z "$QPID_PASSWORD" ]; then
- read_password QPID_PASSWORD "ENTER A PASSWORD FOR QPID USER $QPID_USERNAME"
- fi
- # Create ACL to allow $QPID_USERNAME full access
- cat <<EOF | sudo tee $qpid_acl_file
-group admin ${QPID_USERNAME}@QPID
-acl allow admin all
-acl deny all all
-EOF
- # Add user to SASL database
- if is_ubuntu; then
- install_package sasl2-bin
- elif is_fedora; then
- install_package cyrus-sasl-lib
- install_package cyrus-sasl-plain
- fi
- local sasl_conf_file=/etc/sasl2/qpidd.conf
- sudo sed -i.bak '/PLAIN/!s/mech_list: /mech_list: PLAIN /' $sasl_conf_file
- local sasl_db=`sudo grep sasldb_path $sasl_conf_file | cut -f 2 -d ":" | tr -d [:blank:]`
- if [ ! -e $sasl_db ]; then
- sudo mkdir -p -m 755 `dirname $sasl_db`
- fi
- echo $QPID_PASSWORD | sudo saslpasswd2 -c -p -f $sasl_db -u QPID $QPID_USERNAME
- sudo chmod o+r $sasl_db
- fi
-
- # If AMQP 1.0 is specified, ensure that the version of the
- # broker can support AMQP 1.0 and configure the queue and
- # topic address patterns used by oslo.messaging.
- if [ "$RPC_MESSAGING_PROTOCOL" == "AMQP1" ]; then
- QPIDD=$(type -p qpidd)
- if ! $QPIDD --help | grep -q "queue-patterns"; then
- exit_distro_not_supported "qpidd with AMQP 1.0 support"
- fi
- if ! grep -q "queue-patterns=exclusive" $qpid_conf_file; then
- cat <<EOF | sudo tee --append $qpid_conf_file
-queue-patterns=exclusive
-queue-patterns=unicast
-topic-patterns=broadcast
-EOF
- fi
- fi
-}
-
# Restore xtrace
$XTRACE
diff --git a/lib/sahara b/lib/sahara
deleted file mode 100644
index 51e431a..0000000
--- a/lib/sahara
+++ /dev/null
@@ -1,259 +0,0 @@
-#!/bin/bash
-#
-# lib/sahara
-
-# Dependencies:
-# ``functions`` file
-# ``DEST``, ``DATA_DIR``, ``STACK_USER`` must be defined
-
-# ``stack.sh`` calls the entry points in this order:
-#
-# install_sahara
-# install_python_saharaclient
-# configure_sahara
-# sahara_register_images
-# start_sahara
-# stop_sahara
-# cleanup_sahara
-
-# Save trace setting
-XTRACE=$(set +o | grep xtrace)
-set +o xtrace
-
-
-# Defaults
-# --------
-
-# Set up default repos
-
-# Set up default directories
-GITDIR["python-saharaclient"]=$DEST/python-saharaclient
-SAHARA_DIR=$DEST/sahara
-
-SAHARA_CONF_DIR=${SAHARA_CONF_DIR:-/etc/sahara}
-SAHARA_CONF_FILE=${SAHARA_CONF_DIR}/sahara.conf
-
-if is_ssl_enabled_service "sahara" || is_service_enabled tls-proxy; then
- SAHARA_SERVICE_PROTOCOL="https"
-fi
-SAHARA_SERVICE_HOST=${SAHARA_SERVICE_HOST:-$SERVICE_HOST}
-SAHARA_SERVICE_PORT=${SAHARA_SERVICE_PORT:-8386}
-SAHARA_SERVICE_PORT_INT=${SAHARA_SERVICE_PORT_INT:-18386}
-SAHARA_SERVICE_PROTOCOL=${SAHARA_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}
-
-SAHARA_AUTH_CACHE_DIR=${SAHARA_AUTH_CACHE_DIR:-/var/cache/sahara}
-
-SAHARA_ENABLED_PLUGINS=${SAHARA_ENABLED_PLUGINS:-vanilla,hdp,cdh,spark,fake}
-
-# Support entry points installation of console scripts
-if [[ -d $SAHARA_DIR/bin ]]; then
- SAHARA_BIN_DIR=$SAHARA_DIR/bin
-else
- SAHARA_BIN_DIR=$(get_python_exec_prefix)
-fi
-
-# Tell Tempest this project is present
-TEMPEST_SERVICES+=,sahara
-
-# Functions
-# ---------
-
-# create_sahara_accounts() - Set up common required sahara accounts
-#
-# Tenant User Roles
-# ------------------------------
-# service sahara admin
-function create_sahara_accounts {
-
- create_service_user "sahara"
-
- if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
-
- # TODO: remove "data_processing" service when #1356053 will be fixed
- local sahara_service_old=$(openstack service create \
- "data_processing" \
- --name "sahara" \
- --description "Sahara Data Processing" \
- -f value -c id
- )
- local sahara_service_new=$(openstack service create \
- "data-processing" \
- --name "sahara" \
- --description "Sahara Data Processing" \
- -f value -c id
- )
- get_or_create_endpoint $sahara_service_old \
- "$REGION_NAME" \
- "$SAHARA_SERVICE_PROTOCOL://$SAHARA_SERVICE_HOST:$SAHARA_SERVICE_PORT/v1.1/\$(tenant_id)s" \
- "$SAHARA_SERVICE_PROTOCOL://$SAHARA_SERVICE_HOST:$SAHARA_SERVICE_PORT/v1.1/\$(tenant_id)s" \
- "$SAHARA_SERVICE_PROTOCOL://$SAHARA_SERVICE_HOST:$SAHARA_SERVICE_PORT/v1.1/\$(tenant_id)s"
- get_or_create_endpoint $sahara_service_new \
- "$REGION_NAME" \
- "$SAHARA_SERVICE_PROTOCOL://$SAHARA_SERVICE_HOST:$SAHARA_SERVICE_PORT/v1.1/\$(tenant_id)s" \
- "$SAHARA_SERVICE_PROTOCOL://$SAHARA_SERVICE_HOST:$SAHARA_SERVICE_PORT/v1.1/\$(tenant_id)s" \
- "$SAHARA_SERVICE_PROTOCOL://$SAHARA_SERVICE_HOST:$SAHARA_SERVICE_PORT/v1.1/\$(tenant_id)s"
- fi
-}
-
-# cleanup_sahara() - Remove residual data files, anything left over from
-# previous runs that would need to clean up.
-function cleanup_sahara {
-
- # Cleanup auth cache dir
- sudo rm -rf $SAHARA_AUTH_CACHE_DIR
-}
-
-# configure_sahara() - Set config files, create data dirs, etc
-function configure_sahara {
- sudo install -d -o $STACK_USER $SAHARA_CONF_DIR
-
- if [[ -f $SAHARA_DIR/etc/sahara/policy.json ]]; then
- cp -p $SAHARA_DIR/etc/sahara/policy.json $SAHARA_CONF_DIR
- fi
-
- # Create auth cache dir
- sudo install -d -o $STACK_USER -m 700 $SAHARA_AUTH_CACHE_DIR
- rm -rf $SAHARA_AUTH_CACHE_DIR/*
-
- configure_auth_token_middleware $SAHARA_CONF_FILE sahara $SAHARA_AUTH_CACHE_DIR
-
- iniset_rpc_backend sahara $SAHARA_CONF_FILE DEFAULT
-
- # Set configuration to send notifications
-
- if is_service_enabled ceilometer; then
- iniset $SAHARA_CONF_FILE DEFAULT enable_notifications "true"
- iniset $SAHARA_CONF_FILE DEFAULT notification_driver "messaging"
- fi
-
- iniset $SAHARA_CONF_FILE DEFAULT verbose True
- iniset $SAHARA_CONF_FILE DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
-
- iniset $SAHARA_CONF_FILE DEFAULT plugins $SAHARA_ENABLED_PLUGINS
-
- iniset $SAHARA_CONF_FILE database connection `database_connection_url sahara`
-
- if is_service_enabled neutron; then
- iniset $SAHARA_CONF_FILE DEFAULT use_neutron true
-
- if is_ssl_enabled_service "neutron" || is_service_enabled tls-proxy; then
- iniset $SAHARA_CONF_FILE neutron ca_file $SSL_BUNDLE_FILE
- fi
- else
- iniset $SAHARA_CONF_FILE DEFAULT use_neutron false
- fi
-
- if is_service_enabled heat; then
- iniset $SAHARA_CONF_FILE DEFAULT infrastructure_engine heat
-
- if is_ssl_enabled_service "heat" || is_service_enabled tls-proxy; then
- iniset $SAHARA_CONF_FILE heat ca_file $SSL_BUNDLE_FILE
- fi
- else
- iniset $SAHARA_CONF_FILE DEFAULT infrastructure_engine direct
- fi
-
- if is_ssl_enabled_service "cinder" || is_service_enabled tls-proxy; then
- iniset $SAHARA_CONF_FILE cinder ca_file $SSL_BUNDLE_FILE
- fi
-
- if is_ssl_enabled_service "nova" || is_service_enabled tls-proxy; then
- iniset $SAHARA_CONF_FILE nova ca_file $SSL_BUNDLE_FILE
- fi
-
- if is_ssl_enabled_service "swift" || is_service_enabled tls-proxy; then
- iniset $SAHARA_CONF_FILE swift ca_file $SSL_BUNDLE_FILE
- fi
-
- if is_ssl_enabled_service "key" || is_service_enabled tls-proxy; then
- iniset $SAHARA_CONF_FILE keystone ca_file $SSL_BUNDLE_FILE
- fi
-
- # Register SSL certificates if provided
- if is_ssl_enabled_service sahara; then
- ensure_certificates SAHARA
-
- iniset $SAHARA_CONF_FILE ssl cert_file "$SAHARA_SSL_CERT"
- iniset $SAHARA_CONF_FILE ssl key_file "$SAHARA_SSL_KEY"
- fi
-
- iniset $SAHARA_CONF_FILE DEFAULT use_syslog $SYSLOG
-
- # Format logging
- if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ]; then
- setup_colorized_logging $SAHARA_CONF_FILE DEFAULT
- fi
-
- if is_service_enabled tls-proxy; then
- # Set the service port for a proxy to take the original
- iniset $SAHARA_CONF_FILE DEFAULT port $SAHARA_SERVICE_PORT_INT
- fi
-
- recreate_database sahara
- $SAHARA_BIN_DIR/sahara-db-manage --config-file $SAHARA_CONF_FILE upgrade head
-}
-
-# install_sahara() - Collect source and prepare
-function install_sahara {
- git_clone $SAHARA_REPO $SAHARA_DIR $SAHARA_BRANCH
- setup_develop $SAHARA_DIR
-}
-
-# install_python_saharaclient() - Collect source and prepare
-function install_python_saharaclient {
- if use_library_from_git "python-saharaclient"; then
- git_clone_by_name "python-saharaclient"
- setup_dev_lib "python-saharaclient"
- fi
-}
-
-# sahara_register_images() - Registers images in sahara image registry
-function sahara_register_images {
- if is_service_enabled heat && [[ ! -z "$HEAT_CFN_IMAGE_URL" ]]; then
- # Register heat image for Fake plugin
- local fake_plugin_properties="--property _sahara_tag_0.1=True"
- fake_plugin_properties+=" --property _sahara_tag_fake=True"
- fake_plugin_properties+=" --property _sahara_username=fedora"
- openstack --os-url $GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT image set $(basename "$HEAT_CFN_IMAGE_URL" ".qcow2") $fake_plugin_properties
- fi
-}
-
-# start_sahara() - Start running processes, including screen
-function start_sahara {
- local service_port=$SAHARA_SERVICE_PORT
- local service_protocol=$SAHARA_SERVICE_PROTOCOL
- if is_service_enabled tls-proxy; then
- service_port=$SAHARA_SERVICE_PORT_INT
- service_protocol="http"
- fi
-
- run_process sahara "$SAHARA_BIN_DIR/sahara-all --config-file $SAHARA_CONF_FILE"
- run_process sahara-api "$SAHARA_BIN_DIR/sahara-api --config-file $SAHARA_CONF_FILE"
- run_process sahara-eng "$SAHARA_BIN_DIR/sahara-engine --config-file $SAHARA_CONF_FILE"
-
- echo "Waiting for Sahara to start..."
- if ! wait_for_service $SERVICE_TIMEOUT $service_protocol://$SAHARA_SERVICE_HOST:$service_port; then
- die $LINENO "Sahara did not start"
- fi
-
- # Start proxies if enabled
- if is_service_enabled tls-proxy; then
- start_tls_proxy '*' $SAHARA_SERVICE_PORT $SAHARA_SERVICE_HOST $SAHARA_SERVICE_PORT_INT &
- fi
-}
-
-# stop_sahara() - Stop running processes
-function stop_sahara {
- # Kill the Sahara screen windows
- stop_process sahara
- stop_process sahara-api
- stop_process sahara-eng
-}
-
-
-# Restore xtrace
-$XTRACE
-
-# Local variables:
-# mode: shell-script
-# End:
diff --git a/lib/swift b/lib/swift
index 820042d..5b73981 100644
--- a/lib/swift
+++ b/lib/swift
@@ -616,20 +616,23 @@
"$SWIFT_SERVICE_PROTOCOL://$SERVICE_HOST:8080/v1/AUTH_\$(tenant_id)s"
fi
- local swift_tenant_test1=$(get_or_create_project swifttenanttest1)
+ local swift_tenant_test1=$(get_or_create_project swifttenanttest1 default)
die_if_not_set $LINENO swift_tenant_test1 "Failure creating swift_tenant_test1"
- SWIFT_USER_TEST1=$(get_or_create_user swiftusertest1 $swiftusertest1_password "test@example.com")
+ SWIFT_USER_TEST1=$(get_or_create_user swiftusertest1 $swiftusertest1_password \
+ "default" "test@example.com")
die_if_not_set $LINENO SWIFT_USER_TEST1 "Failure creating SWIFT_USER_TEST1"
get_or_add_user_project_role admin $SWIFT_USER_TEST1 $swift_tenant_test1
- local swift_user_test3=$(get_or_create_user swiftusertest3 $swiftusertest3_password "test3@example.com")
+ local swift_user_test3=$(get_or_create_user swiftusertest3 $swiftusertest3_password \
+ "default" "test3@example.com")
die_if_not_set $LINENO swift_user_test3 "Failure creating swift_user_test3"
get_or_add_user_project_role $another_role $swift_user_test3 $swift_tenant_test1
- local swift_tenant_test2=$(get_or_create_project swifttenanttest2)
+ local swift_tenant_test2=$(get_or_create_project swifttenanttest2 default)
die_if_not_set $LINENO swift_tenant_test2 "Failure creating swift_tenant_test2"
- local swift_user_test2=$(get_or_create_user swiftusertest2 $swiftusertest2_password "test2@example.com")
+ local swift_user_test2=$(get_or_create_user swiftusertest2 $swiftusertest2_password \
+ "default" "test2@example.com")
die_if_not_set $LINENO swift_user_test2 "Failure creating swift_user_test2"
get_or_add_user_project_role admin $swift_user_test2 $swift_tenant_test2
@@ -639,7 +642,8 @@
local swift_tenant_test4=$(get_or_create_project swifttenanttest4 $swift_domain)
die_if_not_set $LINENO swift_tenant_test4 "Failure creating swift_tenant_test4"
- local swift_user_test4=$(get_or_create_user swiftusertest4 $swiftusertest4_password "test4@example.com" $swift_domain)
+ local swift_user_test4=$(get_or_create_user swiftusertest4 $swiftusertest4_password \
+ $swift_domain "test4@example.com")
die_if_not_set $LINENO swift_user_test4 "Failure creating swift_user_test4"
get_or_add_user_project_role admin $swift_user_test4 $swift_tenant_test4
}
@@ -768,7 +772,7 @@
stop_process s-${type}
done
# Blast out any stragglers
- pkill -f swift-
+ pkill -f swift- || true
}
function swift_configure_tempurls {
diff --git a/lib/tempest b/lib/tempest
index 059709d..a84ade2 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -329,6 +329,10 @@
if [[ ! -z "$TEMPEST_HTTP_IMAGE" ]]; then
iniset $TEMPEST_CONFIG image http_image $TEMPEST_HTTP_IMAGE
fi
+
+ # Image Features
+ iniset $TEMPEST_CONFIG image-feature-enabled deactivate_image True
# Auth
TEMPEST_ALLOW_TENANT_ISOLATION=${TEMPEST_ALLOW_TENANT_ISOLATION:-$TEMPEST_HAS_ADMIN}
@@ -375,6 +379,7 @@
iniset $TEMPEST_CONFIG compute-feature-enabled preserve_ports True
# TODO(gilliard): Remove the live_migrate_paused_instances flag when Juno is end of life.
iniset $TEMPEST_CONFIG compute-feature-enabled live_migrate_paused_instances True
+ iniset $TEMPEST_CONFIG compute-feature-enabled attach_encrypted_volume ${ATTACH_ENCRYPTED_VOLUME_AVAILABLE:-True}
# Network
iniset $TEMPEST_CONFIG network api_version 2.0
@@ -435,6 +440,7 @@
# Ceilometer API optimization happened in Juno that allows to run more tests in tempest.
# Once Tempest retires support for icehouse this flag can be removed.
iniset $TEMPEST_CONFIG telemetry too_slow_to_test "False"
+ iniset $TEMPEST_CONFIG telemetry-feature-enabled events "True"
# Object Store
local object_storage_api_extensions=${OBJECT_STORAGE_API_EXTENSIONS:-"all"}
@@ -447,6 +453,9 @@
iniset $TEMPEST_CONFIG object-storage-feature-enabled discoverable_apis $object_storage_api_extensions
# Volume
+ # TODO(dkranz): Remove the bootable flag when Juno is end of life.
+ iniset $TEMPEST_CONFIG volume-feature-enabled bootable True
+
local volume_api_extensions=${VOLUME_API_EXTENSIONS:-"all"}
if [[ ! -z "$DISABLE_VOLUME_API_EXTENSIONS" ]]; then
# Enabled extensions are either the ones explicitly specified or those available on the API endpoint
@@ -542,8 +551,8 @@
if is_service_enabled tempest; then
# Tempest has some tests that validate various authorization checks
# between two regular users in separate tenants
- get_or_create_project alt_demo
- get_or_create_user alt_demo "$ADMIN_PASSWORD" "alt_demo@example.com"
+ get_or_create_project alt_demo default
+ get_or_create_user alt_demo "$ADMIN_PASSWORD" "default" "alt_demo@example.com"
get_or_add_user_project_role Member alt_demo alt_demo
fi
}
diff --git a/lib/zaqar b/lib/zaqar
index 8d51910..891b0ea 100644
--- a/lib/zaqar
+++ b/lib/zaqar
@@ -128,10 +128,9 @@
configure_redis
fi
- if is_service_enabled qpid || [ -n "$RABBIT_HOST" ] && [ -n "$RABBIT_PASSWORD" ]; then
- iniset $ZAQAR_CONF DEFAULT notification_driver messaging
- iniset $ZAQAR_CONF DEFAULT control_exchange zaqar
- fi
+ iniset $ZAQAR_CONF DEFAULT notification_driver messaging
+ iniset $ZAQAR_CONF DEFAULT control_exchange zaqar
+
iniset_rpc_backend zaqar $ZAQAR_CONF
cleanup_zaqar
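The zaqar hunk above now writes the notification options unconditionally instead of gating them on qpid/RabbitMQ settings. `iniset` is DevStack's shell helper for setting a key in an INI section; as a rough model (ours, not DevStack's code), the resulting `[DEFAULT]` entries in `zaqar.conf` can be approximated with Python's `configparser`:

```python
import configparser
import io

# Approximate the effect of the two iniset calls in the zaqar hunk:
#   iniset $ZAQAR_CONF DEFAULT notification_driver messaging
#   iniset $ZAQAR_CONF DEFAULT control_exchange zaqar
# (iniset itself is a DevStack shell function; this model is ours.)
conf = configparser.ConfigParser()
conf["DEFAULT"]["notification_driver"] = "messaging"
conf["DEFAULT"]["control_exchange"] = "zaqar"

buf = io.StringIO()
conf.write(buf)
print(buf.getvalue().strip())
```

The point of the change is visible in the model: the keys are always present, so zaqar's notification configuration no longer depends on which message broker happens to be enabled.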
diff --git a/stack.sh b/stack.sh
index dc79fa9..17cbe75 100755
--- a/stack.sh
+++ b/stack.sh
@@ -173,7 +173,7 @@
# Warn users who aren't on an explicitly supported distro, but allow them to
# override check and attempt installation with ``FORCE=yes ./stack``
-if [[ ! ${DISTRO} =~ (precise|trusty|utopic|vivid|7.0|wheezy|sid|testing|jessie|f20|f21|f22|rhel7) ]]; then
+if [[ ! ${DISTRO} =~ (precise|trusty|utopic|vivid|7.0|wheezy|sid|testing|jessie|f21|f22|rhel7) ]]; then
echo "WARNING: this script has not been tested on $DISTRO"
if [[ "$FORCE" != "yes" ]]; then
die $LINENO "If you wish to run this script anyway run with FORCE=yes"
@@ -263,6 +263,7 @@
EOF
# Enable a bootstrap repo. It is removed after finishing
# the epel-release installation.
+ is_package_installed yum-utils || install_package yum-utils
sudo yum-config-manager --enable epel-bootstrap
yum_install epel-release || \
die $LINENO "Error installing EPEL repo, cannot continue"
@@ -270,7 +271,6 @@
sudo rm -f /etc/yum.repos.d/epel-bootstrap.repo
# ... and also optional to be enabled
- is_package_installed yum-utils || install_package yum-utils
sudo yum-config-manager --enable rhel-7-server-optional-rpms
RHEL_RDO_REPO_RPM=${RHEL7_RDO_REPO_RPM:-"https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm"}
@@ -500,12 +500,8 @@
source $TOP_DIR/lib/database
source $TOP_DIR/lib/rpc_backend
-# Make sure we only have one rpc backend enabled,
-# and the specified rpc backend is available on your platform.
-check_rpc_backend
-
# Service to enable with SSL if ``USE_SSL`` is True
-SSL_ENABLED_SERVICES="key,nova,cinder,glance,s-proxy,neutron,sahara"
+SSL_ENABLED_SERVICES="key,nova,cinder,glance,s-proxy,neutron"
if is_service_enabled tls-proxy && [ "$USE_SSL" == "True" ]; then
die $LINENO "tls-proxy and SSL are mutually exclusive"
@@ -669,6 +665,9 @@
fi
fi
+# Save configuration values
+save_stackenv $LINENO
+
# Install Packages
# ================
@@ -680,6 +679,9 @@
echo_summary "Installing package prerequisites"
source $TOP_DIR/tools/install_prereqs.sh
+# Normalise USE_CONSTRAINTS
+USE_CONSTRAINTS=$(trueorfalse False USE_CONSTRAINTS)
+
# Configure an appropriate Python environment
if [[ "$OFFLINE" != "True" ]]; then
PYPI_ALTERNATIVE_URL=${PYPI_ALTERNATIVE_URL:-""} $TOP_DIR/tools/install_pip.sh
@@ -950,6 +952,9 @@
# Initialize the directory for service status check
init_service_check
+# Save configuration values
+save_stackenv $LINENO
+
# Start Services
# ==============
@@ -1006,6 +1011,9 @@
# Begone token auth
unset OS_TOKEN OS_URL
+ # force set to use v2 identity authentication even with v3 commands
+ export OS_AUTH_TYPE=v2password
+
# Set up password auth credentials now that Keystone is bootstrapped
export OS_AUTH_URL=$SERVICE_ENDPOINT
export OS_TENANT_NAME=admin
@@ -1014,15 +1022,6 @@
export OS_REGION_NAME=$REGION_NAME
fi
-
-# ZeroMQ
-# ------
-if is_service_enabled zeromq; then
- echo_summary "Starting zeromq receiver"
- run_process zeromq "$OSLO_BIN_DIR/oslo-messaging-zmq-receiver"
-fi
-
-
# Horizon
# -------
@@ -1287,35 +1286,44 @@
# Save some values we generated for later use
-CURRENT_RUN_TIME=$(date "+$TIMESTAMP_FORMAT")
-echo "# $CURRENT_RUN_TIME" >$TOP_DIR/.stackenv
-for i in BASE_SQL_CONN ENABLED_SERVICES HOST_IP LOGFILE \
- SERVICE_HOST SERVICE_PROTOCOL STACK_USER TLS_IP KEYSTONE_AUTH_PROTOCOL OS_CACERT; do
- echo $i=${!i} >>$TOP_DIR/.stackenv
-done
+save_stackenv
-# Write out a clouds.yaml file
-# putting the location into a variable to allow for easier refactoring later
-# to make it overridable. There is current no usecase where doing so makes
-# sense, so I'm not actually doing it now.
+# Update/create user clouds.yaml file.
+# clouds.yaml will have
+# - A `devstack` entry for the `demo` user for the `demo` project.
+# - A `devstack-admin` entry for the `admin` user for the `admin` project.
+
+# The location is a variable to allow for easier refactoring later to make it
+# overridable. There is currently no usecase where doing so makes sense, so
+# it's not currently configurable.
CLOUDS_YAML=~/.config/openstack/clouds.yaml
-if [ ! -e $CLOUDS_YAML ]; then
- mkdir -p $(dirname $CLOUDS_YAML)
- cat >"$CLOUDS_YAML" <<EOF
-clouds:
- devstack:
- auth:
- auth_url: $KEYSTONE_AUTH_URI/v$IDENTITY_API_VERSION
- username: demo
- project_name: demo
- password: $ADMIN_PASSWORD
- region_name: $REGION_NAME
- identity_api_version: $IDENTITY_API_VERSION
-EOF
- if [ -f "$SSL_BUNDLE_FILE" ]; then
- echo " cacert: $SSL_BUNDLE_FILE" >>"$CLOUDS_YAML"
- fi
+
+mkdir -p $(dirname $CLOUDS_YAML)
+
+CA_CERT_ARG=''
+if [ -f "$SSL_BUNDLE_FILE" ]; then
+ CA_CERT_ARG="--os-cacert $SSL_BUNDLE_FILE"
fi
+$TOP_DIR/tools/update_clouds_yaml.py \
+ --file $CLOUDS_YAML \
+ --os-cloud devstack \
+ --os-region-name $REGION_NAME \
+ --os-identity-api-version $IDENTITY_API_VERSION \
+ $CA_CERT_ARG \
+ --os-auth-url $KEYSTONE_AUTH_URI/v$IDENTITY_API_VERSION \
+ --os-username demo \
+ --os-password $ADMIN_PASSWORD \
+ --os-project-name demo
+$TOP_DIR/tools/update_clouds_yaml.py \
+ --file $CLOUDS_YAML \
+ --os-cloud devstack-admin \
+ --os-region-name $REGION_NAME \
+ --os-identity-api-version $IDENTITY_API_VERSION \
+ $CA_CERT_ARG \
+ --os-auth-url $KEYSTONE_AUTH_URI/v$IDENTITY_API_VERSION \
+ --os-username admin \
+ --os-password $ADMIN_PASSWORD \
+ --os-project-name admin
# Wrapup configuration
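One of the stack.sh hunks normalises `USE_CONSTRAINTS` with `trueorfalse`, DevStack's helper for canonicalising boolean-ish environment values. A rough Python sketch of its behaviour (ours; the exact accepted spellings in the shell helper may differ slightly):

```python
def trueorfalse(default, value):
    """Sketch of DevStack's ``trueorfalse``: map common boolean
    spellings to the literal strings "True"/"False", falling back to
    the given default for unset, empty, or unrecognised values."""
    if not value:
        return default
    if value.lower() in ("1", "t", "true", "y", "yes"):
        return "True"
    if value.lower() in ("0", "f", "false", "n", "no"):
        return "False"
    return default

print(trueorfalse("False", ""))     # unset -> default
print(trueorfalse("False", "yes"))  # normalised to "True"
```

After `USE_CONSTRAINTS=$(trueorfalse False USE_CONSTRAINTS)`, downstream code can compare against exactly `"True"`/`"False"` instead of every spelling a user might export.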
diff --git a/stackrc b/stackrc
index f8add4b..9fb334a 100644
--- a/stackrc
+++ b/stackrc
@@ -149,17 +149,12 @@
# Zero disables timeouts
GIT_TIMEOUT=${GIT_TIMEOUT:-0}
-# Requirements enforcing mode
+# Constraints mode
+# - False (default) : update git project dependencies from global-requirements.
#
-# - strict (default) : ensure all project requirements files match
-# what's in global requirements.
-#
-# - soft : enforce requirements on everything in
-# requirements/projects.txt, but do soft updates on all other
-# repositories (i.e. sync versions for requirements that are in g-r,
-# but pass through any extras)
-REQUIREMENTS_MODE=${REQUIREMENTS_MODE:-strict}
-
+# - True : use upper-constraints.txt to constrain versions of packages installed
+# and do not edit projects at all.
+USE_CONSTRAINTS=${USE_CONSTRAINTS:-False}
# Repositories
# ------------
@@ -236,10 +231,6 @@
NOVA_REPO=${NOVA_REPO:-${GIT_BASE}/openstack/nova.git}
NOVA_BRANCH=${NOVA_BRANCH:-master}
-# data processing service
-SAHARA_REPO=${SAHARA_REPO:-${GIT_BASE}/openstack/sahara.git}
-SAHARA_BRANCH=${SAHARA_BRANCH:-master}
-
# object storage service
SWIFT_REPO=${SWIFT_REPO:-${GIT_BASE}/openstack/swift.git}
SWIFT_BRANCH=${SWIFT_BRANCH:-master}
@@ -301,10 +292,6 @@
GITREPO["python-novaclient"]=${NOVACLIENT_REPO:-${GIT_BASE}/openstack/python-novaclient.git}
GITBRANCH["python-novaclient"]=${NOVACLIENT_BRANCH:-master}
-# python saharaclient
-GITREPO["python-saharaclient"]=${SAHARACLIENT_REPO:-${GIT_BASE}/openstack/python-saharaclient.git}
-GITBRANCH["python-saharaclient"]=${SAHARACLIENT_BRANCH:-master}
-
# python swift client library
GITREPO["python-swiftclient"]=${SWIFTCLIENT_REPO:-${GIT_BASE}/openstack/python-swiftclient.git}
GITBRANCH["python-swiftclient"]=${SWIFTCLIENT_BRANCH:-master}
@@ -326,10 +313,22 @@
GITREPO["cliff"]=${CLIFF_REPO:-${GIT_BASE}/openstack/cliff.git}
GITBRANCH["cliff"]=${CLIFF_BRANCH:-master}
+# async framework/helpers
+GITREPO["futurist"]=${FUTURIST_REPO:-${GIT_BASE}/openstack/futurist.git}
+GITBRANCH["futurist"]=${FUTURIST_BRANCH:-master}
+
# debtcollector deprecation framework/helpers
GITREPO["debtcollector"]=${DEBTCOLLECTOR_REPO:-${GIT_BASE}/openstack/debtcollector.git}
GITBRANCH["debtcollector"]=${DEBTCOLLECTOR_BRANCH:-master}
+# helpful state machines
+GITREPO["automaton"]=${AUTOMATON_REPO:-${GIT_BASE}/openstack/automaton.git}
+GITBRANCH["automaton"]=${AUTOMATON_BRANCH:-master}
+
+# oslo.cache
+GITREPO["oslo.cache"]=${OSLOCACHE_REPO:-${GIT_BASE}/openstack/oslo.cache.git}
+GITBRANCH["oslo.cache"]=${OSLOCACHE_BRANCH:-master}
+
# oslo.concurrency
GITREPO["oslo.concurrency"]=${OSLOCON_REPO:-${GIT_BASE}/openstack/oslo.concurrency.git}
GITBRANCH["oslo.concurrency"]=${OSLOCON_BRANCH:-master}
@@ -366,6 +365,10 @@
GITREPO["oslo.policy"]=${OSLOPOLICY_REPO:-${GIT_BASE}/openstack/oslo.policy.git}
GITBRANCH["oslo.policy"]=${OSLOPOLICY_BRANCH:-master}
+# oslo.reports
+GITREPO["oslo.reports"]=${OSLOREPORTS_REPO:-${GIT_BASE}/openstack/oslo.reports.git}
+GITBRANCH["oslo.reports"]=${OSLOREPORTS_BRANCH:-master}
+
# oslo.rootwrap
GITREPO["oslo.rootwrap"]=${OSLORWRAP_REPO:-${GIT_BASE}/openstack/oslo.rootwrap.git}
GITBRANCH["oslo.rootwrap"]=${OSLORWRAP_BRANCH:-master}
@@ -374,6 +377,10 @@
GITREPO["oslo.serialization"]=${OSLOSERIALIZATION_REPO:-${GIT_BASE}/openstack/oslo.serialization.git}
GITBRANCH["oslo.serialization"]=${OSLOSERIALIZATION_BRANCH:-master}
+# oslo.service
+GITREPO["oslo.service"]=${OSLOSERVICE_REPO:-${GIT_BASE}/openstack/oslo.service.git}
+GITBRANCH["oslo.service"]=${OSLOSERVICE_BRANCH:-master}
+
# oslo.utils
GITREPO["oslo.utils"]=${OSLOUTILS_REPO:-${GIT_BASE}/openstack/oslo.utils.git}
GITBRANCH["oslo.utils"]=${OSLOUTILS_BRANCH:-master}
diff --git a/tests/test_libs_from_pypi.sh b/tests/test_libs_from_pypi.sh
index 336a213..8dc3ba3 100755
--- a/tests/test_libs_from_pypi.sh
+++ b/tests/test_libs_from_pypi.sh
@@ -35,11 +35,12 @@
ALL_LIBS+=" oslo.messaging oslo.log cliff python-heatclient stevedore"
ALL_LIBS+=" python-cinderclient glance_store oslo.concurrency oslo.db"
ALL_LIBS+=" oslo.versionedobjects oslo.vmware keystonemiddleware"
-ALL_LIBS+=" oslo.serialization python-saharaclient django_openstack_auth"
+ALL_LIBS+=" oslo.serialization django_openstack_auth"
ALL_LIBS+=" python-openstackclient oslo.rootwrap oslo.i18n"
ALL_LIBS+=" python-ceilometerclient oslo.utils python-swiftclient"
ALL_LIBS+=" python-neutronclient tooz ceilometermiddleware oslo.policy"
-ALL_LIBS+=" debtcollector os-brick"
+ALL_LIBS+=" debtcollector os-brick automaton futurist oslo.service"
+ALL_LIBS+=" oslo.cache oslo.reports"
# Generate the above list with
# echo ${!GITREPO[@]}
diff --git a/tools/fixup_stuff.sh b/tools/fixup_stuff.sh
index 31258d1..4fff57f 100755
--- a/tools/fixup_stuff.sh
+++ b/tools/fixup_stuff.sh
@@ -126,6 +126,9 @@
# [4] http://docs.openstack.org/developer/devstack/guides/neutron.html
if is_package_installed firewalld; then
sudo systemctl disable firewalld
+ # The iptables service files are no longer included by default,
+ # at least on a baremetal Fedora 21 Server install.
+ install_package iptables-services
sudo systemctl enable iptables
sudo systemctl stop firewalld
sudo systemctl start iptables
diff --git a/tools/update_clouds_yaml.py b/tools/update_clouds_yaml.py
new file mode 100755
index 0000000..0862135
--- /dev/null
+++ b/tools/update_clouds_yaml.py
@@ -0,0 +1,95 @@
+#!/usr/bin/env python
+
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+# Update the clouds.yaml file.
+
+
+import argparse
+import os.path
+
+import yaml
+
+
+class UpdateCloudsYaml(object):
+ def __init__(self, args):
+ if args.file:
+ self._clouds_path = args.file
+ self._create_directory = False
+ else:
+ self._clouds_path = os.path.expanduser(
+ '~/.config/openstack/clouds.yaml')
+ self._create_directory = True
+ self._clouds = {}
+
+ self._cloud = args.os_cloud
+ self._cloud_data = {
+ 'region_name': args.os_region_name,
+ 'identity_api_version': args.os_identity_api_version,
+ 'auth': {
+ 'auth_url': args.os_auth_url,
+ 'username': args.os_username,
+ 'password': args.os_password,
+ 'project_name': args.os_project_name,
+ },
+ }
+ if args.os_cacert:
+ self._cloud_data['cacert'] = args.os_cacert
+
+ def run(self):
+ self._read_clouds()
+ self._update_clouds()
+ self._write_clouds()
+
+ def _read_clouds(self):
+ try:
+ with open(self._clouds_path) as clouds_file:
+ self._clouds = yaml.load(clouds_file)
+ except IOError:
+ # The user doesn't have a clouds.yaml file.
+ print("The user clouds.yaml file didn't exist.")
+ self._clouds = {}
+
+ def _update_clouds(self):
+ self._clouds.setdefault('clouds', {})[self._cloud] = self._cloud_data
+
+ def _write_clouds(self):
+
+ if self._create_directory:
+ clouds_dir = os.path.dirname(self._clouds_path)
+ os.makedirs(clouds_dir)
+
+ with open(self._clouds_path, 'w') as clouds_file:
+ yaml.dump(self._clouds, clouds_file, default_flow_style=False)
+
+
+def main():
+ parser = argparse.ArgumentParser('Update clouds.yaml file.')
+ parser.add_argument('--file')
+ parser.add_argument('--os-cloud', required=True)
+ parser.add_argument('--os-region-name', default='RegionOne')
+ parser.add_argument('--os-identity-api-version', default='3')
+ parser.add_argument('--os-cacert')
+ parser.add_argument('--os-auth-url', required=True)
+ parser.add_argument('--os-username', required=True)
+ parser.add_argument('--os-password', required=True)
+ parser.add_argument('--os-project-name', required=True)
+
+ args = parser.parse_args()
+
+ update_clouds_yaml = UpdateCloudsYaml(args)
+ update_clouds_yaml.run()
+
+
+if __name__ == "__main__":
+ main()
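The merge behaviour of `_update_clouds` — create the top-level `clouds` mapping if absent, then upsert the named entry without disturbing other clouds — is what lets `stack.sh` call the script twice (for `devstack` and `devstack-admin`). It can be exercised in isolation with plain dicts (a sketch of ours; the real script also round-trips the YAML file):

```python
def update_cloud(clouds, name, data):
    # Same one-liner as UpdateCloudsYaml._update_clouds: ensure the
    # top-level 'clouds' mapping exists, then upsert the named entry.
    clouds.setdefault("clouds", {})[name] = data
    return clouds

doc = {}
update_cloud(doc, "devstack", {"auth": {"username": "demo"}})
update_cloud(doc, "devstack-admin", {"auth": {"username": "admin"}})
# Re-running for an existing name replaces only that entry.
update_cloud(doc, "devstack", {"auth": {"username": "demo2"}})
print(sorted(doc["clouds"]))
```

Because only the named entry is replaced, any clouds a user already has in `~/.config/openstack/clouds.yaml` survive repeated DevStack runs.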
diff --git a/tools/worlddump.py b/tools/worlddump.py
index d846f10..628a69f 100755
--- a/tools/worlddump.py
+++ b/tools/worlddump.py
@@ -23,6 +23,7 @@
import os.path
import sys
+from subprocess import Popen
def get_options():
parser = argparse.ArgumentParser(
@@ -46,7 +47,7 @@
print cmd
print "-" * len(cmd)
print
- print os.popen(cmd).read()
+ Popen(cmd, shell=True)
def _header(name):
@@ -106,6 +107,12 @@
_dump_cmd("sudo cat %s" % fullpath)
+def guru_meditation_report():
+ _header("nova-compute Guru Meditation Report")
+ _dump_cmd("kill -s USR1 `pgrep nova-compute`")
+ print "guru meditation report in nova-compute log"
+
+
def main():
opts = get_options()
fname = filename(opts.dir)
@@ -118,6 +125,7 @@
network_dump()
iptables_dump()
compute_consoles()
+ guru_meditation_report()
if __name__ == '__main__':
diff --git a/tools/xen/xenrc b/tools/xen/xenrc
index 43a6ce8..be6c5ca 100644
--- a/tools/xen/xenrc
+++ b/tools/xen/xenrc
@@ -14,12 +14,12 @@
# Size of image
VDI_MB=${VDI_MB:-5000}
-# Devstack now contains many components. 3GB ram is not enough to prevent
+# Devstack now contains many components. 4GB ram is not enough to prevent
# swapping and memory fragmentation - the latter of which can cause failures
# such as blkfront failing to plug a VBD and lead to random test fails.
#
-# Set to 4GB so an 8GB XenServer VM can have a 1GB Dom0 and leave 3GB for VMs
-OSDOMU_MEM_MB=4096
+# Set to 6GB so an 8GB XenServer VM can have a 1GB Dom0 and leave 1GB for VMs
+OSDOMU_MEM_MB=6144
OSDOMU_VDI_GB=8
# Network mapping. Specify bridge names or network names. Network names may
diff --git a/tox.ini b/tox.ini
index e3d19ce..788fea9 100644
--- a/tox.ini
+++ b/tox.ini
@@ -33,6 +33,10 @@
sphinx>=1.1.2,<1.2
pbr>=0.6,!=0.7,<1.0
oslosphinx
+ nwdiag
+ blockdiag
+ sphinxcontrib-blockdiag
+ sphinxcontrib-nwdiag
whitelist_externals = bash
setenv =
TOP_DIR={toxinidir}
diff --git a/unstack.sh b/unstack.sh
index f0da971..10e5958 100755
--- a/unstack.sh
+++ b/unstack.sh
@@ -187,5 +187,10 @@
fi
# BUG: maybe it doesn't exist? We should isolate this further down.
-clean_lvm_volume_group $DEFAULT_VOLUME_GROUP_NAME || /bin/true
-clean_lvm_filter
+# NOTE: Cinder automatically installs the lvm2 package, independently of the
+# enabled backends. So if Cinder is enabled, we are sure lvm (lvremove,
+# /etc/lvm/lvm.conf, etc.) is here.
+if is_service_enabled cinder; then
+ clean_lvm_volume_group $DEFAULT_VOLUME_GROUP_NAME || /bin/true
+ clean_lvm_filter
+fi