Merge "Add postgresql dev package as testonly neutron dep"
diff --git a/FUTURE.rst b/FUTURE.rst
new file mode 100644
index 0000000..11bea30
--- /dev/null
+++ b/FUTURE.rst
@@ -0,0 +1,113 @@
+=============
+ Quo Vadimus
+=============
+
+Where are we going?
+
+This is a document in Devstack to outline where we are headed in the
+future. The future might be near or far, but this is where we'd like
+to be.
+
+This is intended to help people contribute, because it makes it a
+little clearer whether a contribution takes us closer to or further
+away from our end game.
+
+==================
+ Default Services
+==================
+
+Devstack is designed as a development environment first. There are a
+lot of ways to compose the OpenStack services, but we do need one
+default.
+
+That should be the Compute Layer (currently Glance + Nova + Cinder +
+Neutron Core (not advanced services) + Keystone). It should be the
+base building block going forward, and the introduction point of
+people to OpenStack via Devstack.
+
+================
+ Service Howtos
+================
+
+Starting from the base building block, all services included in
+OpenStack should have an overview page in the Devstack
+documentation. That page should include the following:
+
+- A helpful high level overview of that service
+- What it depends on (both other OpenStack services and other system
+ components)
+- What new daemons need to be started, including where they
+ should live
+
+This provides a map for people doing multinode testing to understand
+which portions are control plane and which should live on worker nodes.
+
+Service howto pages will start with an ugly "This team has provided
+no information about this service" placeholder until someone fills it in.
+
+===================
+ Included Services
+===================
+
+Devstack doesn't need to eat the world. Given the existence of the
+external devstack plugin architecture, the future direction is to move
+the bulk of the support code out of devstack itself and into external
+plugins.
+
+This will also promote a cleaner separation between services.
+
+=============================
+ Included Backends / Drivers
+=============================
+
+Upstream Devstack should only include Open Source backends / drivers;
+its intent is to support Open Source development of OpenStack. Proprietary
+drivers should be supported via external plugins.
+
+Just being Open Source doesn't mean it should be in upstream Devstack
+if it's not required for base development of OpenStack
+components. When in doubt, external plugins should be used.
+
+========================================
+ OpenStack Services vs. System Services
+========================================
+
+ENABLED_SERVICES is currently entirely too overloaded. We should have
+a separation of actual OpenStack services that you have to run (n-cpu,
+g-api) and required backends like mysql and rabbitmq.
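A sketch of why the overload matters: everything in the list is matched the same way. The simplified matcher below mimics the whole-entry test DevStack applies to the comma-separated list (the real ``is_service_enabled`` in ``functions-common`` also understands prefixes like ``n-`` for nova, so ``service_listed`` here is a hypothetical stand-in):

```shell
# Hypothetical simplified version of DevStack's service matching;
# the real is_service_enabled also handles service-name prefixes.
ENABLED_SERVICES="key,g-api,n-cpu,mysql,rabbit"

function service_listed {
    # Wrap both sides in commas so "sql" cannot match inside "mysql"
    [[ ,${ENABLED_SERVICES}, =~ ,$1, ]]
}

service_listed n-cpu && echo "n-cpu enabled"    # an OpenStack service
service_listed mysql && echo "mysql enabled"    # a backend, in the same list
```

OpenStack daemons and system backends like mysql and rabbitmq currently live side by side in that one variable, which is the overload this section wants to untangle.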
+
+===========================
+ Splitting up of Functions
+===========================
+
+The functions-common file has grown over time, and needs to be split
+up into smaller libraries that handle specific domains.
+
+======================
+ Testing of Functions
+======================
+
+Every function in a functions file should get tests. The devstack
+testing framework is young, but we do have some unit tests for the
+tree, and those should be enhanced.
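As an illustration of the pattern, here is a self-contained test in the same spirit as the ones under ``tests/`` in the tree (both the ``trim`` helper and ``assert_equal`` are hypothetical names, not existing devstack functions):

```shell
# A tiny function and an assertion helper demonstrating the kind of
# unit test every functions file should carry.
function trim {
    local var="$*"
    var="${var#"${var%%[![:space:]]*}"}"   # drop leading whitespace
    var="${var%"${var##*[![:space:]]}"}"   # drop trailing whitespace
    echo -n "$var"
}

function assert_equal {
    if [[ "$1" != "$2" ]]; then
        echo "FAIL: '$1' != '$2'"
        exit 1
    fi
}

assert_equal "$(trim '  abc  ')" "abc"
assert_equal "$(trim 'no-op')" "no-op"
echo "all tests passed"
```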
+
+==============================
+ Not Co-Gating with the World
+==============================
+
+As projects spin up functional test jobs, Devstack should not be
+co-gated with every single one of those. The Devstack team has one of
+the fastest turnarounds for blocking bugs of any OpenStack
+project.
+
+Basic service validation should be included as part of Devstack
+installation to mitigate this.
+
+============================
+ Documenting all the things
+============================
+
+Devstack started off as an explanation as much as an install
+script. We would love contributions that further enhance the
+comments and explanations about what is happening, even if it seems a
+little pedantic at times.
diff --git a/HACKING.rst b/HACKING.rst
index dcde141..b3c82a3 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -6,7 +6,7 @@
-------
DevStack is written in UNIX shell script. It uses a number of bash-isms
-and so is limited to Bash (version 3 and up) and compatible shells.
+and so is limited to Bash (version 4 and up) and compatible shells.
Shell script was chosen because it best illustrates the steps used to
set up and interact with OpenStack components.
diff --git a/doc/source/faq.rst b/doc/source/faq.rst
index fd9c736..a449f49 100644
--- a/doc/source/faq.rst
+++ b/doc/source/faq.rst
@@ -70,6 +70,18 @@
Q: Are there any differences between Ubuntu and Fedora support?
A: Neutron is not fully supported prior to Fedora 18 due lack of
OpenVSwitch packages.
+Q: Why can't I use another shell?
+ A: DevStack now uses some specific bash-isms that require Bash 4,
+ such as associative arrays. Simple compatibility patches have been
+ accepted in the past when they were not complex, but at this point no
+ additional compatibility patches will be considered except for shells
+ that match the array functionality, as it is very ingrained in the
+ repo and project management.
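For the curious, here is a minimal sketch of the Bash 4 feature in question; ``GITDIR`` is one of the associative arrays the tree actually relies on:

```shell
# declare -A is Bash 4 syntax; Bash 3 (the OS/X default) rejects it,
# which is why no compatibility patch can paper over this.
declare -A GITDIR
GITDIR["oslo.config"]=/opt/stack/oslo.config
GITDIR["python-novaclient"]=/opt/stack/python-novaclient

echo "${GITDIR[oslo.config]}"    # /opt/stack/oslo.config
```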
+Q: But, but, can't I test on OS/X?
+ A: Yes, even you, core developer who complained about this, need to
+ install bash 4 via homebrew to keep running tests on OS/X. Get a Real
+ Operating System. (For most of you who don't know, I am referring to
+ myself.)
Operation and Configuration
===========================
diff --git a/doc/source/guides/devstack-with-nested-kvm.rst b/doc/source/guides/devstack-with-nested-kvm.rst
new file mode 100644
index 0000000..2538c8d
--- /dev/null
+++ b/doc/source/guides/devstack-with-nested-kvm.rst
@@ -0,0 +1,139 @@
+=======================================================
+Configure DevStack with KVM-based Nested Virtualization
+=======================================================
+
+When using virtualization technologies like KVM, one can take advantage
+of "Nested VMX" (i.e. the ability to run KVM on KVM) so that the VMs in
+the cloud (Nova guests) run noticeably faster than with plain QEMU
+emulation.
+
+Kernels shipped with Linux distributions don't have this enabled by
+default. This guide outlines the configuration details to enable nested
+virtualization in KVM-based environments, and how to set up DevStack
+(which will run in a VM) to take advantage of it.
+
+
+Nested Virtualization Configuration
+===================================
+
+Configure Nested KVM for Intel-based Machines
+---------------------------------------------
+
+Procedure to enable nested KVM virtualization on Intel-based machines.
+
+Check if the nested KVM Kernel parameter is enabled:
+
+::
+
+ cat /sys/module/kvm_intel/parameters/nested
+ N
+
+Temporarily remove the KVM Intel kernel module, enable nested
+virtualization so that it persists across reboots, and add the kernel
+module back:
+
+::
+
+ sudo rmmod kvm-intel
+ sudo sh -c "echo 'options kvm-intel nested=y' >> /etc/modprobe.d/dist.conf"
+ sudo modprobe kvm-intel
+
+Ensure the Nested KVM Kernel module parameter for Intel is enabled on
+the host:
+
+::
+
+ cat /sys/module/kvm_intel/parameters/nested
+ Y
+
+ modinfo kvm_intel | grep nested
+ parm: nested:bool
+
+Start your VM; it should now have KVM capabilities -- you can verify
+this by ensuring the `/dev/kvm` character device is present.
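That verification can be scripted from inside the guest; a small sketch (the device-node check is the same one DevStack itself relies on):

```shell
# /dev/kvm is a character special device, so test with -c; it only
# appears when the kvm modules are loaded and the CPU virtualization
# extensions are visible to the guest.
if [ -c /dev/kvm ]; then
    echo "KVM acceleration available"
else
    echo "no /dev/kvm; plain QEMU emulation will be used"
fi
```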
+
+
+Configure Nested KVM for AMD-based Machines
+--------------------------------------------
+
+Procedure to enable nested KVM virtualization on AMD-based machines.
+
+Check if the nested KVM Kernel parameter is enabled:
+
+::
+
+ cat /sys/module/kvm_amd/parameters/nested
+ 0
+
+
+Temporarily remove the KVM AMD kernel module, enable nested
+virtualization so that it persists across reboots, and add the kernel
+module back:
+
+::
+
+ sudo rmmod kvm-amd
+ sudo sh -c "echo 'options kvm-amd nested=1' >> /etc/modprobe.d/dist.conf"
+ sudo modprobe kvm-amd
+
+Ensure the Nested KVM Kernel module parameter for AMD is enabled on the
+host:
+
+::
+
+ cat /sys/module/kvm_amd/parameters/nested
+ 1
+
+ modinfo kvm_amd | grep -i nested
+ parm: nested:int
+
+The ``modprobe.d`` entry added above keeps the value persistent across
+reboots; confirm /etc/modprobe.d/dist.conf looks as below::
+
+ cat /etc/modprobe.d/dist.conf
+ options kvm-amd nested=1
+
+
+Expose Virtualization Extensions to DevStack VM
+-----------------------------------------------
+
+Edit the VM's libvirt XML configuration via `virsh` utility:
+
+::
+
+ sudo virsh edit devstack-vm
+
+Add the below snippet to expose the host CPU features to the VM:
+
+::
+
+ <cpu mode='host-passthrough'>
+ </cpu>
+
+
+Ensure DevStack VM is Using KVM
+-------------------------------
+
+Before invoking ``stack.sh`` in the VM, ensure that KVM is enabled. This
+can be verified by checking for the presence of the file `/dev/kvm` in
+your VM. If it is present, DevStack will default to using the config
+attribute `virt_type = kvm` in `/etc/nova/nova.conf`; otherwise, it'll
+fall back to `virt_type=qemu`, i.e. plain QEMU emulation.
+
+Optionally, to explicitly tell Nova's libvirt driver to use KVM, the
+below config attribute can be set in DevStack's ``local.conf``:
+
+::
+
+ LIBVIRT_TYPE=kvm
+
+
+Once DevStack is configured successfully, verify that Nova instances
+are using KVM by checking that the QEMU command line invoked by Nova
+includes the parameter `accel=kvm`, e.g.:
+
+::
+
+ ps -ef | grep -i qemu
+ root 29773 1 0 11:24 ? 00:00:00 /usr/bin/qemu-system-x86_64 -machine accel=kvm [. . .]
diff --git a/doc/source/guides/single-machine.rst b/doc/source/guides/single-machine.rst
index 17e9b9e..70287a9 100644
--- a/doc/source/guides/single-machine.rst
+++ b/doc/source/guides/single-machine.rst
@@ -108,6 +108,7 @@
MYSQL_PASSWORD=iheartdatabases
RABBIT_PASSWORD=flopsymopsy
SERVICE_PASSWORD=iheartksl
+ SERVICE_TOKEN=xyzpdqlazydog
Run DevStack:
diff --git a/doc/source/index.rst b/doc/source/index.rst
index 0763fb8..0790d1e 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -66,6 +66,7 @@
guides/single-machine
guides/multinode-lab
guides/neutron
+ guides/devstack-with-nested-kvm
All-In-One Single VM
--------------------
@@ -94,6 +95,13 @@
This guide is meant for building lab environments with a dedicated
control node and multiple compute nodes.
+DevStack with KVM-based Nested Virtualization
+---------------------------------------------
+
+Procedure to setup :doc:`DevStack with KVM-based Nested Virtualization
+<guides/devstack-with-nested-kvm>`. With this setup, Nova instances
+will be more performant than with plain QEMU emulation.
+
DevStack Documentation
======================
diff --git a/doc/source/plugins.rst b/doc/source/plugins.rst
index d1f7377..8bb92ed 100644
--- a/doc/source/plugins.rst
+++ b/doc/source/plugins.rst
@@ -16,7 +16,7 @@
The script in ``extras.d`` is expected to be mostly a dispatcher to
functions in a ``lib/*`` script. The scripts are named with a
zero-padded two digits sequence number prefix to control the order that
-the scripts are called, and with a suffix of ``.sh``. DevSack reserves
+the scripts are called, and with a suffix of ``.sh``. DevStack reserves
for itself the sequence numbers 00 through 09 and 90 through 99.
Below is a template that shows handlers for the possible command-line
diff --git a/extras.d/70-tuskar.sh b/extras.d/70-tuskar.sh
index 6e26db2..aa8f46a 100644
--- a/extras.d/70-tuskar.sh
+++ b/extras.d/70-tuskar.sh
@@ -176,13 +176,8 @@
# create_tuskar_accounts() - Set up common required tuskar accounts
function create_tuskar_accounts {
- # migrated from files/keystone_data.sh
- local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
- local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
- local tuskar_user=$(get_or_create_user "tuskar" \
- "$SERVICE_PASSWORD" $service_tenant)
- get_or_add_user_role $admin_role $tuskar_user $service_tenant
+ create_service_user "tuskar" "admin"
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
diff --git a/functions b/functions
index 5b3a8ea..2f976cf 100644
--- a/functions
+++ b/functions
@@ -13,6 +13,7 @@
# Include the common functions
FUNC_DIR=$(cd $(dirname "${BASH_SOURCE:-$0}") && pwd)
source ${FUNC_DIR}/functions-common
+source ${FUNC_DIR}/inc/python
# Save trace setting
XTRACE=$(set +o | grep xtrace)
diff --git a/functions-common b/functions-common
index b92fa55..d3b3c0c 100644
--- a/functions-common
+++ b/functions-common
@@ -15,7 +15,6 @@
# - OpenStack Functions
# - Package Functions
# - Process Functions
-# - Python Functions
# - Service Functions
# - System Functions
#
@@ -860,17 +859,17 @@
}
# Gets or creates user
-# Usage: get_or_create_user <username> <password> <project> [<email> [<domain>]]
+# Usage: get_or_create_user <username> <password> [<email> [<domain>]]
function get_or_create_user {
- if [[ ! -z "$4" ]]; then
- local email="--email=$4"
+ if [[ ! -z "$3" ]]; then
+ local email="--email=$3"
else
local email=""
fi
local os_cmd="openstack"
local domain=""
- if [[ ! -z "$5" ]]; then
- domain="--domain=$5"
+ if [[ ! -z "$4" ]]; then
+ domain="--domain=$4"
os_cmd="$os_cmd --os-url=$KEYSTONE_SERVICE_URI_V3 --os-identity-api-version=3"
fi
# Gets user id
@@ -879,7 +878,6 @@
$os_cmd user create \
$1 \
--password "$2" \
- --project $3 \
$email \
$domain \
--or-show \
@@ -1208,7 +1206,7 @@
if is_ubuntu; then
apt_get purge "$@"
elif is_fedora; then
- sudo $YUM remove -y "$@" ||:
+ sudo ${YUM:-yum} remove -y "$@" ||:
elif is_suse; then
sudo zypper rm "$@"
else
@@ -1229,7 +1227,7 @@
# https://bugzilla.redhat.com/show_bug.cgi?id=965567
$sudo http_proxy=$http_proxy https_proxy=$https_proxy \
no_proxy=$no_proxy \
- $YUM install -y "$@" 2>&1 | \
+ ${YUM:-yum} install -y "$@" 2>&1 | \
awk '
BEGIN { fail=0 }
/No package/ { fail=1 }
@@ -1239,7 +1237,7 @@
# also ensure we catch a yum failure
if [[ ${PIPESTATUS[0]} != 0 ]]; then
- die $LINENO "$YUM install failure"
+ die $LINENO "${YUM:-yum} install failure"
fi
}
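The ``${YUM:-yum}`` spelling above is ordinary Bash default expansion, which lets platforms that want a different package manager simply export ``YUM=dnf`` before these functions run; a quick illustration:

```shell
# If YUM is unset or empty, the literal fallback "yum" is used;
# otherwise the exported value takes precedence.
unset YUM
echo "${YUM:-yum}"    # prints: yum

YUM=dnf
echo "${YUM:-yum}"    # prints: dnf
```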
@@ -1590,204 +1588,6 @@
}
-# Python Functions
-# ================
-
-# Get the path to the pip command.
-# get_pip_command
-function get_pip_command {
- which pip || which pip-python
-
- if [ $? -ne 0 ]; then
- die $LINENO "Unable to find pip; cannot continue"
- fi
-}
-
-# Get the path to the direcotry where python executables are installed.
-# get_python_exec_prefix
-function get_python_exec_prefix {
- if is_fedora || is_suse; then
- echo "/usr/bin"
- else
- echo "/usr/local/bin"
- fi
-}
-
-# Wrapper for ``pip install`` to set cache and proxy environment variables
-# Uses globals ``OFFLINE``, ``TRACK_DEPENDS``, ``*_proxy``
-# pip_install package [package ...]
-function pip_install {
- local xtrace=$(set +o | grep xtrace)
- set +o xtrace
- local offline=${OFFLINE:-False}
- if [[ "$offline" == "True" || -z "$@" ]]; then
- $xtrace
- return
- fi
-
- if [[ -z "$os_PACKAGE" ]]; then
- GetOSVersion
- fi
- if [[ $TRACK_DEPENDS = True && ! "$@" =~ virtualenv ]]; then
- # TRACK_DEPENDS=True installation creates a circular dependency when
- # we attempt to install virtualenv into a virualenv, so we must global
- # that installation.
- source $DEST/.venv/bin/activate
- local cmd_pip=$DEST/.venv/bin/pip
- local sudo_pip="env"
- else
- local cmd_pip=$(get_pip_command)
- local sudo_pip="sudo -H"
- fi
-
- local pip_version=$(python -c "import pip; \
- print(pip.__version__.strip('.')[0])")
- if (( pip_version<6 )); then
- die $LINENO "Currently installed pip version ${pip_version} does not" \
- "meet minimum requirements (>=6)."
- fi
-
- $xtrace
- $sudo_pip \
- http_proxy=${http_proxy:-} \
- https_proxy=${https_proxy:-} \
- no_proxy=${no_proxy:-} \
- $cmd_pip install \
- $@
-
- INSTALL_TESTONLY_PACKAGES=$(trueorfalse False INSTALL_TESTONLY_PACKAGES)
- if [[ "$INSTALL_TESTONLY_PACKAGES" == "True" ]]; then
- local test_req="$@/test-requirements.txt"
- if [[ -e "$test_req" ]]; then
- $sudo_pip \
- http_proxy=${http_proxy:-} \
- https_proxy=${https_proxy:-} \
- no_proxy=${no_proxy:-} \
- $cmd_pip install \
- -r $test_req
- fi
- fi
-}
-
-# should we use this library from their git repo, or should we let it
-# get pulled in via pip dependencies.
-function use_library_from_git {
- local name=$1
- local enabled=1
- [[ ,${LIBS_FROM_GIT}, =~ ,${name}, ]] && enabled=0
- return $enabled
-}
-
-# setup a library by name. If we are trying to use the library from
-# git, we'll do a git based install, otherwise we'll punt and the
-# library should be installed by a requirements pull from another
-# project.
-function setup_lib {
- local name=$1
- local dir=${GITDIR[$name]}
- setup_install $dir
-}
-
-# setup a library by name in editiable mode. If we are trying to use
-# the library from git, we'll do a git based install, otherwise we'll
-# punt and the library should be installed by a requirements pull from
-# another project.
-#
-# use this for non namespaced libraries
-function setup_dev_lib {
- local name=$1
- local dir=${GITDIR[$name]}
- setup_develop $dir
-}
-
-# this should be used if you want to install globally, all libraries should
-# use this, especially *oslo* ones
-function setup_install {
- local project_dir=$1
- setup_package_with_req_sync $project_dir
-}
-
-# this should be used for projects which run services, like all services
-function setup_develop {
- local project_dir=$1
- setup_package_with_req_sync $project_dir -e
-}
-
-# determine if a project as specified by directory is in
-# projects.txt. This will not be an exact match because we throw away
-# the namespacing when we clone, but it should be good enough in all
-# practical ways.
-function is_in_projects_txt {
- local project_dir=$1
- local project_name=$(basename $project_dir)
- return grep "/$project_name\$" $REQUIREMENTS_DIR/projects.txt >/dev/null
-}
-
-# ``pip install -e`` the package, which processes the dependencies
-# using pip before running `setup.py develop`
-#
-# Updates the dependencies in project_dir from the
-# openstack/requirements global list before installing anything.
-#
-# Uses globals ``TRACK_DEPENDS``, ``REQUIREMENTS_DIR``, ``UNDO_REQUIREMENTS``
-# setup_develop directory
-function setup_package_with_req_sync {
- local project_dir=$1
- local flags=$2
-
- # Don't update repo if local changes exist
- # Don't use buggy "git diff --quiet"
- # ``errexit`` requires us to trap the exit code when the repo is changed
- local update_requirements=$(cd $project_dir && git diff --exit-code >/dev/null || echo "changed")
-
- if [[ $update_requirements != "changed" ]]; then
- if [[ "$REQUIREMENTS_MODE" == "soft" ]]; then
- if is_in_projects_txt $project_dir; then
- (cd $REQUIREMENTS_DIR; \
- python update.py $project_dir)
- else
- # soft update projects not found in requirements project.txt
- (cd $REQUIREMENTS_DIR; \
- python update.py -s $project_dir)
- fi
- else
- (cd $REQUIREMENTS_DIR; \
- python update.py $project_dir)
- fi
- fi
-
- setup_package $project_dir $flags
-
- # We've just gone and possibly modified the user's source tree in an
- # automated way, which is considered bad form if it's a development
- # tree because we've screwed up their next git checkin. So undo it.
- #
- # However... there are some circumstances, like running in the gate
- # where we really really want the overridden version to stick. So provide
- # a variable that tells us whether or not we should UNDO the requirements
- # changes (this will be set to False in the OpenStack ci gate)
- if [ $UNDO_REQUIREMENTS = "True" ]; then
- if [[ $update_requirements != "changed" ]]; then
- (cd $project_dir && git reset --hard)
- fi
- fi
-}
-
-# ``pip install -e`` the package, which processes the dependencies
-# using pip before running `setup.py develop`
-# Uses globals ``STACK_USER``
-# setup_develop_no_requirements_update directory
-function setup_package {
- local project_dir=$1
- local flags=$2
-
- pip_install $flags $project_dir
- # ensure that further actions can do things like setup.py sdist
- if [[ "$flags" == "-e" ]]; then
- safe_chown -R $STACK_USER $1/*.egg-info
- fi
-}
-
# Plugin Functions
# =================
diff --git a/inc/python b/inc/python
new file mode 100644
index 0000000..0348cb3
--- /dev/null
+++ b/inc/python
@@ -0,0 +1,223 @@
+#!/bin/bash
+#
+# **inc/python** - Python-related functions
+#
+# Support for pip/setuptools interfaces and virtual environments
+#
+# External functions used:
+# - GetOSVersion
+# - is_fedora
+# - is_suse
+# - safe_chown
+
+# Save trace setting
+INC_PY_TRACE=$(set +o | grep xtrace)
+set +o xtrace
+
+
+# Python Functions
+# ================
+
+# Get the path to the pip command.
+# get_pip_command
+function get_pip_command {
+ which pip || which pip-python
+
+ if [ $? -ne 0 ]; then
+ die $LINENO "Unable to find pip; cannot continue"
+ fi
+}
+
+# Get the path to the directory where python executables are installed.
+# get_python_exec_prefix
+function get_python_exec_prefix {
+ if is_fedora || is_suse; then
+ echo "/usr/bin"
+ else
+ echo "/usr/local/bin"
+ fi
+}
+
+# Wrapper for ``pip install`` to set cache and proxy environment variables
+# Uses globals ``INSTALL_TESTONLY_PACKAGES``, ``OFFLINE``, ``TRACK_DEPENDS``,
+# ``*_proxy``
+# pip_install package [package ...]
+function pip_install {
+ local xtrace=$(set +o | grep xtrace)
+ set +o xtrace
+ local offline=${OFFLINE:-False}
+ if [[ "$offline" == "True" || -z "$@" ]]; then
+ $xtrace
+ return
+ fi
+
+ if [[ -z "$os_PACKAGE" ]]; then
+ GetOSVersion
+ fi
+ if [[ $TRACK_DEPENDS = True && ! "$@" =~ virtualenv ]]; then
+        # TRACK_DEPENDS=True installation creates a circular dependency when
+        # we attempt to install virtualenv into a virtualenv, so we must
+        # install it globally.
+ source $DEST/.venv/bin/activate
+ local cmd_pip=$DEST/.venv/bin/pip
+ local sudo_pip="env"
+ else
+ local cmd_pip=$(get_pip_command)
+ local sudo_pip="sudo -H"
+ fi
+
+    local pip_version=$(python -c "import pip; \
+        print(pip.__version__.split('.')[0])")
+ if (( pip_version<6 )); then
+ die $LINENO "Currently installed pip version ${pip_version} does not" \
+ "meet minimum requirements (>=6)."
+ fi
+
+ $xtrace
+ $sudo_pip \
+ http_proxy=${http_proxy:-} \
+ https_proxy=${https_proxy:-} \
+ no_proxy=${no_proxy:-} \
+ $cmd_pip install \
+ $@
+
+ INSTALL_TESTONLY_PACKAGES=$(trueorfalse False INSTALL_TESTONLY_PACKAGES)
+ if [[ "$INSTALL_TESTONLY_PACKAGES" == "True" ]]; then
+ local test_req="$@/test-requirements.txt"
+ if [[ -e "$test_req" ]]; then
+ $sudo_pip \
+ http_proxy=${http_proxy:-} \
+ https_proxy=${https_proxy:-} \
+ no_proxy=${no_proxy:-} \
+ $cmd_pip install \
+ -r $test_req
+ fi
+ fi
+}
+
+# should we use this library from its git repo, or should we let it
+# get pulled in via pip dependencies.
+function use_library_from_git {
+ local name=$1
+ local enabled=1
+ [[ ,${LIBS_FROM_GIT}, =~ ,${name}, ]] && enabled=0
+ return $enabled
+}
+
+# setup a library by name. If we are trying to use the library from
+# git, we'll do a git based install, otherwise we'll punt and the
+# library should be installed by a requirements pull from another
+# project.
+function setup_lib {
+ local name=$1
+ local dir=${GITDIR[$name]}
+ setup_install $dir
+}
+
+# setup a library by name in editable mode. If we are trying to use
+# the library from git, we'll do a git based install, otherwise we'll
+# punt and the library should be installed by a requirements pull from
+# another project.
+#
+# use this for non namespaced libraries
+function setup_dev_lib {
+ local name=$1
+ local dir=${GITDIR[$name]}
+ setup_develop $dir
+}
+
+# this should be used if you want to install globally, all libraries should
+# use this, especially *oslo* ones
+function setup_install {
+ local project_dir=$1
+ setup_package_with_req_sync $project_dir
+}
+
+# this should be used for projects which run services, like all services
+function setup_develop {
+ local project_dir=$1
+ setup_package_with_req_sync $project_dir -e
+}
+
+# determine if a project as specified by directory is in
+# projects.txt. This will not be an exact match because we throw away
+# the namespacing when we clone, but it should be good enough in all
+# practical ways.
+function is_in_projects_txt {
+ local project_dir=$1
+ local project_name=$(basename $project_dir)
+    grep -q "/$project_name\$" $REQUIREMENTS_DIR/projects.txt
+}
+
+# ``pip install -e`` the package, which processes the dependencies
+# using pip before running `setup.py develop`
+#
+# Updates the dependencies in project_dir from the
+# openstack/requirements global list before installing anything.
+#
+# Uses globals ``TRACK_DEPENDS``, ``REQUIREMENTS_DIR``, ``UNDO_REQUIREMENTS``
+# setup_develop directory
+function setup_package_with_req_sync {
+ local project_dir=$1
+ local flags=$2
+
+ # Don't update repo if local changes exist
+ # Don't use buggy "git diff --quiet"
+ # ``errexit`` requires us to trap the exit code when the repo is changed
+ local update_requirements=$(cd $project_dir && git diff --exit-code >/dev/null || echo "changed")
+
+ if [[ $update_requirements != "changed" ]]; then
+ if [[ "$REQUIREMENTS_MODE" == "soft" ]]; then
+ if is_in_projects_txt $project_dir; then
+ (cd $REQUIREMENTS_DIR; \
+ python update.py $project_dir)
+ else
+ # soft update projects not found in requirements project.txt
+ (cd $REQUIREMENTS_DIR; \
+ python update.py -s $project_dir)
+ fi
+ else
+ (cd $REQUIREMENTS_DIR; \
+ python update.py $project_dir)
+ fi
+ fi
+
+ setup_package $project_dir $flags
+
+ # We've just gone and possibly modified the user's source tree in an
+ # automated way, which is considered bad form if it's a development
+ # tree because we've screwed up their next git checkin. So undo it.
+ #
+ # However... there are some circumstances, like running in the gate
+ # where we really really want the overridden version to stick. So provide
+ # a variable that tells us whether or not we should UNDO the requirements
+ # changes (this will be set to False in the OpenStack ci gate)
+ if [ $UNDO_REQUIREMENTS = "True" ]; then
+ if [[ $update_requirements != "changed" ]]; then
+ (cd $project_dir && git reset --hard)
+ fi
+ fi
+}
+
+# ``pip install -e`` the package, which processes the dependencies
+# using pip before running `setup.py develop`
+# Uses globals ``STACK_USER``
+# setup_develop_no_requirements_update directory
+function setup_package {
+ local project_dir=$1
+ local flags=$2
+
+ pip_install $flags $project_dir
+ # ensure that further actions can do things like setup.py sdist
+ if [[ "$flags" == "-e" ]]; then
+ safe_chown -R $STACK_USER $1/*.egg-info
+ fi
+}
+
+
+# Restore xtrace
+$INC_PY_TRACE
+
+# Local variables:
+# mode: shell-script
+# End:
diff --git a/lib/ceilometer b/lib/ceilometer
index 5d5b987..f03bab2 100644
--- a/lib/ceilometer
+++ b/lib/ceilometer
@@ -105,14 +105,10 @@
# SERVICE_TENANT_NAME ceilometer ResellerAdmin (if Swift is enabled)
function create_ceilometer_accounts {
- local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
- local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
-
# Ceilometer
if [[ "$ENABLED_SERVICES" =~ "ceilometer-api" ]]; then
- local ceilometer_user=$(get_or_create_user "ceilometer" \
- "$SERVICE_PASSWORD" $service_tenant)
- get_or_add_user_role $admin_role $ceilometer_user $service_tenant
+
+ create_service_user "ceilometer" "admin"
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
local ceilometer_service=$(get_or_create_service "ceilometer" \
@@ -190,6 +186,7 @@
iniset $CEILOMETER_CONF DEFAULT policy_file $CEILOMETER_CONF_DIR/policy.json
cp $CEILOMETER_DIR/etc/ceilometer/pipeline.yaml $CEILOMETER_CONF_DIR
+ cp $CEILOMETER_DIR/etc/ceilometer/event_pipeline.yaml $CEILOMETER_CONF_DIR
cp $CEILOMETER_DIR/etc/ceilometer/api_paste.ini $CEILOMETER_CONF_DIR
cp $CEILOMETER_DIR/etc/ceilometer/event_definitions.yaml $CEILOMETER_CONF_DIR
diff --git a/lib/ceph b/lib/ceph
index 77b5726..a6b8cc8 100644
--- a/lib/ceph
+++ b/lib/ceph
@@ -142,8 +142,8 @@
}
function cleanup_ceph_embedded {
- sudo pkill -f ceph-mon
- sudo pkill -f ceph-osd
+ sudo killall -w -9 ceph-mon
+ sudo killall -w -9 ceph-osd
sudo rm -rf ${CEPH_DATA_DIR}/*/*
if egrep -q ${CEPH_DATA_DIR} /proc/mounts; then
sudo umount ${CEPH_DATA_DIR}
diff --git a/lib/cinder b/lib/cinder
index 08f5874..12ba51e 100644
--- a/lib/cinder
+++ b/lib/cinder
@@ -330,15 +330,10 @@
# Migrated from keystone_data.sh
function create_cinder_accounts {
- local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
- local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
-
# Cinder
if [[ "$ENABLED_SERVICES" =~ "c-api" ]]; then
- local cinder_user=$(get_or_create_user "cinder" \
- "$SERVICE_PASSWORD" $service_tenant)
- get_or_add_user_role $admin_role $cinder_user $service_tenant
+ create_service_user "cinder" "admin"
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
@@ -454,10 +449,7 @@
_configure_tgt_for_config_d
if is_ubuntu; then
sudo service tgt restart
- elif is_fedora; then
- # bypass redirection to systemctl during restart
- sudo /sbin/service --skip-redirect tgtd restart
- elif is_suse; then
+ elif is_fedora || is_suse; then
restart_service tgtd
else
# note for other distros: unstack.sh also uses the tgt/tgtd service
diff --git a/lib/glance b/lib/glance
index 8768761..0340c21 100644
--- a/lib/glance
+++ b/lib/glance
@@ -232,15 +232,13 @@
function create_glance_accounts {
if is_service_enabled g-api; then
- local glance_user=$(get_or_create_user "glance" \
- "$SERVICE_PASSWORD" $SERVICE_TENANT_NAME)
- get_or_add_user_role service $glance_user $SERVICE_TENANT_NAME
+ create_service_user "glance"
# required for swift access
if is_service_enabled s-proxy; then
local glance_swift_user=$(get_or_create_user "glance-swift" \
- "$SERVICE_PASSWORD" $SERVICE_TENANT_NAME "glance-swift@example.com")
+ "$SERVICE_PASSWORD" "glance-swift@example.com")
get_or_add_user_role "ResellerAdmin" $glance_swift_user $SERVICE_TENANT_NAME
fi
diff --git a/lib/heat b/lib/heat
index bbef08c..c102163 100644
--- a/lib/heat
+++ b/lib/heat
@@ -134,10 +134,6 @@
iniset $HEAT_CONF keystone_authtoken cafile $SSL_BUNDLE_FILE
iniset $HEAT_CONF keystone_authtoken signing_dir $HEAT_AUTH_CACHE_DIR
- if is_ssl_enabled_service "key"; then
- iniset $HEAT_CONF clients_keystone ca_file $SSL_BUNDLE_FILE
- fi
-
# ec2authtoken
iniset $HEAT_CONF ec2authtoken auth_uri $KEYSTONE_SERVICE_URI/v2.0
@@ -246,13 +242,7 @@
# create_heat_accounts() - Set up common required heat accounts
function create_heat_accounts {
- # migrated from files/keystone_data.sh
- local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
- local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
-
- local heat_user=$(get_or_create_user "heat" \
- "$SERVICE_PASSWORD" $service_tenant)
- get_or_add_user_role $admin_role $heat_user $service_tenant
+ create_service_user "heat" "admin"
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
diff --git a/lib/ironic b/lib/ironic
index 2075a9c..921bcf1 100644
--- a/lib/ironic
+++ b/lib/ironic
@@ -358,16 +358,11 @@
# service ironic admin # if enabled
function create_ironic_accounts {
- local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
- local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
-
# Ironic
if [[ "$ENABLED_SERVICES" =~ "ir-api" ]]; then
# Get ironic user if exists
- local ironic_user=$(get_or_create_user "ironic" \
- "$SERVICE_PASSWORD" $service_tenant)
- get_or_add_user_role $admin_role $ironic_user $service_tenant
+ create_service_user "ironic" "admin"
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
diff --git a/lib/keystone b/lib/keystone
index afa7f00..79806b8 100644
--- a/lib/keystone
+++ b/lib/keystone
@@ -309,8 +309,9 @@
setup_colorized_logging $KEYSTONE_CONF DEFAULT
fi
+ iniset $KEYSTONE_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
+
if [ "$KEYSTONE_USE_MOD_WSGI" == "True" ]; then
- iniset $KEYSTONE_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
# Eliminate the %(asctime)s.%(msecs)03d from the log format strings
iniset $KEYSTONE_CONF DEFAULT logging_context_format_string "%(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s"
iniset $KEYSTONE_CONF DEFAULT logging_default_format_string "%(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s"
@@ -362,8 +363,7 @@
# admin
local admin_tenant=$(get_or_create_project "admin")
- local admin_user=$(get_or_create_user "admin" \
- "$ADMIN_PASSWORD" "$admin_tenant")
+ local admin_user=$(get_or_create_user "admin" "$ADMIN_PASSWORD")
local admin_role=$(get_or_create_role "admin")
get_or_add_user_role $admin_role $admin_user $admin_tenant
@@ -392,7 +392,7 @@
# demo
local demo_tenant=$(get_or_create_project "demo")
local demo_user=$(get_or_create_user "demo" \
- "$ADMIN_PASSWORD" "$demo_tenant" "demo@example.com")
+ "$ADMIN_PASSWORD" "demo@example.com")
get_or_add_user_role $member_role $demo_user $demo_tenant
get_or_add_user_role $admin_role $admin_user $demo_tenant
@@ -415,6 +415,20 @@
fi
}
+# Create a user that is capable of verifying keystone tokens for use with auth_token middleware.
+#
+# create_service_user <name> [role]
+#
+# The role defaults to the service role. It is optional because, historically,
+# many projects have assigned the admin (or another) role here when this user
+# is used for more than just the auth_token middleware.
+function create_service_user {
+ local role=${2:-service}
+
+ local user=$(get_or_create_user "$1" "$SERVICE_PASSWORD")
+ get_or_add_user_role "$role" "$user" "$SERVICE_TENANT_NAME"
+}
+
# Configure the service to use the auth token middleware.
#
# configure_auth_token_middleware conf_file admin_user signing_dir [section]
@@ -533,12 +547,8 @@
tail_log key /var/log/$APACHE_NAME/keystone.log
tail_log key-access /var/log/$APACHE_NAME/keystone_access.log
else
- local EXTRA_PARAMS=""
- if [ "$ENABLE_DEBUG_LOG_LEVEL" == "True" ]; then
- EXTRA_PARAMS="--debug"
- fi
# Start Keystone in a screen window
- run_process key "$KEYSTONE_DIR/bin/keystone-all --config-file $KEYSTONE_CONF $EXTRA_PARAMS"
+ run_process key "$KEYSTONE_DIR/bin/keystone-all --config-file $KEYSTONE_CONF"
fi
echo "Waiting for keystone to start..."
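The new `create_service_user` helper replaces the repeated project/role lookups that each service's accounts function used to do by hand. A minimal standalone sketch of the pattern, with the keystone-backed helpers stubbed out so it runs without a cloud (the stub bodies and the `uuid-for-*`/`granted` output formats are hypothetical, not devstack's real behavior):

```shell
# Standalone sketch of the create_service_user pattern added to lib/keystone.
# The real helpers shell out to the openstack CLI; here they are stubbed so
# the control flow can be exercised in isolation.
SERVICE_PASSWORD="secret"
SERVICE_TENANT_NAME="service"

get_or_create_user() {
    # real version: openstack user create ...; prints the user's id
    echo "uuid-for-$1"
}

get_or_add_user_role() {
    # real version: openstack role add --user ... --project ...
    echo "granted role=$1 user=$2 project=$3"
}

create_service_user() {
    local role=${2:-service}
    local user
    user=$(get_or_create_user "$1" "$SERVICE_PASSWORD")
    get_or_add_user_role "$role" "$user" "$SERVICE_TENANT_NAME"
}

create_service_user "neutron"        # defaults to the service role
create_service_user "nova" "admin"   # explicit role, as lib/nova passes it
```

The `${2:-service}` default is what lets callers like lib/neutron drop the role argument entirely while lib/nova keeps requesting admin.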
diff --git a/lib/neutron b/lib/neutron
index 0fb8d00..15a5f00 100755
--- a/lib/neutron
+++ b/lib/neutron
@@ -10,24 +10,25 @@
# ``stack.sh`` calls the entry points in this order:
#
-# - install_neutron
-# - install_neutronclient
# - install_neutron_agent_packages
+# - install_neutronclient
+# - install_neutron
# - install_neutron_third_party
# - configure_neutron
# - init_neutron
# - configure_neutron_third_party
# - init_neutron_third_party
# - start_neutron_third_party
-# - create_neutron_cache_dir
# - create_nova_conf_neutron
# - start_neutron_service_and_check
+# - check_neutron_third_party_integration
# - start_neutron_agents
# - create_neutron_initial_network
# - setup_neutron_debug
#
# ``unstack.sh`` calls the entry points in this order:
#
+# - teardown_neutron_debug
# - stop_neutron
# - stop_neutron_third_party
# - cleanup_neutron
@@ -507,15 +508,9 @@
# Migrated from keystone_data.sh
function create_neutron_accounts {
-
- local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
- local service_role=$(openstack role list | awk "/ service / { print \$2 }")
-
if [[ "$ENABLED_SERVICES" =~ "q-svc" ]]; then
- local neutron_user=$(get_or_create_user "neutron" \
- "$SERVICE_PASSWORD" $service_tenant)
- get_or_add_user_role $service_role $neutron_user $service_tenant
+ create_service_user "neutron"
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
@@ -749,13 +744,21 @@
# stop_neutron() - Stop running processes (non-screen)
function stop_neutron {
if is_service_enabled q-dhcp; then
+ stop_process q-dhcp
pid=$(ps aux | awk '/[d]nsmasq.+interface=(tap|ns-)/ { print $2 }')
[ ! -z "$pid" ] && sudo kill -9 $pid
fi
+
+ stop_process q-svc
+ stop_process q-l3
+
if is_service_enabled q-meta; then
sudo pkill -9 -f neutron-ns-metadata-proxy || :
+ stop_process q-meta
fi
+ stop_process q-agt
+
if is_service_enabled q-lbaas; then
neutron_lbaas_stop
fi
diff --git a/lib/neutron_plugins/services/metering b/lib/neutron_plugins/services/metering
index 51123e2..37ba019 100644
--- a/lib/neutron_plugins/services/metering
+++ b/lib/neutron_plugins/services/metering
@@ -23,7 +23,7 @@
}
function neutron_metering_stop {
- :
+ stop_process q-metering
}
# Restore xtrace
diff --git a/lib/neutron_plugins/services/vpn b/lib/neutron_plugins/services/vpn
index 7e80b5b..5912eab 100644
--- a/lib/neutron_plugins/services/vpn
+++ b/lib/neutron_plugins/services/vpn
@@ -28,6 +28,7 @@
if [ -n "$pids" ]; then
sudo kill $pids
fi
+ stop_process q-vpn
}
# Restore xtrace
diff --git a/lib/nova b/lib/nova
index a4b1bb1..c760066 100644
--- a/lib/nova
+++ b/lib/nova
@@ -353,15 +353,10 @@
# SERVICE_TENANT_NAME nova ResellerAdmin (if Swift is enabled)
function create_nova_accounts {
- local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
- local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
-
# Nova
if [[ "$ENABLED_SERVICES" =~ "n-api" ]]; then
- local nova_user=$(get_or_create_user "nova" \
- "$SERVICE_PASSWORD" $service_tenant)
- get_or_add_user_role $admin_role $nova_user $service_tenant
+ create_service_user "nova" "admin"
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
diff --git a/lib/sahara b/lib/sahara
index 995935a..cb6ecc3 100644
--- a/lib/sahara
+++ b/lib/sahara
@@ -61,12 +61,7 @@
# service sahara admin
function create_sahara_accounts {
- local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
- local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
-
- local sahara_user=$(get_or_create_user "sahara" \
- "$SERVICE_PASSWORD" $service_tenant)
- get_or_add_user_role $admin_role $sahara_user $service_tenant
+ create_service_user "sahara" "admin"
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
diff --git a/lib/swift b/lib/swift
index e6e1212..d9f750c 100644
--- a/lib/swift
+++ b/lib/swift
@@ -601,13 +601,9 @@
KEYSTONE_CATALOG_BACKEND=${KEYSTONE_CATALOG_BACKEND:-sql}
- local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
- local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
local another_role=$(openstack role list | awk "/ anotherrole / { print \$2 }")
- local swift_user=$(get_or_create_user "swift" \
- "$SERVICE_PASSWORD" $service_tenant)
- get_or_add_user_role $admin_role $swift_user $service_tenant
+ create_service_user "swift" "admin"
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
@@ -622,33 +618,30 @@
local swift_tenant_test1=$(get_or_create_project swifttenanttest1)
die_if_not_set $LINENO swift_tenant_test1 "Failure creating swift_tenant_test1"
- SWIFT_USER_TEST1=$(get_or_create_user swiftusertest1 $swiftusertest1_password \
- "$swift_tenant_test1" "test@example.com")
+ SWIFT_USER_TEST1=$(get_or_create_user swiftusertest1 $swiftusertest1_password "test@example.com")
die_if_not_set $LINENO SWIFT_USER_TEST1 "Failure creating SWIFT_USER_TEST1"
- get_or_add_user_role $admin_role $SWIFT_USER_TEST1 $swift_tenant_test1
+ get_or_add_user_role admin $SWIFT_USER_TEST1 $swift_tenant_test1
- local swift_user_test3=$(get_or_create_user swiftusertest3 $swiftusertest3_password \
- "$swift_tenant_test1" "test3@example.com")
+ local swift_user_test3=$(get_or_create_user swiftusertest3 $swiftusertest3_password "test3@example.com")
die_if_not_set $LINENO swift_user_test3 "Failure creating swift_user_test3"
get_or_add_user_role $another_role $swift_user_test3 $swift_tenant_test1
local swift_tenant_test2=$(get_or_create_project swifttenanttest2)
die_if_not_set $LINENO swift_tenant_test2 "Failure creating swift_tenant_test2"
- local swift_user_test2=$(get_or_create_user swiftusertest2 $swiftusertest2_password \
- "$swift_tenant_test2" "test2@example.com")
+ local swift_user_test2=$(get_or_create_user swiftusertest2 $swiftusertest2_password "test2@example.com")
die_if_not_set $LINENO swift_user_test2 "Failure creating swift_user_test2"
- get_or_add_user_role $admin_role $swift_user_test2 $swift_tenant_test2
+ get_or_add_user_role admin $swift_user_test2 $swift_tenant_test2
local swift_domain=$(get_or_create_domain swift_test 'Used for swift functional testing')
die_if_not_set $LINENO swift_domain "Failure creating swift_test domain"
local swift_tenant_test4=$(get_or_create_project swifttenanttest4 $swift_domain)
die_if_not_set $LINENO swift_tenant_test4 "Failure creating swift_tenant_test4"
- local swift_user_test4=$(get_or_create_user swiftusertest4 $swiftusertest4_password \
- $swift_tenant_test4 "test4@example.com" $swift_domain)
+
+ local swift_user_test4=$(get_or_create_user swiftusertest4 $swiftusertest4_password "test4@example.com" $swift_domain)
die_if_not_set $LINENO swift_user_test4 "Failure creating swift_user_test4"
- get_or_add_user_role $admin_role $swift_user_test4 $swift_tenant_test4
+ get_or_add_user_role admin $swift_user_test4 $swift_tenant_test4
}
# init_swift() - Initialize rings
diff --git a/lib/tempest b/lib/tempest
index 1ae9457..86f30b4 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -502,7 +502,7 @@
# Tempest has some tests that validate various authorization checks
# between two regular users in separate tenants
get_or_create_project alt_demo
- get_or_create_user alt_demo "$ADMIN_PASSWORD" alt_demo "alt_demo@example.com"
+ get_or_create_user alt_demo "$ADMIN_PASSWORD" "alt_demo@example.com"
get_or_add_user_role Member alt_demo alt_demo
fi
}
diff --git a/lib/trove b/lib/trove
index 3249ce0..d32c776 100644
--- a/lib/trove
+++ b/lib/trove
@@ -79,14 +79,9 @@
# service trove admin # if enabled
function create_trove_accounts {
- local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
- local service_role=$(openstack role list | awk "/ admin / { print \$2 }")
-
if [[ "$ENABLED_SERVICES" =~ "trove" ]]; then
- local trove_user=$(get_or_create_user "trove" \
- "$SERVICE_PASSWORD" $service_tenant)
- get_or_add_user_role $service_role $trove_user $service_tenant
+ create_service_user "trove" "admin"
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
diff --git a/lib/zaqar b/lib/zaqar
index dfa3452..8b560bb 100644
--- a/lib/zaqar
+++ b/lib/zaqar
@@ -215,12 +215,7 @@
}
function create_zaqar_accounts {
- local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
- ADMIN_ROLE=$(openstack role list | awk "/ admin / { print \$2 }")
-
- local zaqar_user=$(get_or_create_user "zaqar" \
- "$SERVICE_PASSWORD" $service_tenant)
- get_or_add_user_role $ADMIN_ROLE $zaqar_user $service_tenant
+ create_service_user "zaqar" "admin"
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
diff --git a/pkg/elasticsearch.sh b/pkg/elasticsearch.sh
new file mode 100755
index 0000000..15e1b2b
--- /dev/null
+++ b/pkg/elasticsearch.sh
@@ -0,0 +1,126 @@
+#!/bin/bash -xe
+
+# basic reference point for things like filecache
+#
+# TODO(sdague): once we have a few of these I imagine the download
+# step can probably be factored out to something nicer
+TOP_DIR=$(cd $(dirname "$0")/.. && pwd)
+FILES=$TOP_DIR/files
+source $TOP_DIR/functions
+
+# Package source and version. All pkg files are expected to define
+# something like this, along with a way to override the defaults.
+ELASTICSEARCH_VERSION=${ELASTICSEARCH_VERSION:-1.4.2}
+ELASTICSEARCH_BASEURL=${ELASTICSEARCH_BASEURL:-https://download.elasticsearch.org/elasticsearch/elasticsearch}
+
+# Elasticsearch implementation
+function wget_elasticsearch {
+ local file=${1}
+
+ if [ ! -f ${FILES}/${file} ]; then
+ wget $ELASTICSEARCH_BASEURL/${file} -O ${FILES}/${file}
+ fi
+
+ if [ ! -f ${FILES}/${file}.sha1.txt ]; then
+ wget $ELASTICSEARCH_BASEURL/${file}.sha1.txt -O ${FILES}/${file}.sha1.txt
+ fi
+
+ pushd ${FILES}; sha1sum ${file} > ${file}.sha1.gen; popd
+
+ if ! diff ${FILES}/${file}.sha1.gen ${FILES}/${file}.sha1.txt; then
+ echo "Invalid elasticsearch download. Could not install."
+ return 1
+ fi
+ return 0
+}
+
+function download_elasticsearch {
+ if is_ubuntu; then
+ wget_elasticsearch elasticsearch-${ELASTICSEARCH_VERSION}.deb
+ elif is_fedora; then
+ wget_elasticsearch elasticsearch-${ELASTICSEARCH_VERSION}.noarch.rpm
+ fi
+}
+
+function configure_elasticsearch {
+ # currently a no-op
+ :
+}
+
+function start_elasticsearch {
+ if is_ubuntu; then
+ sudo /etc/init.d/elasticsearch start
+ elif is_fedora; then
+ sudo /bin/systemctl start elasticsearch.service
+ else
+ echo "Unsupported distro: cannot start elasticsearch."
+ fi
+}
+
+function stop_elasticsearch {
+ if is_ubuntu; then
+ sudo /etc/init.d/elasticsearch stop
+ elif is_fedora; then
+ sudo /bin/systemctl stop elasticsearch.service
+ else
+ echo "Unsupported distro: cannot stop elasticsearch."
+ fi
+}
+
+function install_elasticsearch {
+ if is_package_installed elasticsearch; then
+ echo "Note: elasticsearch was already installed."
+ return
+ fi
+ if is_ubuntu; then
+ is_package_installed openjdk-7-jre-headless || install_package openjdk-7-jre-headless
+
+ sudo dpkg -i ${FILES}/elasticsearch-${ELASTICSEARCH_VERSION}.deb
+ sudo update-rc.d elasticsearch defaults 95 10
+ elif is_fedora; then
+ is_package_installed java-1.7.0-openjdk-headless || install_package java-1.7.0-openjdk-headless
+ yum_install ${FILES}/elasticsearch-${ELASTICSEARCH_VERSION}.noarch.rpm
+ sudo /bin/systemctl daemon-reload
+ sudo /bin/systemctl enable elasticsearch.service
+ else
+ echo "Unsupported distro: cannot install elasticsearch."
+ fi
+}
+
+function uninstall_elasticsearch {
+ if is_package_installed elasticsearch; then
+ if is_ubuntu; then
+ sudo apt-get purge elasticsearch
+ elif is_fedora; then
+ sudo yum remove elasticsearch
+ else
+ echo "Unsupported distro: cannot uninstall elasticsearch."
+ fi
+ fi
+}
+
+# The PHASE dispatcher. All pkg files are expected to copy this case
+# statement largely verbatim.
+PHASE=$1
+echo "Phase is $PHASE"
+
+case $PHASE in
+ download)
+ download_elasticsearch
+ ;;
+ install)
+ install_elasticsearch
+ ;;
+ configure)
+ configure_elasticsearch
+ ;;
+ start)
+ start_elasticsearch
+ ;;
+ stop)
+ stop_elasticsearch
+ ;;
+ uninstall)
+ uninstall_elasticsearch
+ ;;
+esac
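The dispatcher case statement above has no default branch, so an unrecognized phase is silently a no-op. The general shape every pkg/ script is expected to repeat can be sketched as follows (the `phase_dispatch` name and `mypkg` package are placeholders, and this sketch rejects unknown phases explicitly, which the script above does not):

```shell
# Skeleton of the per-package PHASE dispatcher used by pkg/ scripts.
# "mypkg" stands in for a concrete package's hook functions.
phase_dispatch() {
    local phase=$1
    case $phase in
        download|install|configure|start|stop|uninstall)
            # real scripts call e.g. install_mypkg here
            echo "running ${phase}_mypkg"
            ;;
        *)
            echo "unknown phase: $phase" >&2
            return 1
            ;;
    esac
}

phase_dispatch "install"   # → running install_mypkg
```

Whether to fail loudly on an unknown phase is a design choice; the no-default version lets the driver invoke every pkg script for every phase without guarding.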
diff --git a/stackrc b/stackrc
index 99748ce..ff82140 100644
--- a/stackrc
+++ b/stackrc
@@ -32,11 +32,15 @@
# ``disable_service`` functions in ``local.conf``.
# For example, to enable Swift add this to ``local.conf``:
# enable_service s-proxy s-object s-container s-account
-# In order to enable nova-networking add the following settings in
-# `` local.conf ``:
+# In order to enable Neutron (a single node setup) add the following
+# settings in ``local.conf``:
# [[local|localrc]]
-# disable_service q-svc q-agt q-dhcp q-l3 q-meta
-# enable_service n-net
+# disable_service n-net
+# enable_service q-svc
+# enable_service q-agt
+# enable_service q-dhcp
+# enable_service q-l3
+# enable_service q-meta
# # Optional, to enable tempest configuration as part of devstack
# enable_service tempest
function isset {
@@ -50,16 +54,14 @@
# this allows us to pass ENABLED_SERVICES
if ! isset ENABLED_SERVICES ; then
- # core compute (glance / keystone / nova)
- ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-xvnc,n-cauth
+ # core compute (glance / keystone / nova (+ nova-network))
+ ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-net,n-cond,n-sch,n-xvnc,n-cauth
# cinder
ENABLED_SERVICES+=,c-sch,c-api,c-vol
# heat
ENABLED_SERVICES+=,h-eng,h-api,h-api-cfn,h-api-cw
# dashboard
ENABLED_SERVICES+=,horizon
- # neutron
- ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta
# additional services
ENABLED_SERVICES+=,rabbit,tempest,mysql
fi
diff --git a/tests/test_ip.sh b/tests/test_ip.sh
index e9cbcca..add8d1a 100755
--- a/tests/test_ip.sh
+++ b/tests/test_ip.sh
@@ -8,9 +8,6 @@
# Import common functions
source $TOP/functions
-# Import configuration
-source $TOP/openrc
-
echo "Testing IP addr functions"
diff --git a/tests/test_libs_from_pypi.sh b/tests/test_libs_from_pypi.sh
index 7e96bae..6e1b515 100755
--- a/tests/test_libs_from_pypi.sh
+++ b/tests/test_libs_from_pypi.sh
@@ -17,6 +17,8 @@
export TOP_DIR=$TOP
+# we don't actually care about the HOST_IP
+HOST_IP="don't care"
# Import common functions
source $TOP/functions
source $TOP/stackrc