Merge "Minor fixes for bashate trunk"
diff --git a/.gitignore b/.gitignore
index 8870bb3..a5a17c2 100644
--- a/.gitignore
+++ b/.gitignore
@@ -4,6 +4,7 @@
*.log.[1-9]
*.pem
.localrc.auto
+.localrc.password
.prereqs
.tox
.stackenv
@@ -11,6 +12,8 @@
doc/files
doc/build
files/*.gz
+files/*.rpm
+files/*.rpm.*
files/*.qcow2
files/*.img
files/images
diff --git a/doc/source/guides/lxc.rst b/doc/source/guides/lxc.rst
new file mode 100644
index 0000000..a719d60
--- /dev/null
+++ b/doc/source/guides/lxc.rst
@@ -0,0 +1,164 @@
+================================
+All-In-One Single LXC Container
+================================
+
+This guide walks you through the process of deploying OpenStack using devstack
+in an LXC container instead of a VM.
+
+The primary benefits of running devstack inside a container instead of a VM are
+faster performance and lower memory overhead while still providing a suitable
+level of isolation. This can be particularly useful when you want to simulate
+running OpenStack on multiple nodes.
+
+.. Warning:: Containers do not provide the same level of isolation as a virtual
+ machine.
+
+.. Note:: Not all OpenStack features support running inside a container. See
+ the `Limitations`_ section below for details. :doc:`OpenStack in a VM <single-vm>`
+ is recommended for beginners.
+
+Prerequisites
+==============
+
+This guide is written for Ubuntu 14.04 but should be adaptable to any modern
+Linux distribution.
+
+Install the LXC package::
+
+ sudo apt-get install lxc
+
+You can verify support for containerization features in your currently running
+kernel using the ``lxc-checkconfig`` command.
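+
+For example, run::
+
+   lxc-checkconfig
+
+Each required feature (namespaces, cgroups, and so on) should report
+``enabled``; the exact output varies by kernel.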
+
+Container Setup
+===============
+
+Configuration
+---------------
+
+For ``stack.sh`` to run successfully, and to permit the VMs you launch inside
+your container to use KVM, a few additional configuration options are needed.
+Place the following in a file called
+``devstack-lxc.conf``::
+
+ # Permit access to /dev/loop*
+ lxc.cgroup.devices.allow = b 7:* rwm
+
+ # Setup access to /dev/net/tun and /dev/kvm
+ lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file 0 0
+ lxc.mount.entry = /dev/kvm dev/kvm none bind,create=file 0 0
+
+ # Networking
+ lxc.network.type = veth
+ lxc.network.flags = up
+ lxc.network.link = lxcbr0
+
+
+Create Container
+-------------------
+
+The configuration and rootfs for LXC containers are created using the
+``lxc-create`` command.
+
+We will name our container ``devstack`` and use the ``ubuntu`` template, which
+uses ``debootstrap`` to build an Ubuntu rootfs. It defaults to the same
+release and architecture as the host system. We also install the additional
+packages ``bsdmainutils`` and ``git``, as we'll need them to run devstack::
+
+ sudo lxc-create -n devstack -t ubuntu -f devstack-lxc.conf -- --packages=bsdmainutils,git
+
+The first rootfs build will take a few minutes to download, unpack, and
+configure all the packages needed for a minimal installation of Ubuntu. LXC
+caches the result, so subsequent containers take only seconds to create.
+
+.. Note:: To speed up the initial rootfs creation, you can specify a mirror to
+ download the Ubuntu packages from by appending ``--mirror=`` and then the URL
+ of an Ubuntu mirror. To see the other template options, you can run
+ ``lxc-create -t ubuntu -h``.
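+
+ For example (the mirror URL here is illustrative; use one close to you)::
+
+   sudo lxc-create -n devstack -t ubuntu -f devstack-lxc.conf -- \
+       --packages=bsdmainutils,git --mirror=http://archive.ubuntu.com/ubuntu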
+
+Start Container
+----------------
+
+To start the container, run::
+
+ sudo lxc-start -n devstack
+
+A moment later you should be presented with the login prompt for your container.
+You can log in using the username ``ubuntu`` and password ``ubuntu``.
+
+You can also ssh into your container. On your host, run
+``sudo lxc-info -n devstack`` to get its IP address (e.g.
+``ssh ubuntu@$(sudo lxc-info -n devstack | awk '/IP/ { print $2 }')``).
+
+Run Devstack
+-------------
+
+You should now be logged into your container and almost ready to run devstack.
+The commands in this section should all be run inside your container.
+
+.. Tip:: You can greatly reduce the runtime of your initial devstack setup by
+ ensuring you have your apt sources.list configured to use a fast mirror.
+ Check and update ``/etc/apt/sources.list`` if necessary and then run
+ ``apt-get update``.
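+
+ For example (``mirror.example.com`` is a placeholder for a mirror near you)::
+
+   sudo sed -i 's|http://archive.ubuntu.com|http://mirror.example.com|g' /etc/apt/sources.list
+   sudo apt-get update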
+
+#. Download DevStack
+
+ ::
+
+ git clone https://git.openstack.org/openstack-dev/devstack
+
+#. Configure
+
+ Refer to :ref:`minimal-configuration` if you wish to configure the behaviour
+ of devstack.
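+
+ For example, a minimal ``local.conf`` placed in the devstack directory might
+ look like this (passwords are placeholders)::
+
+   [[local|localrc]]
+   ADMIN_PASSWORD=secret
+   DATABASE_PASSWORD=$ADMIN_PASSWORD
+   RABBIT_PASSWORD=$ADMIN_PASSWORD
+   SERVICE_PASSWORD=$ADMIN_PASSWORD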
+
+#. Start the install
+
+ ::
+
+ cd devstack
+ ./stack.sh
+
+Cleanup
+-------
+
+To stop the container::
+
+ sudo lxc-stop -n devstack
+
+To delete the container::
+
+ sudo lxc-destroy -n devstack
+
+Limitations
+============
+
+Some OpenStack features may not function correctly, or at all, when run from
+within a container.
+
+Cinder
+-------
+
+Unable to create LVM backed volume
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ In our configuration, we have not whitelisted access to device-mapper or LVM
+ devices. Whitelisting them permits your container to access and control LVM
+ on the host system. To enable this, add the following to your
+ ``devstack-lxc.conf`` before running ``lxc-create``::
+
+ lxc.cgroup.devices.allow = c 10:236 rwm
+ lxc.cgroup.devices.allow = b 252:* rwm
+
+ Additionally, you'll need to set ``udev_rules = 0`` in the ``activation``
+ section of ``/etc/lvm/lvm.conf`` unless you mount devtmpfs in your container.
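+
+ For example, the relevant stanza in ``/etc/lvm/lvm.conf`` would look roughly
+ like this (only the ``udev_rules`` setting matters here)::
+
+   activation {
+       udev_rules = 0
+   }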
+
+Unable to attach volume to instance
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ It is not possible to attach cinder volumes to nova instances because parts of
+ the Linux iSCSI implementation are not network namespace aware. This can be
+ worked around by using network pass-through instead of a separate network
+ namespace, but such a setup significantly reduces the isolation of the
+ container (e.g. a ``halt`` command issued in the container will shut down the
+ host system).
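+
+ One possible sketch of such a pass-through setup, using LXC's ``none``
+ network type (this replaces the ``veth`` settings shown earlier and shares
+ the host's network namespace, so use it with care)::
+
+   # share the host's network namespace instead of using a veth pair
+   lxc.network.type = none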
diff --git a/doc/source/index.rst b/doc/source/index.rst
index 4a1d93d..3e324ad 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -76,6 +76,7 @@
guides/single-vm
guides/single-machine
+ guides/lxc
guides/multinode-lab
guides/neutron
guides/devstack-with-nested-kvm
@@ -96,6 +97,13 @@
server-class machine or a laptop at home.
:doc:`[Read] <guides/single-machine>`
+All-In-One LXC Container
+-------------------------
+
+Run :doc:`OpenStack in an LXC container <guides/lxc>`. Beneficial for intermediate
+and advanced users. The VMs launched in this cloud will be fully accelerated, but
+not all OpenStack features are supported. :doc:`[Read] <guides/lxc>`
+
Multi-Node Lab
--------------
diff --git a/exercises/boot_from_volume.sh b/exercises/boot_from_volume.sh
index d520b9b..5409859 100755
--- a/exercises/boot_from_volume.sh
+++ b/exercises/boot_from_volume.sh
@@ -64,7 +64,7 @@
# Launching a server
# ==================
-# List servers for tenant:
+# List servers for project:
nova list
# Images
diff --git a/exercises/client-args.sh b/exercises/client-args.sh
index 7cfef1c..07ce528 100755
--- a/exercises/client-args.sh
+++ b/exercises/client-args.sh
@@ -43,19 +43,19 @@
unset NOVA_USERNAME
# Save the known variables for later
-export x_TENANT_NAME=$OS_TENANT_NAME
+export x_PROJECT_NAME=$OS_PROJECT_NAME
export x_USERNAME=$OS_USERNAME
export x_PASSWORD=$OS_PASSWORD
export x_AUTH_URL=$OS_AUTH_URL
# Unset the usual variables to force argument processing
-unset OS_TENANT_NAME
+unset OS_PROJECT_NAME
unset OS_USERNAME
unset OS_PASSWORD
unset OS_AUTH_URL
# Common authentication args
-TENANT_ARG="--os-tenant-name=$x_TENANT_NAME"
+PROJECT_ARG="--os-project-name=$x_PROJECT_NAME"
ARGS="--os-username=$x_USERNAME --os-password=$x_PASSWORD --os-auth-url=$x_AUTH_URL"
# Set global return
@@ -68,7 +68,7 @@
STATUS_KEYSTONE="Skipped"
else
echo -e "\nTest Keystone"
- if openstack $TENANT_ARG $ARGS catalog show identity; then
+ if openstack $PROJECT_ARG $ARGS catalog show identity; then
STATUS_KEYSTONE="Succeeded"
else
STATUS_KEYSTONE="Failed"
@@ -87,7 +87,7 @@
else
# Test OSAPI
echo -e "\nTest Nova"
- if nova $TENANT_ARG $ARGS flavor-list; then
+ if nova $PROJECT_ARG $ARGS flavor-list; then
STATUS_NOVA="Succeeded"
else
STATUS_NOVA="Failed"
@@ -104,7 +104,7 @@
STATUS_CINDER="Skipped"
else
echo -e "\nTest Cinder"
- if cinder $TENANT_ARG $ARGS list; then
+ if cinder $PROJECT_ARG $ARGS list; then
STATUS_CINDER="Succeeded"
else
STATUS_CINDER="Failed"
@@ -121,7 +121,7 @@
STATUS_GLANCE="Skipped"
else
echo -e "\nTest Glance"
- if openstack $TENANT_ARG $ARGS image list; then
+ if openstack $PROJECT_ARG $ARGS image list; then
STATUS_GLANCE="Succeeded"
else
STATUS_GLANCE="Failed"
@@ -138,7 +138,7 @@
STATUS_SWIFT="Skipped"
else
echo -e "\nTest Swift"
- if swift $TENANT_ARG $ARGS stat; then
+ if swift $PROJECT_ARG $ARGS stat; then
STATUS_SWIFT="Succeeded"
else
STATUS_SWIFT="Failed"
diff --git a/exercises/neutron-adv-test.sh b/exercises/neutron-adv-test.sh
index 9bcb766..a3128a8 100755
--- a/exercises/neutron-adv-test.sh
+++ b/exercises/neutron-adv-test.sh
@@ -48,9 +48,9 @@
# Neutron Settings
# ----------------
-TENANTS="DEMO1"
+PROJECTS="DEMO1"
# TODO (nati)_Test public network
-#TENANTS="DEMO1,DEMO2"
+#PROJECTS="DEMO1,DEMO2"
PUBLIC_NAME="admin"
DEMO1_NAME="demo1"
@@ -91,34 +91,34 @@
# Various functions
# -----------------
-function foreach_tenant {
+function foreach_project {
COMMAND=$1
- for TENANT in ${TENANTS//,/ };do
- eval ${COMMAND//%TENANT%/$TENANT}
+ for PROJECT in ${PROJECTS//,/ };do
+ eval ${COMMAND//%PROJECT%/$PROJECT}
done
}
-function foreach_tenant_resource {
+function foreach_project_resource {
COMMAND=$1
RESOURCE=$2
- for TENANT in ${TENANTS//,/ };do
- eval 'NUM=$'"${TENANT}_NUM_$RESOURCE"
+ for PROJECT in ${PROJECTS//,/ };do
+ eval 'NUM=$'"${PROJECT}_NUM_$RESOURCE"
for i in `seq $NUM`;do
- local COMMAND_LOCAL=${COMMAND//%TENANT%/$TENANT}
+ local COMMAND_LOCAL=${COMMAND//%PROJECT%/$PROJECT}
COMMAND_LOCAL=${COMMAND_LOCAL//%NUM%/$i}
eval $COMMAND_LOCAL
done
done
}
-function foreach_tenant_vm {
+function foreach_project_vm {
COMMAND=$1
- foreach_tenant_resource "$COMMAND" 'VM'
+ foreach_project_resource "$COMMAND" 'VM'
}
-function foreach_tenant_net {
+function foreach_project_net {
COMMAND=$1
- foreach_tenant_resource "$COMMAND" 'NET'
+ foreach_project_resource "$COMMAND" 'NET'
}
function get_image_id {
@@ -128,12 +128,12 @@
echo "$IMAGE_ID"
}
-function get_tenant_id {
- local TENANT_NAME=$1
- local TENANT_ID
- TENANT_ID=`openstack project list | grep " $TENANT_NAME " | head -n 1 | get_field 1`
- die_if_not_set $LINENO TENANT_ID "Failure retrieving TENANT_ID for $TENANT_NAME"
- echo "$TENANT_ID"
+function get_project_id {
+ local PROJECT_NAME=$1
+ local PROJECT_ID
+ PROJECT_ID=`openstack project list | grep " $PROJECT_NAME " | head -n 1 | get_field 1`
+ die_if_not_set $LINENO PROJECT_ID "Failure retrieving PROJECT_ID for $PROJECT_NAME"
+ echo "$PROJECT_ID"
}
function get_user_id {
@@ -177,23 +177,23 @@
function neutron_debug_admin {
local os_username=$OS_USERNAME
- local os_tenant_id=$OS_TENANT_ID
+ local os_project_id=$OS_PROJECT_ID
source $TOP_DIR/openrc admin admin
neutron-debug $@
- source $TOP_DIR/openrc $os_username $os_tenant_id
+ source $TOP_DIR/openrc $os_username $os_project_id
}
-function add_tenant {
+function add_project {
openstack project create $1
openstack user create $2 --password ${ADMIN_PASSWORD} --project $1
openstack role add Member --project $1 --user $2
}
-function remove_tenant {
- local TENANT=$1
- local TENANT_ID
- TENANT_ID=$(get_tenant_id $TENANT)
- openstack project delete $TENANT_ID
+function remove_project {
+ local PROJECT=$1
+ local PROJECT_ID
+ PROJECT_ID=$(get_project_id $PROJECT)
+ openstack project delete $PROJECT_ID
}
function remove_user {
@@ -203,47 +203,47 @@
openstack user delete $USER_ID
}
-function create_tenants {
+function create_projects {
source $TOP_DIR/openrc admin admin
- add_tenant demo1 demo1 demo1
- add_tenant demo2 demo2 demo2
+ add_project demo1 demo1 demo1
+ add_project demo2 demo2 demo2
source $TOP_DIR/openrc demo demo
}
-function delete_tenants_and_users {
+function delete_projects_and_users {
source $TOP_DIR/openrc admin admin
remove_user demo1
- remove_tenant demo1
+ remove_project demo1
remove_user demo2
- remove_tenant demo2
- echo "removed all tenants"
+ remove_project demo2
+ echo "removed all projects"
source $TOP_DIR/openrc demo demo
}
function create_network {
- local TENANT=$1
+ local PROJECT=$1
local GATEWAY=$2
local CIDR=$3
local NUM=$4
local EXTRA=$5
- local NET_NAME="${TENANT}-net$NUM"
- local ROUTER_NAME="${TENANT}-router${NUM}"
+ local NET_NAME="${PROJECT}-net$NUM"
+ local ROUTER_NAME="${PROJECT}-router${NUM}"
source $TOP_DIR/openrc admin admin
- local TENANT_ID
- TENANT_ID=$(get_tenant_id $TENANT)
- source $TOP_DIR/openrc $TENANT $TENANT
+ local PROJECT_ID
+ PROJECT_ID=$(get_project_id $PROJECT)
+ source $TOP_DIR/openrc $PROJECT $PROJECT
local NET_ID
- NET_ID=$(neutron net-create --tenant-id $TENANT_ID $NET_NAME $EXTRA| grep ' id ' | awk '{print $4}' )
- die_if_not_set $LINENO NET_ID "Failure creating NET_ID for $TENANT_ID $NET_NAME $EXTRA"
- neutron subnet-create --ip-version 4 --tenant-id $TENANT_ID --gateway $GATEWAY --subnetpool None $NET_ID $CIDR
+ NET_ID=$(neutron net-create --project-id $PROJECT_ID $NET_NAME $EXTRA| grep ' id ' | awk '{print $4}' )
+ die_if_not_set $LINENO NET_ID "Failure creating NET_ID for $PROJECT_ID $NET_NAME $EXTRA"
+ neutron subnet-create --ip-version 4 --project-id $PROJECT_ID --gateway $GATEWAY --subnetpool None $NET_ID $CIDR
neutron_debug_admin probe-create --device-owner compute $NET_ID
source $TOP_DIR/openrc demo demo
}
function create_networks {
- foreach_tenant_net 'create_network ${%TENANT%_NAME} ${%TENANT%_NET%NUM%_GATEWAY} ${%TENANT%_NET%NUM%_CIDR} %NUM% ${%TENANT%_NET%NUM%_EXTRA}'
+ foreach_project_net 'create_network ${%PROJECT%_NAME} ${%PROJECT%_NET%NUM%_GATEWAY} ${%PROJECT%_NET%NUM%_CIDR} %NUM% ${%PROJECT%_NET%NUM%_EXTRA}'
#TODO(nati) test security group function
- # allow ICMP for both tenant's security groups
+ # allow ICMP for both projects' security groups
#source $TOP_DIR/openrc demo1 demo1
#$NOVA secgroup-add-rule default icmp -1 -1 0.0.0.0/0
#source $TOP_DIR/openrc demo2 demo2
@@ -251,10 +251,10 @@
}
function create_vm {
- local TENANT=$1
+ local PROJECT=$1
local NUM=$2
local NET_NAMES=$3
- source $TOP_DIR/openrc $TENANT $TENANT
+ source $TOP_DIR/openrc $PROJECT $PROJECT
local NIC=""
for NET_NAME in ${NET_NAMES//,/ };do
NIC="$NIC --nic net-id="`get_network_id $NET_NAME`
@@ -265,13 +265,13 @@
VM_UUID=`nova boot --flavor $(get_flavor_id m1.tiny) \
--image $(get_image_id) \
$NIC \
- $TENANT-server$NUM | grep ' id ' | cut -d"|" -f3 | sed 's/ //g'`
- die_if_not_set $LINENO VM_UUID "Failure launching $TENANT-server$NUM"
+ $PROJECT-server$NUM | grep ' id ' | cut -d"|" -f3 | sed 's/ //g'`
+ die_if_not_set $LINENO VM_UUID "Failure launching $PROJECT-server$NUM"
confirm_server_active $VM_UUID
}
function create_vms {
- foreach_tenant_vm 'create_vm ${%TENANT%_NAME} %NUM% ${%TENANT%_VM%NUM%_NET}'
+ foreach_project_vm 'create_vm ${%PROJECT%_NAME} %NUM% ${%PROJECT%_VM%NUM%_NET}'
}
function ping_ip {
@@ -284,11 +284,11 @@
}
function check_vm {
- local TENANT=$1
+ local PROJECT=$1
local NUM=$2
- local VM_NAME="$TENANT-server$NUM"
+ local VM_NAME="$PROJECT-server$NUM"
local NET_NAME=$3
- source $TOP_DIR/openrc $TENANT $TENANT
+ source $TOP_DIR/openrc $PROJECT $PROJECT
ping_ip $VM_NAME $NET_NAME
# TODO (nati) test ssh connection
# TODO (nati) test inter connection between vm
@@ -297,31 +297,31 @@
}
function check_vms {
- foreach_tenant_vm 'check_vm ${%TENANT%_NAME} %NUM% ${%TENANT%_VM%NUM%_NET}'
+ foreach_project_vm 'check_vm ${%PROJECT%_NAME} %NUM% ${%PROJECT%_VM%NUM%_NET}'
}
function shutdown_vm {
- local TENANT=$1
+ local PROJECT=$1
local NUM=$2
- source $TOP_DIR/openrc $TENANT $TENANT
- VM_NAME=${TENANT}-server$NUM
+ source $TOP_DIR/openrc $PROJECT $PROJECT
+ VM_NAME=${PROJECT}-server$NUM
nova delete $VM_NAME
}
function shutdown_vms {
- foreach_tenant_vm 'shutdown_vm ${%TENANT%_NAME} %NUM%'
+ foreach_project_vm 'shutdown_vm ${%PROJECT%_NAME} %NUM%'
if ! timeout $TERMINATE_TIMEOUT sh -c "while nova list | grep -q ACTIVE; do sleep 1; done"; then
die $LINENO "Some VMs failed to shutdown"
fi
}
function delete_network {
- local TENANT=$1
+ local PROJECT=$1
local NUM=$2
- local NET_NAME="${TENANT}-net$NUM"
+ local NET_NAME="${PROJECT}-net$NUM"
source $TOP_DIR/openrc admin admin
- local TENANT_ID
- TENANT_ID=$(get_tenant_id $TENANT)
+ local PROJECT_ID
+ PROJECT_ID=$(get_project_id $PROJECT)
#TODO(nati) comment out until l3-agent merged
#for res in port subnet net router;do
for net_id in `neutron net-list -c id -c name | grep $NET_NAME | awk '{print $2}'`;do
@@ -333,7 +333,7 @@
}
function delete_networks {
- foreach_tenant_net 'delete_network ${%TENANT%_NAME} %NUM%'
+ foreach_project_net 'delete_network ${%PROJECT%_NAME} %NUM%'
# TODO(nati) add secuirty group check after it is implemented
# source $TOP_DIR/openrc demo1 demo1
# nova secgroup-delete-rule default icmp -1 -1 0.0.0.0/0
@@ -342,7 +342,7 @@
}
function create_all {
- create_tenants
+ create_projects
create_networks
create_vms
}
@@ -350,7 +350,7 @@
function delete_all {
shutdown_vms
delete_networks
- delete_tenants_and_users
+ delete_projects_and_users
}
function all {
@@ -366,8 +366,8 @@
IMAGE=$(get_image_id)
echo $IMAGE
- TENANT_ID=$(get_tenant_id demo)
- echo $TENANT_ID
+ PROJECT_ID=$(get_project_id demo)
+ echo $PROJECT_ID
FLAVOR_ID=$(get_flavor_id m1.tiny)
echo $FLAVOR_ID
@@ -382,11 +382,11 @@
function usage {
echo "$0: [-h]"
echo " -h, --help Display help message"
- echo " -t, --tenant Create tenants"
+ echo " -t, --project Create projects"
echo " -n, --net Create networks"
echo " -v, --vm Create vms"
echo " -c, --check Check connection"
- echo " -x, --delete-tenants Delete tenants"
+ echo " -x, --delete-projects Delete projects"
echo " -y, --delete-nets Delete networks"
echo " -z, --delete-vms Delete vms"
echo " -T, --test Test functions"
@@ -412,7 +412,7 @@
-v | --vm ) create_vms
exit
;;
- -t | --tenant ) create_tenants
+ -t | --project ) create_projects
exit
;;
-c | --check ) check_vms
@@ -421,7 +421,7 @@
-T | --test ) test_functions
exit
;;
- -x | --delete-tenants ) delete_tenants_and_users
+ -x | --delete-projects ) delete_projects_and_users
exit
;;
-y | --delete-nets ) delete_networks
diff --git a/files/rpms/general b/files/rpms/general
index e0ef54c..a0906e2 100644
--- a/files/rpms/general
+++ b/files/rpms/general
@@ -26,6 +26,7 @@
psmisc
pyOpenSSL # version in pip uses too much memory
python-devel
+redhat-rpm-config # missing dep for gcc hardening flags, see rhbz#1217376
screen
tar
tcpdump
diff --git a/functions b/functions
index 42c5c1d..5730b6c 100644
--- a/functions
+++ b/functions
@@ -304,8 +304,8 @@
*) echo "Do not know what to do with $image_fname"; false;;
esac
- if is_arch "ppc64"; then
- img_property="--property hw_cdrom_bus=scsi"
+ if is_arch "ppc64le" || is_arch "ppc64" || is_arch "ppc"; then
+ img_property="--property hw_disk_bus=scsi --property hw_scsi_model=virtio-scsi --property hw_cdrom_bus=scsi --property os_command_line=console=hvc0"
fi
if is_arch "aarch64"; then
diff --git a/functions-common b/functions-common
index f84301f..a26cc50 100644
--- a/functions-common
+++ b/functions-common
@@ -365,8 +365,9 @@
function GetDistro {
GetOSVersion
- if [[ "$os_VENDOR" =~ (Ubuntu) || "$os_VENDOR" =~ (Debian) ]]; then
- # 'Everyone' refers to Ubuntu / Debian releases by
+ if [[ "$os_VENDOR" =~ (Ubuntu) || "$os_VENDOR" =~ (Debian) || \
+ "$os_VENDOR" =~ (LinuxMint) ]]; then
+ # 'Everyone' refers to Ubuntu / Debian / Mint releases by
# the code name adjective
DISTRO=$os_CODENAME
elif [[ "$os_VENDOR" =~ (Fedora) ]]; then
@@ -993,7 +994,7 @@
# out of tree, as it is used by nova and neutron.
# figure out a way to refactor nova/neutron code to eliminate this
function is_ironic_hardware {
- is_service_enabled ironic && [[ -n "${IRONIC_DEPLOY_DRIVER##*_ssh}" ]] && return 0
+ is_service_enabled ironic && [[ "$IRONIC_IS_HARDWARE" == "True" ]] && return 0
return 1
}
@@ -1095,8 +1096,9 @@
continue
fi
- # Assume we want this package
- package=${line%#*}
+ # Assume we want this package; free-form
+ # comments allowed after a #
+ package=${line%%#*}
inst_pkg=1
# Look for # dist:xxx in comment
@@ -1428,14 +1430,17 @@
local service=$1
local command="$2"
local group=$3
+ local subservice=$4
+
+ local name=${subservice:-$service}
time_start "run_process"
if is_service_enabled $service; then
if [[ "$USE_SCREEN" = "True" ]]; then
- screen_process "$service" "$command" "$group"
+ screen_process "$name" "$command" "$group"
else
# Spawn directly without screen
- _run_process "$service" "$command" "$group" &
+ _run_process "$name" "$command" "$group" &
fi
fi
time_stop "run_process"
diff --git a/lib/cinder b/lib/cinder
index e1e1f2a..6401f2d 100644
--- a/lib/cinder
+++ b/lib/cinder
@@ -351,7 +351,7 @@
# Set os_privileged_user credentials (used for os-assisted-snapshots)
iniset $CINDER_CONF DEFAULT os_privileged_user_name nova
iniset $CINDER_CONF DEFAULT os_privileged_user_password "$SERVICE_PASSWORD"
- iniset $CINDER_CONF DEFAULT os_privileged_user_tenant "$SERVICE_TENANT_NAME"
+ iniset $CINDER_CONF DEFAULT os_privileged_user_tenant "$SERVICE_PROJECT_NAME"
iniset $CINDER_CONF DEFAULT graceful_shutdown_timeout "$SERVICE_GRACEFUL_SHUTDOWN_TIMEOUT"
}
diff --git a/lib/databases/postgresql b/lib/databases/postgresql
index 204c257..852bac4 100644
--- a/lib/databases/postgresql
+++ b/lib/databases/postgresql
@@ -102,7 +102,7 @@
elif is_fedora || is_suse; then
install_package postgresql-server
if is_fedora; then
- sudo systemctl enable postgresql-server
+ sudo systemctl enable postgresql
fi
else
exit_distro_not_supported "postgresql installation"
diff --git a/lib/glance b/lib/glance
index c248611..4df2310 100644
--- a/lib/glance
+++ b/lib/glance
@@ -143,7 +143,7 @@
iniset $GLANCE_API_CONF glance_store stores "file, http, swift"
iniset $GLANCE_API_CONF DEFAULT graceful_shutdown_timeout "$SERVICE_GRACEFUL_SHUTDOWN_TIMEOUT"
- iniset $GLANCE_SWIFT_STORE_CONF ref1 user $SERVICE_TENANT_NAME:glance-swift
+ iniset $GLANCE_SWIFT_STORE_CONF ref1 user $SERVICE_PROJECT_NAME:glance-swift
iniset $GLANCE_SWIFT_STORE_CONF ref1 key $SERVICE_PASSWORD
iniset $GLANCE_SWIFT_STORE_CONF ref1 auth_address $KEYSTONE_SERVICE_URI/v3
iniset $GLANCE_SWIFT_STORE_CONF ref1 user_domain_id default
@@ -198,7 +198,7 @@
iniuncomment $GLANCE_CACHE_CONF DEFAULT auth_url
iniset $GLANCE_CACHE_CONF DEFAULT auth_url $KEYSTONE_AUTH_URI/v2.0
iniuncomment $GLANCE_CACHE_CONF DEFAULT auth_tenant_name
- iniset $GLANCE_CACHE_CONF DEFAULT admin_tenant_name $SERVICE_TENANT_NAME
+ iniset $GLANCE_CACHE_CONF DEFAULT admin_tenant_name $SERVICE_PROJECT_NAME
iniuncomment $GLANCE_CACHE_CONF DEFAULT auth_user
iniset $GLANCE_CACHE_CONF DEFAULT admin_user glance
iniuncomment $GLANCE_CACHE_CONF DEFAULT auth_password
@@ -226,9 +226,9 @@
# Project User Roles
# ---------------------------------------------------------------------
-# SERVICE_TENANT_NAME glance service
-# SERVICE_TENANT_NAME glance-swift ResellerAdmin (if Swift is enabled)
-# SERVICE_TENANT_NAME glance-search search (if Search is enabled)
+# SERVICE_PROJECT_NAME glance service
+# SERVICE_PROJECT_NAME glance-swift ResellerAdmin (if Swift is enabled)
+# SERVICE_PROJECT_NAME glance-search search (if Search is enabled)
function create_glance_accounts {
if is_service_enabled g-api; then
@@ -241,7 +241,7 @@
local glance_swift_user
glance_swift_user=$(get_or_create_user "glance-swift" \
"$SERVICE_PASSWORD" "default" "glance-swift@example.com")
- get_or_add_user_project_role "ResellerAdmin" $glance_swift_user $SERVICE_TENANT_NAME
+ get_or_add_user_project_role "ResellerAdmin" $glance_swift_user $SERVICE_PROJECT_NAME
fi
get_or_create_service "glance" "image" "Glance Image Service"
diff --git a/lib/heat b/lib/heat
index 1bb753d..4131878 100644
--- a/lib/heat
+++ b/lib/heat
@@ -196,6 +196,9 @@
iniset $HEAT_CONF DEFAULT enable_stack_abandon true
fi
+ iniset $HEAT_CONF cache enabled "True"
+ iniset $HEAT_CONF cache backend "dogpile.cache.memory"
+
sudo install -d -o $STACK_USER $HEAT_ENV_DIR $HEAT_TEMPLATES_DIR
# copy the default environment
diff --git a/lib/keystone b/lib/keystone
index 3c67693..46d691c 100644
--- a/lib/keystone
+++ b/lib/keystone
@@ -56,8 +56,23 @@
KEYSTONE_CATALOG_BACKEND="sql"
# Toggle for deploying Keystone under HTTPD + mod_wsgi
+# Deprecated in Mitaka, use KEYSTONE_DEPLOY instead.
KEYSTONE_USE_MOD_WSGI=${KEYSTONE_USE_MOD_WSGI:-${ENABLE_HTTPD_MOD_WSGI_SERVICES}}
+# KEYSTONE_DEPLOY defines how keystone is deployed, allowed values:
+# - mod_wsgi : Run keystone under Apache HTTPd mod_wsgi
+# - eventlet : Run keystone-all
+# - uwsgi : Run keystone under uwsgi
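+# For example, setting KEYSTONE_DEPLOY=uwsgi (e.g. in your local.conf) selects
+# the uwsgi deployment.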
+if [ -z "$KEYSTONE_DEPLOY" ]; then
+ if [ -z "$KEYSTONE_USE_MOD_WSGI" ]; then
+ KEYSTONE_DEPLOY=mod_wsgi
+ elif [ "$KEYSTONE_USE_MOD_WSGI" == True ]; then
+ KEYSTONE_DEPLOY=mod_wsgi
+ else
+ KEYSTONE_DEPLOY=eventlet
+ fi
+fi
+
# Select the token persistence backend driver
KEYSTONE_TOKEN_BACKEND=${KEYSTONE_TOKEN_BACKEND:-sql}
@@ -94,6 +109,7 @@
KEYSTONE_ADMIN_BIND_HOST=${KEYSTONE_ADMIN_BIND_HOST:-$KEYSTONE_SERVICE_HOST}
# Set the tenant for service accounts in Keystone
SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-service}
+SERVICE_PROJECT_NAME=${SERVICE_TENANT_NAME:-service}
# if we are running with SSL use https protocols
if is_ssl_enabled_service "key" || is_service_enabled tls-proxy; then
@@ -230,16 +246,15 @@
# Register SSL certificates if provided
if is_ssl_enabled_service key; then
ensure_certificates KEYSTONE
-
- iniset $KEYSTONE_CONF eventlet_server_ssl enable True
- iniset $KEYSTONE_CONF eventlet_server_ssl certfile $KEYSTONE_SSL_CERT
- iniset $KEYSTONE_CONF eventlet_server_ssl keyfile $KEYSTONE_SSL_KEY
fi
+ local service_port=$KEYSTONE_SERVICE_PORT
+ local auth_port=$KEYSTONE_AUTH_PORT
+
if is_service_enabled tls-proxy; then
# Set the service ports for a proxy to take the originals
- iniset $KEYSTONE_CONF eventlet_server public_port $KEYSTONE_SERVICE_PORT_INT
- iniset $KEYSTONE_CONF eventlet_server admin_port $KEYSTONE_AUTH_PORT_INT
+ service_port=$KEYSTONE_SERVICE_PORT_INT
+ auth_port=$KEYSTONE_AUTH_PORT_INT
iniset $KEYSTONE_CONF DEFAULT public_endpoint $KEYSTONE_SERVICE_URI
iniset $KEYSTONE_CONF DEFAULT admin_endpoint $KEYSTONE_AUTH_URI
@@ -259,19 +274,68 @@
fi
# Format logging
- if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ] && [ "$KEYSTONE_USE_MOD_WSGI" == "False" ] ; then
+ if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ] && [ "$KEYSTONE_DEPLOY" != "mod_wsgi" ] ; then
setup_colorized_logging $KEYSTONE_CONF DEFAULT
fi
iniset $KEYSTONE_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
- if [ "$KEYSTONE_USE_MOD_WSGI" == "True" ]; then
+ if [ "$KEYSTONE_DEPLOY" == "mod_wsgi" ]; then
iniset $KEYSTONE_CONF DEFAULT logging_context_format_string "%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s"
iniset $KEYSTONE_CONF DEFAULT logging_default_format_string "%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s"
iniset $KEYSTONE_CONF DEFAULT logging_debug_format_suffix "%(asctime)s.%(msecs)03d %(funcName)s %(pathname)s:%(lineno)d"
iniset $KEYSTONE_CONF DEFAULT logging_exception_prefix "%(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s"
_config_keystone_apache_wsgi
- else
+ elif [ "$KEYSTONE_DEPLOY" == "uwsgi" ]; then
+ # iniset creates these files when it's called if they don't exist.
+ KEYSTONE_PUBLIC_UWSGI_FILE=$KEYSTONE_CONF_DIR/keystone-uwsgi-public.ini
+ KEYSTONE_ADMIN_UWSGI_FILE=$KEYSTONE_CONF_DIR/keystone-uwsgi-admin.ini
+
+ rm -f "$KEYSTONE_PUBLIC_UWSGI_FILE"
+ rm -f "$KEYSTONE_ADMIN_UWSGI_FILE"
+
+ if is_ssl_enabled_service key; then
+ iniset "$KEYSTONE_PUBLIC_UWSGI_FILE" uwsgi https $KEYSTONE_SERVICE_HOST:$service_port,$KEYSTONE_SSL_CERT,$KEYSTONE_SSL_KEY
+ iniset "$KEYSTONE_ADMIN_UWSGI_FILE" uwsgi https $KEYSTONE_ADMIN_BIND_HOST:$auth_port,$KEYSTONE_SSL_CERT,$KEYSTONE_SSL_KEY
+ else
+ iniset "$KEYSTONE_PUBLIC_UWSGI_FILE" uwsgi http $KEYSTONE_SERVICE_HOST:$service_port
+ iniset "$KEYSTONE_ADMIN_UWSGI_FILE" uwsgi http $KEYSTONE_ADMIN_BIND_HOST:$auth_port
+ fi
+
+ iniset "$KEYSTONE_PUBLIC_UWSGI_FILE" uwsgi wsgi-file "$KEYSTONE_BIN_DIR/keystone-wsgi-public"
+ # This is running standalone
+ iniset "$KEYSTONE_PUBLIC_UWSGI_FILE" uwsgi threads $(nproc)
+ iniset "$KEYSTONE_PUBLIC_UWSGI_FILE" uwsgi enable-threads true
+ iniset "$KEYSTONE_PUBLIC_UWSGI_FILE" uwsgi plugins python
+ # uwsgi recommends this to prevent thundering herd on accept.
+ iniset "$KEYSTONE_PUBLIC_UWSGI_FILE" uwsgi thunder-lock true
+ # Override the default size for headers from the 4k default.
+ iniset "$KEYSTONE_PUBLIC_UWSGI_FILE" uwsgi buffer-size 65535
+ # Make sure the client doesn't try to re-use the connection.
+ iniset "$KEYSTONE_PUBLIC_UWSGI_FILE" uwsgi add-header "Connection: close"
+
+ iniset "$KEYSTONE_ADMIN_UWSGI_FILE" uwsgi wsgi-file "$KEYSTONE_BIN_DIR/keystone-wsgi-admin"
+ # This is running standalone
+ iniset "$KEYSTONE_ADMIN_UWSGI_FILE" uwsgi threads $API_WORKERS
+ iniset "$KEYSTONE_ADMIN_UWSGI_FILE" uwsgi enable-threads true
+ iniset "$KEYSTONE_ADMIN_UWSGI_FILE" uwsgi plugins python
+ # uwsgi recommends this to prevent thundering herd on accept.
+ iniset "$KEYSTONE_ADMIN_UWSGI_FILE" uwsgi thunder-lock true
+ # Override the default size for headers from the 4k default.
+ iniset "$KEYSTONE_ADMIN_UWSGI_FILE" uwsgi buffer-size 65535
+ # Make sure the client doesn't try to re-use the connection.
+ iniset "$KEYSTONE_ADMIN_UWSGI_FILE" uwsgi add-header "Connection: close"
+
+ else # eventlet
+ if is_ssl_enabled_service key; then
+ iniset $KEYSTONE_CONF eventlet_server_ssl enable True
+ iniset $KEYSTONE_CONF eventlet_server_ssl certfile $KEYSTONE_SSL_CERT
+ iniset $KEYSTONE_CONF eventlet_server_ssl keyfile $KEYSTONE_SSL_KEY
+ fi
+
+ iniset $KEYSTONE_CONF eventlet_server public_port $service_port
+ iniset $KEYSTONE_CONF eventlet_server admin_port $auth_port
+
iniset $KEYSTONE_CONF eventlet_server admin_bind_host "$KEYSTONE_ADMIN_BIND_HOST"
iniset $KEYSTONE_CONF eventlet_server admin_workers "$API_WORKERS"
# Public workers will use the server default, typically number of CPU.
@@ -319,7 +383,7 @@
get_or_add_user_domain_role $admin_role $admin_user default
# Create service project/role
- get_or_create_project "$SERVICE_TENANT_NAME" default
+ get_or_create_project "$SERVICE_PROJECT_NAME" default
# Service role, so service users do not have to be admins
get_or_create_role service
@@ -393,7 +457,7 @@
local user
user=$(get_or_create_user "$1" "$SERVICE_PASSWORD" default)
- get_or_add_user_project_role "$role" "$user" "$SERVICE_TENANT_NAME"
+ get_or_add_user_project_role "$role" "$user" "$SERVICE_PROJECT_NAME"
}
# Configure the service to use the auth token middleware.
@@ -414,7 +478,7 @@
iniset $conf_file $section username $admin_user
iniset $conf_file $section password $SERVICE_PASSWORD
iniset $conf_file $section user_domain_id default
- iniset $conf_file $section project_name $SERVICE_TENANT_NAME
+ iniset $conf_file $section project_name $SERVICE_PROJECT_NAME
iniset $conf_file $section project_domain_id default
iniset $conf_file $section auth_uri $KEYSTONE_SERVICE_URI
@@ -493,11 +557,13 @@
setup_develop $KEYSTONE_DIR ldap
fi
- if [ "$KEYSTONE_USE_MOD_WSGI" == "True" ]; then
+ if [ "$KEYSTONE_DEPLOY" == "mod_wsgi" ]; then
install_apache_wsgi
if is_ssl_enabled_service "key"; then
enable_mod_ssl
fi
+ elif [ "$KEYSTONE_DEPLOY" == "uwsgi" ]; then
+ pip_install uwsgi
fi
}
@@ -511,12 +577,15 @@
auth_protocol="http"
fi
- if [ "$KEYSTONE_USE_MOD_WSGI" == "True" ]; then
+ if [ "$KEYSTONE_DEPLOY" == "mod_wsgi" ]; then
enable_apache_site keystone
restart_apache_server
tail_log key /var/log/$APACHE_NAME/keystone.log
tail_log key-access /var/log/$APACHE_NAME/keystone_access.log
- else
+ elif [ "$KEYSTONE_DEPLOY" == "uwsgi" ]; then
+ run_process key "$KEYSTONE_BIN_DIR/uwsgi $KEYSTONE_PUBLIC_UWSGI_FILE" "" "key-p"
+ run_process key "$KEYSTONE_BIN_DIR/uwsgi $KEYSTONE_ADMIN_UWSGI_FILE" "" "key-a"
+ else # eventlet
# Start Keystone in a screen window
run_process key "$KEYSTONE_BIN_DIR/keystone-all --config-file $KEYSTONE_CONF"
fi
@@ -541,7 +610,7 @@
# stop_keystone() - Stop running processes
function stop_keystone {
- if [ "$KEYSTONE_USE_MOD_WSGI" == "True" ]; then
+ if [ "$KEYSTONE_DEPLOY" == "mod_wsgi" ]; then
disable_apache_site keystone
restart_apache_server
fi
diff --git a/lib/neutron-legacy b/lib/neutron-legacy
index 7975570..7d6e881 100644
--- a/lib/neutron-legacy
+++ b/lib/neutron-legacy
@@ -73,6 +73,16 @@
PRIVATE_SUBNET_NAME=${PRIVATE_SUBNET_NAME:-"private-subnet"}
PUBLIC_SUBNET_NAME=${PUBLIC_SUBNET_NAME:-"public-subnet"}
+# Subnetpool defaults
+SUBNETPOOL_NAME=${SUBNETPOOL_NAME:-"shared-default-subnetpool"}
+
+SUBNETPOOL_PREFIX_V4=${SUBNETPOOL_PREFIX_V4:-10.0.0.0/8}
+SUBNETPOOL_PREFIX_V6=${SUBNETPOOL_PREFIX_V6:-2001:db8:8000::/48}
+
+SUBNETPOOL_SIZE_V4=${SUBNETPOOL_SIZE_V4:-24}
+SUBNETPOOL_SIZE_V6=${SUBNETPOOL_SIZE_V6:-64}
+
+
if is_ssl_enabled_service "neutron" || is_service_enabled tls-proxy; then
Q_PROTOCOL="https"
fi
@@ -478,12 +488,12 @@
function create_nova_conf_neutron {
iniset $NOVA_CONF DEFAULT network_api_class "nova.network.neutronv2.api.API"
- iniset $NOVA_CONF neutron auth_plugin "v3password"
+ iniset $NOVA_CONF neutron auth_type "password"
iniset $NOVA_CONF neutron auth_url "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_AUTH_PORT/v3"
iniset $NOVA_CONF neutron username "$Q_ADMIN_USERNAME"
iniset $NOVA_CONF neutron password "$SERVICE_PASSWORD"
iniset $NOVA_CONF neutron user_domain_name "Default"
- iniset $NOVA_CONF neutron project_name "$SERVICE_TENANT_NAME"
+ iniset $NOVA_CONF neutron project_name "$SERVICE_PROJECT_NAME"
iniset $NOVA_CONF neutron project_domain_name "Default"
iniset $NOVA_CONF neutron auth_strategy "$Q_AUTH_STRATEGY"
iniset $NOVA_CONF neutron region_name "$REGION_NAME"
@@ -551,12 +561,12 @@
die_if_not_set $LINENO NET_ID "Failure creating NET_ID for $PHYSICAL_NETWORK $TENANT_ID"
if [[ "$IP_VERSION" =~ 4.* ]]; then
- SUBNET_ID=$(neutron subnet-create --tenant_id $TENANT_ID --ip_version 4 ${ALLOCATION_POOL:+--allocation-pool $ALLOCATION_POOL} --name $PROVIDER_SUBNET_NAME --gateway $NETWORK_GATEWAY --subnetpool None $NET_ID $FIXED_RANGE | grep ' id ' | get_field 2)
+ SUBNET_ID=$(neutron subnet-create --tenant_id $TENANT_ID --ip_version 4 ${ALLOCATION_POOL:+--allocation-pool $ALLOCATION_POOL} --name $PROVIDER_SUBNET_NAME --gateway $NETWORK_GATEWAY $NET_ID $FIXED_RANGE | grep ' id ' | get_field 2)
die_if_not_set $LINENO SUBNET_ID "Failure creating SUBNET_ID for $PROVIDER_SUBNET_NAME $TENANT_ID"
fi
if [[ "$IP_VERSION" =~ .*6 ]] && [[ -n "$IPV6_PROVIDER_FIXED_RANGE" ]] && [[ -n "$IPV6_PROVIDER_NETWORK_GATEWAY" ]]; then
- SUBNET_V6_ID=$(neutron subnet-create --tenant_id $TENANT_ID --ip_version 6 --ipv6-address-mode $IPV6_ADDRESS_MODE --gateway $IPV6_PROVIDER_NETWORK_GATEWAY --name $IPV6_PROVIDER_SUBNET_NAME --subnetpool None $NET_ID $IPV6_PROVIDER_FIXED_RANGE | grep 'id' | get_field 2)
+ SUBNET_V6_ID=$(neutron subnet-create --tenant_id $TENANT_ID --ip_version 6 --ipv6-address-mode $IPV6_ADDRESS_MODE --gateway $IPV6_PROVIDER_NETWORK_GATEWAY --name $IPV6_PROVIDER_SUBNET_NAME $NET_ID $IPV6_PROVIDER_FIXED_RANGE | grep 'id' | get_field 2)
die_if_not_set $LINENO SUBNET_V6_ID "Failure creating SUBNET_V6_ID for $IPV6_PROVIDER_SUBNET_NAME $TENANT_ID"
fi
@@ -580,6 +590,8 @@
fi
fi
+ AUTO_ALLOCATE_EXT=$(neutron ext-list | grep 'auto-allocated-topology' | get_field 1)
+ SUBNETPOOL_EXT=$(neutron ext-list | grep 'subnet_allocation' | get_field 1)
if [[ "$Q_L3_ENABLED" == "True" ]]; then
# Create a router, and add the private subnet as one of its interfaces
if [[ "$Q_L3_ROUTER_PER_TENANT" == "True" ]]; then
@@ -592,11 +604,23 @@
die_if_not_set $LINENO ROUTER_ID "Failure creating ROUTER_ID for $Q_ROUTER_NAME"
fi
+ # if the extension is available, then mark the external
+ # network as default, and provision default subnetpools
+ EXTERNAL_NETWORK_FLAGS="--router:external"
+ if [[ -n $AUTO_ALLOCATE_EXT && -n $SUBNETPOOL_EXT ]]; then
+ EXTERNAL_NETWORK_FLAGS="$EXTERNAL_NETWORK_FLAGS --is-default"
+ if [[ "$IP_VERSION" =~ 4.* ]]; then
+ SUBNETPOOL_V4_ID=$(neutron subnetpool-create $SUBNETPOOL_NAME --default-prefixlen $SUBNETPOOL_SIZE_V4 --pool-prefix $SUBNETPOOL_PREFIX_V4 --shared --is-default=True | grep ' id ' | get_field 2)
+ fi
+ if [[ "$IP_VERSION" =~ .*6 ]]; then
+ SUBNETPOOL_V6_ID=$(neutron subnetpool-create $SUBNETPOOL_NAME --default-prefixlen $SUBNETPOOL_SIZE_V6 --pool-prefix $SUBNETPOOL_PREFIX_V6 --shared --is-default=True | grep ' id ' | get_field 2)
+ fi
+ fi
# Create an external network, and a subnet. Configure the external network as router gw
if [ "$Q_USE_PROVIDERNET_FOR_PUBLIC" = "True" ]; then
- EXT_NET_ID=$(neutron net-create "$PUBLIC_NETWORK_NAME" -- --router:external=True --provider:network_type=flat --provider:physical_network=${PUBLIC_PHYSICAL_NETWORK} | grep ' id ' | get_field 2)
+ EXT_NET_ID=$(neutron net-create "$PUBLIC_NETWORK_NAME" -- $EXTERNAL_NETWORK_FLAGS --provider:network_type=flat --provider:physical_network=${PUBLIC_PHYSICAL_NETWORK} | grep ' id ' | get_field 2)
else
- EXT_NET_ID=$(neutron net-create "$PUBLIC_NETWORK_NAME" -- --router:external=True | grep ' id ' | get_field 2)
+ EXT_NET_ID=$(neutron net-create "$PUBLIC_NETWORK_NAME" -- $EXTERNAL_NETWORK_FLAGS | grep ' id ' | get_field 2)
fi
die_if_not_set $LINENO EXT_NET_ID "Failure creating EXT_NET_ID for $PUBLIC_NETWORK_NAME"
@@ -775,7 +799,11 @@
fi
stop_process q-svc
- stop_process q-l3
+
+ if is_service_enabled q-l3; then
+ sudo pkill -f "radvd -C $DATA_DIR/neutron/ra"
+ stop_process q-l3
+ fi
if is_service_enabled q-meta; then
sudo pkill -9 -f neutron-ns-metadata-proxy || :
@@ -1163,12 +1191,12 @@
iniset $NEUTRON_CONF DEFAULT notify_nova_on_port_status_changes $Q_NOTIFY_NOVA_PORT_STATUS_CHANGES
iniset $NEUTRON_CONF DEFAULT notify_nova_on_port_data_changes $Q_NOTIFY_NOVA_PORT_DATA_CHANGES
- iniset $NEUTRON_CONF nova auth_plugin password
+ iniset $NEUTRON_CONF nova auth_type password
iniset $NEUTRON_CONF nova auth_url $KEYSTONE_AUTH_URI
iniset $NEUTRON_CONF nova username nova
iniset $NEUTRON_CONF nova password $SERVICE_PASSWORD
iniset $NEUTRON_CONF nova user_domain_id default
- iniset $NEUTRON_CONF nova project_name $SERVICE_TENANT_NAME
+ iniset $NEUTRON_CONF nova project_name $SERVICE_PROJECT_NAME
iniset $NEUTRON_CONF nova project_domain_id default
iniset $NEUTRON_CONF nova region_name $REGION_NAME
@@ -1265,7 +1293,6 @@
subnet_params+="--ip_version 4 "
subnet_params+="--gateway $NETWORK_GATEWAY "
subnet_params+="--name $PRIVATE_SUBNET_NAME "
- subnet_params+="--subnetpool None "
subnet_params+="$NET_ID $FIXED_RANGE"
local subnet_id
subnet_id=$(neutron subnet-create $subnet_params | grep ' id ' | get_field 2)
@@ -1282,7 +1309,6 @@
subnet_params+="--ip_version 6 "
subnet_params+="--gateway $IPV6_PRIVATE_NETWORK_GATEWAY "
subnet_params+="--name $IPV6_PRIVATE_SUBNET_NAME "
- subnet_params+="--subnetpool None "
subnet_params+="$NET_ID $FIXED_RANGE_V6 $ipv6_modes"
local ipv6_subnet_id
ipv6_subnet_id=$(neutron subnet-create $subnet_params | grep ' id ' | get_field 2)
@@ -1296,7 +1322,6 @@
subnet_params+="${Q_FLOATING_ALLOCATION_POOL:+--allocation-pool $Q_FLOATING_ALLOCATION_POOL} "
subnet_params+="--gateway $PUBLIC_NETWORK_GATEWAY "
subnet_params+="--name $PUBLIC_SUBNET_NAME "
- subnet_params+="--subnetpool None "
subnet_params+="$EXT_NET_ID $FLOATING_RANGE "
subnet_params+="-- --enable_dhcp=False"
local id_and_ext_gw_ip
@@ -1310,7 +1335,6 @@
local subnet_params="--ip_version 6 "
subnet_params+="--gateway $IPV6_PUBLIC_NETWORK_GATEWAY "
subnet_params+="--name $IPV6_PUBLIC_SUBNET_NAME "
- subnet_params+="--subnetpool None "
subnet_params+="$EXT_NET_ID $IPV6_PUBLIC_RANGE "
subnet_params+="-- --enable_dhcp=False"
local ipv6_id_and_ext_gw_ip
diff --git a/lib/neutron_plugins/vmware_dvs b/lib/neutron_plugins/vmware_dvs
deleted file mode 100644
index 587d5a6..0000000
--- a/lib/neutron_plugins/vmware_dvs
+++ /dev/null
@@ -1,10 +0,0 @@
-#!/bin/bash
-
-# This file is needed so Q_PLUGIN=vmware_dvs will work.
-
-# FIXME(salv-orlando): This function should not be here, but unfortunately
-# devstack calls it before the external plugins are fetched
-function has_neutron_plugin_security_group {
- # 0 means True here
- return 0
-}
diff --git a/lib/neutron_plugins/vmware_nsx b/lib/neutron_plugins/vmware_nsx
deleted file mode 100644
index b6c1c9c..0000000
--- a/lib/neutron_plugins/vmware_nsx
+++ /dev/null
@@ -1,10 +0,0 @@
-#!/bin/bash
-
-# This file is needed so Q_PLUGIN=vmware_nsx will work.
-
-# FIXME(salv-orlando): This function should not be here, but unfortunately
-# devstack calls it before the external plugins are fetched
-function has_neutron_plugin_security_group {
- # 0 means True here
- return 0
-}
diff --git a/lib/neutron_plugins/vmware_nsx_v b/lib/neutron_plugins/vmware_nsx_v
deleted file mode 100644
index 3d33c65..0000000
--- a/lib/neutron_plugins/vmware_nsx_v
+++ /dev/null
@@ -1,10 +0,0 @@
-#!/bin/bash
-#
-# This file is needed so Q_PLUGIN=vmware_nsx_v will work.
-
-# FIXME(salv-orlando): This function should not be here, but unfortunately
-# devstack calls it before the external plugins are fetched
-function has_neutron_plugin_security_group {
- # 0 means True here
- return 0
-}
diff --git a/lib/neutron_plugins/vmware_nsx_v3 b/lib/neutron_plugins/vmware_nsx_v3
deleted file mode 100644
index 6d8a6e6..0000000
--- a/lib/neutron_plugins/vmware_nsx_v3
+++ /dev/null
@@ -1,10 +0,0 @@
-#!/bin/bash
-
-# This file is needed so Q_PLUGIN=vmware_nsx_v3 will work.
-
-# FIXME(salv-orlando): This function should not be here, but unfortunately
-# devstack calls it before the external plugins are fetched
-function has_neutron_plugin_security_group {
- # 0 means True here
- return 0
-}
diff --git a/lib/nova b/lib/nova
index cce538d..fd458c5 100644
--- a/lib/nova
+++ b/lib/nova
@@ -404,8 +404,8 @@
#
# Project User Roles
# ------------------------------------------------------------------
-# SERVICE_TENANT_NAME nova admin
-# SERVICE_TENANT_NAME nova ResellerAdmin (if Swift is enabled)
+# SERVICE_PROJECT_NAME nova admin
+# SERVICE_PROJECT_NAME nova ResellerAdmin (if Swift is enabled)
function create_nova_accounts {
# Nova
@@ -444,7 +444,7 @@
if is_service_enabled swift; then
# Nova needs ResellerAdmin role to download images when accessing
# swift through the s3 api.
- get_or_add_user_project_role ResellerAdmin nova $SERVICE_TENANT_NAME
+ get_or_add_user_project_role ResellerAdmin nova $SERVICE_PROJECT_NAME
fi
fi
diff --git a/lib/swift b/lib/swift
index 947d2ab..b6c3ca4 100644
--- a/lib/swift
+++ b/lib/swift
@@ -450,7 +450,7 @@
auth_protocol = ${KEYSTONE_AUTH_PROTOCOL}
cafile = ${SSL_BUNDLE_FILE}
admin_user = swift
-admin_tenant_name = ${SERVICE_TENANT_NAME}
+admin_tenant_name = ${SERVICE_PROJECT_NAME}
admin_password = ${SERVICE_PASSWORD}
[filter:swift3]
@@ -812,7 +812,7 @@
# note we are using swift credentials!
OS_USERNAME=swift \
OS_PASSWORD=$SERVICE_PASSWORD \
- OS_PROJECT_NAME=$SERVICE_TENANT_NAME \
+ OS_PROJECT_NAME=$SERVICE_PROJECT_NAME \
openstack object store account \
set --property "Temp-URL-Key=$SWIFT_TEMPURL_KEY"
}
diff --git a/lib/tempest b/lib/tempest
index e90ff93..f75d755 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -321,7 +321,10 @@
local tmp_cfg_file
tmp_cfg_file=$(mktemp)
cd $TEMPEST_DIR
- tox -revenv -- tempest verify-config -u -r -o $tmp_cfg_file
+ tox -revenv --notest
+ # NOTE(mtreinish): Respect constraints on tempest verify-config venv
+ tox -evenv -- pip install -c $REQUIREMENTS_DIR/upper-constraints.txt -r requirements.txt
+ tox -evenv -- tempest verify-config -uro $tmp_cfg_file
local compute_api_extensions=${COMPUTE_API_EXTENSIONS:-"all"}
if [[ ! -z "$DISABLE_COMPUTE_API_EXTENSIONS" ]]; then
@@ -450,10 +453,6 @@
iniset $TEMPEST_CONFIG validation network_for_ssh $PRIVATE_NETWORK_NAME
# Volume
- # TODO(dkranz): Remove the bootable flag when Juno is end of life.
- iniset $TEMPEST_CONFIG volume-feature-enabled bootable True
- # TODO(jordanP): Remove the extend_with_snapshot flag when Juno is end of life.
- iniset $TEMPEST_CONFIG volume-feature-enabled extend_with_snapshot True
# TODO(obutenko): Remove the incremental_backup_force flag when Kilo and Juno is end of life.
iniset $TEMPEST_CONFIG volume-feature-enabled incremental_backup_force True
# TODO(ynesenenko): Remove the volume_services flag when Liberty and Kilo will correct work with host info.
@@ -588,6 +587,10 @@
pip_install tox
pushd $TEMPEST_DIR
tox --notest -efull
+ # NOTE(mtreinish): Respect constraints in the tempest full venv; jobs using a
+ # tox env other than full will not respect constraints and will instead run
+ # pip install -U on tempest requirements.
+ $TEMPEST_DIR/.tox/full/bin/pip install -c $REQUIREMENTS_DIR/upper-constraints.txt -r requirements.txt
PROJECT_VENV["tempest"]=${TEMPEST_DIR}/.tox/full
install_tempest_lib
popd
diff --git a/openrc b/openrc
index 9bc0fd7..460cf14 100644
--- a/openrc
+++ b/openrc
@@ -1,9 +1,9 @@
#!/usr/bin/env bash
#
-# source openrc [username] [tenantname]
+# source openrc [username] [projectname]
#
-# Configure a set of credentials for $TENANT/$USERNAME:
-# Set OS_TENANT_NAME to override the default tenant 'demo'
+# Configure a set of credentials for $PROJECT/$USERNAME:
+# Set OS_PROJECT_NAME to override the default project 'demo'
# Set OS_USERNAME to override the default user name 'demo'
# Set ADMIN_PASSWORD to set the password for 'admin' and 'demo'
@@ -14,7 +14,7 @@
OS_USERNAME=$1
fi
if [[ -n "$2" ]]; then
- OS_TENANT_NAME=$2
+ OS_PROJECT_NAME=$2
fi
# Find the other rc files
@@ -34,13 +34,17 @@
# Get some necessary configuration
source $RC_DIR/lib/tls
-# The introduction of Keystone to the OpenStack ecosystem has standardized the
-# term **tenant** as the entity that owns resources. In some places references
-# still exist to the original Nova term **project** for this use. Also,
-# **tenant_name** is preferred to **tenant_id**.
-export OS_TENANT_NAME=${OS_TENANT_NAME:-demo}
+# The OpenStack ecosystem has standardized the term **project** as the
+# entity that owns resources. In some places **tenant** remains
+# referenced, but in all cases this just means **project**. We will
+# warn if we need to turn on legacy **tenant** support to have a
+# working environment.
+export OS_PROJECT_NAME=${OS_PROJECT_NAME:-demo}
-# In addition to the owning entity (tenant), nova stores the entity performing
+echo "WARNING: setting legacy OS_TENANT_NAME to support cli tools."
+export OS_TENANT_NAME=$OS_PROJECT_NAME
+
+# In addition to the owning entity (project), nova stores the entity performing
# the action as the **user**.
export OS_USERNAME=${OS_USERNAME:-demo}
@@ -81,7 +85,7 @@
# Authenticating against an OpenStack cloud using Keystone returns a **Token**
# and **Service Catalog**. The catalog contains the endpoints for all services
-# the user/tenant has access to - including nova, glance, keystone, swift, ...
+# the user/project has access to - including nova, glance, keystone, swift, ...
# We currently recommend using the 2.0 *identity api*.
#
export OS_AUTH_URL=$KEYSTONE_AUTH_PROTOCOL://$KEYSTONE_AUTH_HOST:5000/v${OS_IDENTITY_API_VERSION}
diff --git a/stack.sh b/stack.sh
index 6dddea4..0be3585 100755
--- a/stack.sh
+++ b/stack.sh
@@ -1210,7 +1210,7 @@
# Create an access key and secret key for Nova EC2 register image
if is_service_enabled keystone && is_service_enabled swift3 && is_service_enabled nova; then
- eval $(openstack ec2 credentials create --user nova --project $SERVICE_TENANT_NAME -f shell -c access -c secret)
+ eval $(openstack ec2 credentials create --user nova --project $SERVICE_PROJECT_NAME -f shell -c access -c secret)
iniset $NOVA_CONF DEFAULT s3_access_key "$access"
iniset $NOVA_CONF DEFAULT s3_secret_key "$secret"
iniset $NOVA_CONF DEFAULT s3_affix_tenant "True"
diff --git a/tools/worlddump.py b/tools/worlddump.py
index 198bb7e..d129374 100755
--- a/tools/worlddump.py
+++ b/tools/worlddump.py
@@ -101,14 +101,24 @@
_dump_cmd("sudo iptables --line-numbers -L -nv -t %s" % table)
+def _netns_list():
+ process = subprocess.Popen(['ip', 'netns'], stdout=subprocess.PIPE)
+ stdout, _ = process.communicate()
+ return stdout.split()
+
+
def network_dump():
_header("Network Dump")
_dump_cmd("brctl show")
_dump_cmd("arp -n")
- _dump_cmd("ip addr")
- _dump_cmd("ip link")
- _dump_cmd("ip route")
+ ip_cmds = ["addr", "link", "route"]
+ for cmd in ip_cmds + ['netns']:
+ _dump_cmd("ip %s" % cmd)
+ for netns_ in _netns_list():
+ for cmd in ip_cmds:
+ args = {'netns': netns_, 'cmd': cmd}
+ _dump_cmd('sudo ip netns exec %(netns)s ip %(cmd)s' % args)
def ovs_dump():
diff --git a/tox.ini b/tox.ini
index bdfd29d..ef557fb 100644
--- a/tox.ini
+++ b/tox.ini
@@ -12,7 +12,7 @@
# against devstack, just set BASHATE_INSTALL_PATH=/path/... to your
# modified bashate tree
deps =
- {env:BASHATE_INSTALL_PATH:bashate==0.3.2}
+ {env:BASHATE_INSTALL_PATH:bashate==0.4.0}
whitelist_externals = bash
commands = bash -c "find {toxinidir} \
-not \( -type d -name .?\* -prune \) \
@@ -21,6 +21,7 @@
-type f \
-not -name \*~ \
-not -name \*.md \
+ -not -name stack-screenrc \
\( \
-name \*.sh -or \
-name \*.orig -or \
diff --git a/unstack.sh b/unstack.sh
index d69e3f5..d7670e3 100755
--- a/unstack.sh
+++ b/unstack.sh
@@ -9,12 +9,12 @@
# Stop all processes by setting ``UNSTACK_ALL`` or specifying ``-a``
# on the command line
-UNSTACK_ALL=""
+UNSTACK_ALL=${UNSTACK_ALL:-""}
while getopts ":a" opt; do
case $opt in
a)
- UNSTACK_ALL=""
+ UNSTACK_ALL="-1"
;;
esac
done