Merge "Documentation: Using Neutron with DevStack"
diff --git a/MAINTAINERS.rst b/MAINTAINERS.rst
index d55135d..a376eb0 100644
--- a/MAINTAINERS.rst
+++ b/MAINTAINERS.rst
@@ -56,12 +56,6 @@
* YAMAMOTO Takashi <yamamoto@valinux.co.jp>
* Fumihiko Kakuma <kakuma@valinux.co.jp>
-Ryu
-~~~
-
-* YAMAMOTO Takashi <yamamoto@valinux.co.jp>
-* Fumihiko Kakuma <kakuma@valinux.co.jp>
-
Sahara
~~~~~~
diff --git a/doc/source/changes.rst b/doc/source/changes.rst
index f4a326d..7b75375 100644
--- a/doc/source/changes.rst
+++ b/doc/source/changes.rst
@@ -3,7 +3,7 @@
=======
Recent Changes What's been happening?
--------------------------------------
+=====================================
These are the commits to DevStack for the last six months. For the
complete list see `the DevStack project in
diff --git a/doc/source/configuration.rst b/doc/source/configuration.rst
index 869908f..5157622 100644
--- a/doc/source/configuration.rst
+++ b/doc/source/configuration.rst
@@ -365,6 +365,35 @@
API_RATE_LIMIT=False
+IP Version
+ | Default: ``IP_VERSION=4``
+    | This setting can be used to configure DevStack to create an IPv4,
+      IPv6, or dual-stack tenant data network by setting ``IP_VERSION`` to
+      ``4``, ``6``, or ``4+6`` respectively. This functionality requires
+      that the Neutron networking service be enabled with the following
+      options:
+ |
+
+ ::
+
+ disable_service n-net
+ enable_service q-svc q-agt q-dhcp q-l3
+
+ | The following optional variables can be used to alter the default IPv6
+ behavior:
+ |
+
+ ::
+
+ IPV6_RA_MODE=slaac
+ IPV6_ADDRESS_MODE=slaac
+ FIXED_RANGE_V6=fd$IPV6_GLOBAL_ID::/64
+ IPV6_PRIVATE_NETWORK_GATEWAY=fd$IPV6_GLOBAL_ID::1
+
+ | *Note: ``FIXED_RANGE_V6`` and ``IPV6_PRIVATE_NETWORK_GATEWAY``
+ can be configured with any valid IPv6 prefix. The default values make
+ use of an auto-generated ``IPV6_GLOBAL_ID`` to comply with RFC 4193.*
+
Examples
========
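As an illustration of the new setting, a minimal ``local.conf`` along these
lines would request a dual-stack tenant network (the prefix override is shown
only as an example; by default it is auto-generated)::

    [[local|localrc]]
    disable_service n-net
    enable_service q-svc q-agt q-dhcp q-l3
    IP_VERSION=4+6
    # Optional: pin the RFC 4193 prefix instead of the auto-generated one
    # FIXED_RANGE_V6=fd12:3456:789a::/64
    # IPV6_PRIVATE_NETWORK_GATEWAY=fd12:3456:789a::1
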
diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst
index 7ca3d64..8c9e658 100644
--- a/doc/source/contributing.rst
+++ b/doc/source/contributing.rst
@@ -10,9 +10,9 @@
OpenStack project you are good to go.
Things To Know
-~~~~~~~~~~~~~~
+==============
-|
+|
| **Where Things Are**
The official DevStack repository is located at
@@ -30,7 +30,7 @@
is, however, used for all commits except for the text of this website.
That should also change in the near future.
-|
+|
| **HACKING.rst**
Like most OpenStack projects, DevStack includes a ``HACKING.rst`` file
@@ -38,7 +38,7 @@
``HACKING.rst`` is in the main DevStack repo it is considered
authoritative. Much of the content on this page is taken from there.
-|
+|
| **bashate Formatting**
Around the time of the OpenStack Havana release we added a tool to do
@@ -51,9 +51,9 @@
formatting. Run it on the entire project with ``./run_tests.sh``.
Code
-~~~~
+====
-|
+|
| **Repo Layout**
The DevStack repo generally keeps all of the primary scripts at the root
diff --git a/doc/source/faq.rst b/doc/source/faq.rst
index b7943ba..f39471c 100644
--- a/doc/source/faq.rst
+++ b/doc/source/faq.rst
@@ -7,7 +7,7 @@
- `Miscellaneous <#misc>`__
General Questions
-~~~~~~~~~~~~~~~~~
+=================
Q: Can I use DevStack for production?
A: No. We mean it. Really. DevStack makes some implementation
@@ -77,7 +77,7 @@
is valuable so we do it...
Operation and Configuration
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
+===========================
Q: Can DevStack handle a multi-node installation?
A: Indirectly, yes. You run DevStack on each node with the
@@ -157,7 +157,7 @@
``FORCE_PREREQ=1`` and the package checks will never be skipped.
Miscellaneous
-~~~~~~~~~~~~~
+=============
Q: ``tools/fixup_stuff.sh`` is broken and shouldn't 'fix' just one version of packages.
A: [Another not-a-question] No it isn't. Stuff in there is to
diff --git a/doc/source/guides/multinode-lab.rst b/doc/source/guides/multinode-lab.rst
index 4c60b6a..ff81c93 100644
--- a/doc/source/guides/multinode-lab.rst
+++ b/doc/source/guides/multinode-lab.rst
@@ -6,10 +6,10 @@
physical servers.
Prerequisites Linux & Network
------------------------------
+=============================
Minimal Install
-~~~~~~~~~~~~~~~
+---------------
You need to have a system with a fresh install of Linux. You can
download the `Minimal
@@ -27,7 +27,7 @@
apt-get install -y git sudo || yum install -y git sudo
Network Configuration
-~~~~~~~~~~~~~~~~~~~~~
+---------------------
The first iteration of the lab uses OpenStack's FlatDHCP network
controller so only a single network will be required. It should be on
@@ -60,10 +60,10 @@
GATEWAY=192.168.42.1
Installation shake and bake
----------------------------
+===========================
Add the DevStack User
-~~~~~~~~~~~~~~~~~~~~~
+---------------------
OpenStack runs as a non-root user that has sudo access to root. There is
nothing special about the name, we'll use ``stack`` here. Every node
@@ -88,7 +88,7 @@
``stack`` user.
Set Up Ssh
-~~~~~~~~~~
+----------
Set up the stack user on each node with an ssh key for access:
@@ -98,7 +98,7 @@
echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCyYjfgyPazTvGpd8OaAvtU2utL8W6gWC4JdRS1J95GhNNfQd657yO6s1AH5KYQWktcE6FO/xNUC2reEXSGC7ezy+sGO1kj9Limv5vrvNHvF1+wts0Cmyx61D2nQw35/Qz8BvpdJANL7VwP/cFI/p3yhvx2lsnjFE3hN8xRB2LtLUopUSVdBwACOVUmH2G+2BWMJDjVINd2DPqRIA4Zhy09KJ3O1Joabr0XpQL0yt/I9x8BVHdAx6l9U0tMg9dj5+tAjZvMAFfye3PJcYwwsfJoFxC8w/SLtqlFX7Ehw++8RtvomvuipLdmWCy+T9hIkl+gHYE4cS3OIqXH7f49jdJf jesse@spacey.local" > ~/.ssh/authorized_keys
Download DevStack
-~~~~~~~~~~~~~~~~~
+-----------------
Grab the latest version of DevStack:
@@ -112,7 +112,7 @@
(aka 'head node') and the compute nodes.
Configure Cluster Controller
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+----------------------------
The cluster controller runs all OpenStack services. Configure the
cluster controller's DevStack in ``local.conf``:
@@ -153,7 +153,7 @@
available in ``stack.sh.log``.
Configure Compute Nodes
-~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------
The compute nodes only run the OpenStack worker services. For additional
machines, create a ``local.conf`` with:
@@ -196,7 +196,7 @@
available in ``stack.sh.log``.
Cleaning Up After DevStack
-~~~~~~~~~~~~~~~~~~~~~~~~~~
+--------------------------
Shutting down OpenStack is now as simple as running the included
``unstack.sh`` script:
@@ -223,10 +223,10 @@
sudo virsh list | grep inst | awk '{print $1}' | xargs -n1 virsh destroy
Options pimp your stack
------------------------
+=======================
Additional Users
-~~~~~~~~~~~~~~~~
+----------------
DevStack creates two OpenStack users (``admin`` and ``demo``) and two
tenants (also ``admin`` and ``demo``). ``admin`` is exactly what it
@@ -242,7 +242,7 @@
# Get admin creds
. openrc admin admin
-
+
# List existing tenants
keystone tenant-list
@@ -260,7 +260,7 @@
# keystone role-list
Swift
-~~~~~
+-----
Swift requires a significant amount of resources and is disabled by
default in DevStack. The support in DevStack is geared toward a minimal
@@ -275,12 +275,12 @@
Swift will put its data files in ``SWIFT_DATA_DIR`` (default
``/opt/stack/data/swift``). The size of the data 'partition' created
(really a loop-mounted file) is set by ``SWIFT_LOOPBACK_DISK_SIZE``. The
-Swift config files are located in ``SWIFT_CONFIG_DIR`` (default
+Swift config files are located in ``SWIFT_CONF_DIR`` (default
``/etc/swift``). All of these settings can be overridden in (wait for
it...) ``local.conf``.
Volumes
-~~~~~~~
+-------
DevStack will automatically use an existing LVM volume group named
``stack-volumes`` to store cloud-created volumes. If ``stack-volumes``
@@ -305,7 +305,7 @@
vgcreate stack-volumes /dev/sdc
Syslog
-~~~~~~
+------
DevStack is capable of using ``rsyslog`` to aggregate logging across the
cluster. It is off by default; to turn it on set ``SYSLOG=True`` in
@@ -319,7 +319,7 @@
SYSLOG_HOST=192.168.42.11
Using Alternate Repositories/Branches
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-------------------------------------
The git repositories for all of the OpenStack services are defined in
``stackrc``. Since this file is a part of the DevStack package changes
@@ -349,10 +349,10 @@
GLANCE_REPO=https://github.com/mcuser/glance.git
Notes stuff you might need to know
-----------------------------------
+==================================
Reset the Bridge
-~~~~~~~~~~~~~~~~
+----------------
How to reset the bridge configuration:
@@ -363,7 +363,7 @@
sudo brctl delbr br100
Set MySQL Password
-~~~~~~~~~~~~~~~~~~
+------------------
If you forgot to set the root password you can do this:
diff --git a/doc/source/guides/single-machine.rst b/doc/source/guides/single-machine.rst
index a7a1099..17e9b9e 100644
--- a/doc/source/guides/single-machine.rst
+++ b/doc/source/guides/single-machine.rst
@@ -7,10 +7,10 @@
with hardware.
Prerequisites Linux & Network
------------------------------
+=============================
Minimal Install
-~~~~~~~~~~~~~~~
+---------------
You need to have a system with a fresh install of Linux. You can
download the `Minimal
@@ -25,7 +25,7 @@
the interface(s) that OpenStack uses for bridging.
Network Configuration
-~~~~~~~~~~~~~~~~~~~~~
+---------------------
Determine the network configuration on the interface used to integrate
your OpenStack cloud with your existing network. For example, if the IPs
@@ -36,10 +36,10 @@
of DHCP (i.e. 192.168.1.201).
Installation shake and bake
----------------------------
+===========================
Add your user
-~~~~~~~~~~~~~
+-------------
We need to add a user to install DevStack. (if you created a user during
install you can skip this step and just give the user sudo privileges
@@ -61,7 +61,7 @@
**login** as that user.
Download DevStack
-~~~~~~~~~~~~~~~~~
+-----------------
We'll grab the latest version of DevStack via https:
@@ -72,7 +72,7 @@
cd devstack
Run DevStack
-~~~~~~~~~~~~
+------------
Now to configure ``stack.sh``. DevStack includes a sample in
``devstack/samples/local.conf``. Create ``local.conf`` as shown below to
@@ -120,7 +120,7 @@
accounts and passwords to poke at your shiny new OpenStack.
Using OpenStack
-~~~~~~~~~~~~~~~
+---------------
At this point you should be able to access the dashboard from other
computers on the local network. In this example that would be
diff --git a/doc/source/guides/single-vm.rst b/doc/source/guides/single-vm.rst
index ef59953..a41c4e1 100644
--- a/doc/source/guides/single-vm.rst
+++ b/doc/source/guides/single-vm.rst
@@ -9,16 +9,16 @@
operation. Speed not required.
Prerequisites Cloud & Image
----------------------------
+===========================
Virtual Machine
-~~~~~~~~~~~~~~~
+---------------
DevStack should run in any virtual machine running a supported Linux
release. It will perform best with 2Gb or more of RAM.
OpenStack Deployment & cloud-init
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+---------------------------------
If the cloud service has an image with ``cloud-init`` pre-installed, use
it. You can get one from `Ubuntu's Daily
@@ -33,10 +33,10 @@
bare-bones server installation.
Installation shake and bake
----------------------------
+===========================
Launching With Cloud-Init
-~~~~~~~~~~~~~~~~~~~~~~~~~
+-------------------------
This cloud config grabs the latest version of DevStack via git, creates
a minimal ``local.conf`` file and kicks off ``stack.sh``. It should be
@@ -79,13 +79,13 @@
to create a non-root user and run the ``start.sh`` script as that user.
Launching By Hand
-~~~~~~~~~~~~~~~~~
+-----------------
Using a hypervisor directly, launch the VM and either manually perform
the steps in the embedded shell script above or copy it into the VM.
Using OpenStack
-~~~~~~~~~~~~~~~
+---------------
At this point you should be able to access the dashboard. Launch VMs and
if you give them floating IPs access those VMs from other machines on
diff --git a/doc/source/overview.rst b/doc/source/overview.rst
index 4078240..23ccf27 100644
--- a/doc/source/overview.rst
+++ b/doc/source/overview.rst
@@ -13,10 +13,10 @@
"tested") going forward.
Supported Components
---------------------
+====================
Base OS
-~~~~~~~
+-------
*The OpenStack Technical Committee (TC) has defined the current CI
strategy to include the latest Ubuntu release and the latest RHEL
@@ -33,7 +33,7 @@
side-effects on other OS platforms.
Databases
-~~~~~~~~~
+---------
*As packaged by the host OS*
@@ -41,7 +41,7 @@
- PostgreSQL
Queues
-~~~~~~
+------
*As packaged by the host OS*
@@ -49,14 +49,14 @@
- Qpid
Web Server
-~~~~~~~~~~
+----------
*As packaged by the host OS*
- Apache
OpenStack Network
-~~~~~~~~~~~~~~~~~
+-----------------
*Default to Nova Network, optionally use Neutron*
@@ -65,7 +65,7 @@
mode using linuxbridge or OpenVSwitch.
Services
-~~~~~~~~
+--------
The default services configured by DevStack are Identity (Keystone),
Object Storage (Swift), Image Storage (Glance), Block Storage (Cinder),
@@ -77,14 +77,14 @@
scripts that perform the configuration and startup of the service.
Node Configurations
-~~~~~~~~~~~~~~~~~~~
+-------------------
- single node
- multi-node is not tested regularly by the core team, and even then
only minimal configurations are reviewed
Exercises
-~~~~~~~~~
+---------
The DevStack exercise scripts are no longer used as integration and gate
testing as that job has transitioned to Tempest. They are still
diff --git a/doc/source/plugins.rst b/doc/source/plugins.rst
index b4136c4..485cd0f 100644
--- a/doc/source/plugins.rst
+++ b/doc/source/plugins.rst
@@ -6,7 +6,7 @@
support for additional projects and features.
Extras.d Hooks
-~~~~~~~~~~~~~~
+==============
These hooks are an extension of the service calls in
``stack.sh`` at specific points in its run, plus ``unstack.sh`` and
@@ -93,7 +93,7 @@
but after ``unstack.sh`` has been called.
Hypervisor
-~~~~~~~~~~
+==========
Hypervisor plugins are fairly new and condense most hypervisor
configuration into one place.
diff --git a/lib/glance b/lib/glance
index 04c088a..0c1045f 100644
--- a/lib/glance
+++ b/lib/glance
@@ -28,7 +28,7 @@
# Set up default directories
GITDIR["python-glanceclient"]=$DEST/python-glanceclient
-GIRDIR["glance_store"]=$DEST/glance_store
+GITDIR["glance_store"]=$DEST/glance_store
GLANCE_DIR=$DEST/glance
GLANCE_CACHE_DIR=${GLANCE_CACHE_DIR:=$DATA_DIR/glance/cache}
diff --git a/lib/ironic b/lib/ironic
index 98619c5..f2b1fb2 100644
--- a/lib/ironic
+++ b/lib/ironic
@@ -93,7 +93,7 @@
IRONIC_AGENT_RAMDISK_URL=${IRONIC_AGENT_RAMDISK_URL:-http://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe_image-oem.cpio.gz}
# Which deploy driver to use - valid choices right now
-# are 'pxe_ssh', 'pxe_ipmitool' and 'agent_ssh'.
+# are 'pxe_ssh', 'pxe_ipmitool', 'agent_ssh' and 'agent_ipmitool'.
IRONIC_DEPLOY_DRIVER=${IRONIC_DEPLOY_DRIVER:-pxe_ssh}
#TODO(agordeev): replace 'ubuntu' with host distro name getting
@@ -152,6 +152,11 @@
return 1
}
+function is_deployed_by_agent {
+ [[ -z "${IRONIC_DEPLOY_DRIVER%%agent*}" ]] && return 0
+ return 1
+}
+
# install_ironic() - Collect source and prepare
function install_ironic {
# make sure all needed service were enabled
@@ -307,7 +312,7 @@
if [[ "$IRONIC_VM_LOG_CONSOLE" == "True" ]] ; then
iniset $IRONIC_CONF_FILE pxe pxe_append_params "nofb nomodeset vga=normal console=ttyS0"
fi
- if [[ "$IRONIC_DEPLOY_DRIVER" == "agent_ssh" ]] ; then
+ if is_deployed_by_agent; then
if [[ "$SWIFT_ENABLE_TEMPURLS" == "True" ]] ; then
iniset $IRONIC_CONF_FILE glance swift_temp_url_key $SWIFT_TEMPURL_KEY
else
@@ -510,7 +515,7 @@
if [[ "$IRONIC_DEPLOY_DRIVER" == "pxe_ssh" ]] ; then
local _IRONIC_DEPLOY_KERNEL_KEY=pxe_deploy_kernel
local _IRONIC_DEPLOY_RAMDISK_KEY=pxe_deploy_ramdisk
- elif [[ "$IRONIC_DEPLOY_DRIVER" == "agent_ssh" ]] ; then
+ elif is_deployed_by_agent; then
local _IRONIC_DEPLOY_KERNEL_KEY=deploy_kernel
local _IRONIC_DEPLOY_RAMDISK_KEY=deploy_ramdisk
fi
@@ -552,6 +557,10 @@
# we create the bare metal flavor with minimum value
local node_options="-i ipmi_address=$ipmi_address -i ipmi_password=$ironic_ipmi_passwd\
-i ipmi_username=$ironic_ipmi_username"
+ if is_deployed_by_agent; then
+ node_options+=" -i $_IRONIC_DEPLOY_KERNEL_KEY=$IRONIC_DEPLOY_KERNEL_ID"
+ node_options+=" -i $_IRONIC_DEPLOY_RAMDISK_KEY=$IRONIC_DEPLOY_RAMDISK_ID"
+ fi
fi
local node_id=$(ironic node-create --chassis_uuid $chassis_id \
@@ -589,7 +598,7 @@
# nodes boot from TFTP and callback to the API server listening on $HOST_IP
sudo iptables -I INPUT -d $HOST_IP -p udp --dport 69 -j ACCEPT || true
sudo iptables -I INPUT -d $HOST_IP -p tcp --dport $IRONIC_SERVICE_PORT -j ACCEPT || true
- if [ "$IRONIC_DEPLOY_DRIVER" == "agent_ssh" ]; then
+ if is_deployed_by_agent; then
# agent ramdisk gets instance image from swift
sudo iptables -I INPUT -d $HOST_IP -p tcp --dport ${SWIFT_DEFAULT_BIND_PORT:-8080} -j ACCEPT || true
fi
@@ -665,8 +674,8 @@
fi
if [ -z "$IRONIC_DEPLOY_KERNEL" -o -z "$IRONIC_DEPLOY_RAMDISK" ]; then
- local IRONIC_DEPLOY_KERNEL_PATH=$TOP_DIR/files/ir-deploy.kernel
- local IRONIC_DEPLOY_RAMDISK_PATH=$TOP_DIR/files/ir-deploy.initramfs
+ local IRONIC_DEPLOY_KERNEL_PATH=$TOP_DIR/files/ir-deploy-$IRONIC_DEPLOY_DRIVER.kernel
+ local IRONIC_DEPLOY_RAMDISK_PATH=$TOP_DIR/files/ir-deploy-$IRONIC_DEPLOY_DRIVER.initramfs
else
local IRONIC_DEPLOY_KERNEL_PATH=$IRONIC_DEPLOY_KERNEL
local IRONIC_DEPLOY_RAMDISK_PATH=$IRONIC_DEPLOY_RAMDISK
@@ -677,17 +686,17 @@
if [ "$IRONIC_BUILD_DEPLOY_RAMDISK" = "True" ]; then
# we can build them only if we're not offline
if [ "$OFFLINE" != "True" ]; then
- if [ "$IRONIC_DEPLOY_DRIVER" == "agent_ssh" ]; then
+ if is_deployed_by_agent; then
build_ipa_coreos_ramdisk $IRONIC_DEPLOY_KERNEL_PATH $IRONIC_DEPLOY_RAMDISK_PATH
else
ramdisk-image-create $IRONIC_DEPLOY_FLAVOR \
- -o $TOP_DIR/files/ir-deploy
+ -o $TOP_DIR/files/ir-deploy-$IRONIC_DEPLOY_DRIVER
fi
else
die $LINENO "Deploy kernel+ramdisk files don't exist and cannot be build in OFFLINE mode"
fi
else
- if [ "$IRONIC_DEPLOY_DRIVER" == "agent_ssh" ]; then
+ if is_deployed_by_agent; then
# download the agent image tarball
wget "$IRONIC_AGENT_KERNEL_URL" -O $IRONIC_DEPLOY_KERNEL_PATH
wget "$IRONIC_AGENT_RAMDISK_URL" -O $IRONIC_DEPLOY_RAMDISK_PATH
@@ -751,7 +760,7 @@
restart_service xinetd
sudo iptables -D INPUT -d $HOST_IP -p udp --dport 69 -j ACCEPT || true
sudo iptables -D INPUT -d $HOST_IP -p tcp --dport $IRONIC_SERVICE_PORT -j ACCEPT || true
- if [ "$IRONIC_DEPLOY_DRIVER" == "agent_ssh" ]; then
+ if is_deployed_by_agent; then
# agent ramdisk gets instance image from swift
sudo iptables -D INPUT -d $HOST_IP -p tcp --dport ${SWIFT_DEFAULT_BIND_PORT:-8080} -j ACCEPT || true
fi
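
The new ``is_deployed_by_agent`` helper above keys off a prefix test rather
than an exact driver-name match: ``${IRONIC_DEPLOY_DRIVER%%agent*}`` strips the
longest suffix matching ``agent*``, so the result is empty exactly when the
driver name starts with ``agent``. A quick sketch of the behaviour::

    IRONIC_DEPLOY_DRIVER=agent_ipmitool
    echo "<${IRONIC_DEPLOY_DRIVER%%agent*}>"    # prints "<>"        -> agent deploy
    IRONIC_DEPLOY_DRIVER=pxe_ssh
    echo "<${IRONIC_DEPLOY_DRIVER%%agent*}>"    # prints "<pxe_ssh>" -> not agent deploy
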
diff --git a/lib/neutron b/lib/neutron
index db6bd47..09d9354 100644
--- a/lib/neutron
+++ b/lib/neutron
@@ -51,10 +51,22 @@
#
# With Neutron networking the NETWORK_MANAGER variable is ignored.
+# Settings
+# --------
+
+# Timeout value in seconds to wait for IPv6 gateway configuration
+GATEWAY_TIMEOUT=30
+
# Neutron Network Configuration
# -----------------------------
+# Subnet IP version
+IP_VERSION=${IP_VERSION:-4}
+# Validate IP_VERSION
+if [[ $IP_VERSION != "4" ]] && [[ $IP_VERSION != "6" ]] && [[ $IP_VERSION != "4+6" ]]; then
+ die $LINENO "IP_VERSION must be either 4, 6, or 4+6"
+fi
# Gateway and subnet defaults, in case they are not customized in localrc
NETWORK_GATEWAY=${NETWORK_GATEWAY:-10.0.0.1}
PUBLIC_NETWORK_GATEWAY=${PUBLIC_NETWORK_GATEWAY:-172.24.4.1}
@@ -65,6 +77,22 @@
Q_PROTOCOL="https"
fi
+# Generate 40-bit IPv6 Global ID to comply with RFC 4193
+IPV6_GLOBAL_ID=`uuidgen | sed s/-//g | cut -c 23- | sed -e "s/\(..\)\(....\)\(....\)/\1:\2:\3/"`
+
+# IPv6 gateway and subnet defaults, in case they are not customized in localrc
+IPV6_RA_MODE=${IPV6_RA_MODE:-slaac}
+IPV6_ADDRESS_MODE=${IPV6_ADDRESS_MODE:-slaac}
+IPV6_PUBLIC_SUBNET_NAME=${IPV6_PUBLIC_SUBNET_NAME:-ipv6-public-subnet}
+IPV6_PRIVATE_SUBNET_NAME=${IPV6_PRIVATE_SUBNET_NAME:-ipv6-private-subnet}
+FIXED_RANGE_V6=${FIXED_RANGE_V6:-fd$IPV6_GLOBAL_ID::/64}
+IPV6_PRIVATE_NETWORK_GATEWAY=${IPV6_PRIVATE_NETWORK_GATEWAY:-fd$IPV6_GLOBAL_ID::1}
+IPV6_PUBLIC_RANGE=${IPV6_PUBLIC_RANGE:-fe80:cafe:cafe::/64}
+IPV6_PUBLIC_NETWORK_GATEWAY=${IPV6_PUBLIC_NETWORK_GATEWAY:-fe80:cafe:cafe::2}
+# IPV6_ROUTER_GW_IP must be defined when IP_VERSION=4+6 as it cannot be
+# obtained conventionally until the l3-agent has support for dual-stack
+# TODO (john-davidge) Remove once l3-agent supports dual-stack
+IPV6_ROUTER_GW_IP=${IPV6_ROUTER_GW_IP:-fe80:cafe:cafe::1}
# Set up default directories
GITDIR["python-neutronclient"]=$DEST/python-neutronclient
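
The ``IPV6_GLOBAL_ID`` pipeline above keeps the last 10 hex digits (40 bits) of
a UUID and regroups them so that ``fd$IPV6_GLOBAL_ID::/64`` forms a valid
RFC 4193 unique-local prefix. A walk-through with an example UUID::

    $ uuidgen | sed s/-//g
    1df52f5c37f441af9b362b3e7ab27a9d          # example value
    $ echo 1df52f5c37f441af9b362b3e7ab27a9d | cut -c 23- | \
          sed -e "s/\(..\)\(....\)\(....\)/\1:\2:\3/"
    3e:7ab2:7a9d                              # last 40 bits, regrouped
    # => FIXED_RANGE_V6 defaults to fd3e:7ab2:7a9d::/64
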
@@ -531,8 +559,16 @@
else
NET_ID=$(neutron net-create --tenant-id $TENANT_ID "$PRIVATE_NETWORK_NAME" | grep ' id ' | get_field 2)
die_if_not_set $LINENO NET_ID "Failure creating NET_ID for $PHYSICAL_NETWORK $TENANT_ID"
- SUBNET_ID=$(neutron subnet-create --tenant-id $TENANT_ID --ip_version 4 --gateway $NETWORK_GATEWAY --name $PRIVATE_SUBNET_NAME $NET_ID $FIXED_RANGE | grep ' id ' | get_field 2)
- die_if_not_set $LINENO SUBNET_ID "Failure creating SUBNET_ID for $TENANT_ID"
+
+ if [[ "$IP_VERSION" =~ 4.* ]]; then
+ # Create IPv4 private subnet
+ SUBNET_ID=$(_neutron_create_private_subnet_v4)
+ fi
+
+ if [[ "$IP_VERSION" =~ .*6 ]]; then
+ # Create IPv6 private subnet
+ IPV6_SUBNET_ID=$(_neutron_create_private_subnet_v6)
+ fi
fi
if [[ "$Q_L3_ENABLED" == "True" ]]; then
@@ -546,7 +582,7 @@
ROUTER_ID=$(neutron router-create $Q_ROUTER_NAME | grep ' id ' | get_field 2)
die_if_not_set $LINENO ROUTER_ID "Failure creating ROUTER_ID for $Q_ROUTER_NAME"
fi
- neutron router-interface-add $ROUTER_ID $SUBNET_ID
+
# Create an external network, and a subnet. Configure the external network as router gw
if [ "$Q_USE_PROVIDERNET_FOR_PUBLIC" = "True" ]; then
EXT_NET_ID=$(neutron net-create "$PUBLIC_NETWORK_NAME" -- --router:external=True --provider:network_type=flat --provider:physical_network=${PUBLIC_PHYSICAL_NETWORK} | grep ' id ' | get_field 2)
@@ -554,35 +590,15 @@
EXT_NET_ID=$(neutron net-create "$PUBLIC_NETWORK_NAME" -- --router:external=True | grep ' id ' | get_field 2)
fi
die_if_not_set $LINENO EXT_NET_ID "Failure creating EXT_NET_ID for $PUBLIC_NETWORK_NAME"
- EXT_GW_IP=$(neutron subnet-create --ip_version 4 ${Q_FLOATING_ALLOCATION_POOL:+--allocation-pool $Q_FLOATING_ALLOCATION_POOL} --gateway $PUBLIC_NETWORK_GATEWAY --name $PUBLIC_SUBNET_NAME $EXT_NET_ID $FLOATING_RANGE -- --enable_dhcp=False | grep 'gateway_ip' | get_field 2)
- die_if_not_set $LINENO EXT_GW_IP "Failure creating EXT_GW_IP"
- neutron router-gateway-set $ROUTER_ID $EXT_NET_ID
- if is_service_enabled q-l3; then
- # logic is specific to using the l3-agent for l3
- if is_neutron_ovs_base_plugin && [[ "$Q_USE_NAMESPACE" = "True" ]]; then
- local ext_gw_interface
+ if [[ "$IP_VERSION" =~ 4.* ]]; then
+ # Configure router for IPv4 public access
+ _neutron_configure_router_v4
+ fi
- if [[ "$Q_USE_PUBLIC_VETH" = "True" ]]; then
- ext_gw_interface=$Q_PUBLIC_VETH_EX
- else
- # Disable in-band as we are going to use local port
- # to communicate with VMs
- sudo ovs-vsctl set Bridge $PUBLIC_BRIDGE \
- other_config:disable-in-band=true
- ext_gw_interface=$PUBLIC_BRIDGE
- fi
- CIDR_LEN=${FLOATING_RANGE#*/}
- sudo ip addr add $EXT_GW_IP/$CIDR_LEN dev $ext_gw_interface
- sudo ip link set $ext_gw_interface up
- ROUTER_GW_IP=`neutron port-list -c fixed_ips -c device_owner | grep router_gateway | awk -F '"' '{ print $8; }'`
- die_if_not_set $LINENO ROUTER_GW_IP "Failure retrieving ROUTER_GW_IP"
- sudo route add -net $FIXED_RANGE gw $ROUTER_GW_IP
- fi
- if [[ "$Q_USE_NAMESPACE" == "False" ]]; then
- # Explicitly set router id in l3 agent configuration
- iniset $Q_L3_CONF_FILE DEFAULT router_id $ROUTER_ID
- fi
+ if [[ "$IP_VERSION" =~ .*6 ]]; then
+ # Configure router for IPv6 public access
+ _neutron_configure_router_v6
fi
fi
}
@@ -785,7 +801,7 @@
iniset $NEUTRON_CONF database connection `database_connection_url $Q_DB_NAME`
iniset $NEUTRON_CONF DEFAULT state_path $DATA_DIR/neutron
-
+ iniset $NEUTRON_CONF DEFAULT use_syslog $SYSLOG
# If addition config files are set, make sure their path name is set as well
if [[ ${#Q_PLUGIN_EXTRA_CONF_FILES[@]} > 0 && $Q_PLUGIN_EXTRA_CONF_PATH == '' ]]; then
die $LINENO "Neutron additional plugin config not set.. exiting"
@@ -1065,6 +1081,172 @@
neutron_plugin_setup_interface_driver $1
}
+# Create private IPv4 subnet
+function _neutron_create_private_subnet_v4 {
+ local subnet_params="--tenant-id $TENANT_ID "
+ subnet_params+="--ip_version 4 "
+ subnet_params+="--gateway $NETWORK_GATEWAY "
+ subnet_params+="--name $PRIVATE_SUBNET_NAME "
+ subnet_params+="$NET_ID $FIXED_RANGE"
+ local subnet_id=$(neutron subnet-create $subnet_params | grep ' id ' | get_field 2)
+ die_if_not_set $LINENO subnet_id "Failure creating private IPv4 subnet for $TENANT_ID"
+ echo $subnet_id
+}
+
+# Create private IPv6 subnet
+function _neutron_create_private_subnet_v6 {
+ die_if_not_set $LINENO IPV6_RA_MODE "IPV6 RA Mode not set"
+ die_if_not_set $LINENO IPV6_ADDRESS_MODE "IPV6 Address Mode not set"
+ local ipv6_modes="--ipv6-ra-mode $IPV6_RA_MODE --ipv6-address-mode $IPV6_ADDRESS_MODE"
+ local subnet_params="--tenant-id $TENANT_ID "
+ subnet_params+="--ip_version 6 "
+ subnet_params+="--gateway $IPV6_PRIVATE_NETWORK_GATEWAY "
+ subnet_params+="--name $IPV6_PRIVATE_SUBNET_NAME "
+ subnet_params+="$NET_ID $FIXED_RANGE_V6 $ipv6_modes"
+ local ipv6_subnet_id=$(neutron subnet-create $subnet_params | grep ' id ' | get_field 2)
+ die_if_not_set $LINENO ipv6_subnet_id "Failure creating private IPv6 subnet for $TENANT_ID"
+ echo $ipv6_subnet_id
+}
+
+# Create public IPv4 subnet
+function _neutron_create_public_subnet_v4 {
+    local subnet_params="--ip_version 4 "
+ subnet_params+="${Q_FLOATING_ALLOCATION_POOL:+--allocation-pool $Q_FLOATING_ALLOCATION_POOL} "
+ subnet_params+="--gateway $PUBLIC_NETWORK_GATEWAY "
+ subnet_params+="--name $PUBLIC_SUBNET_NAME "
+ subnet_params+="$EXT_NET_ID $FLOATING_RANGE "
+ subnet_params+="-- --enable_dhcp=False"
+ local id_and_ext_gw_ip=$(neutron subnet-create $subnet_params | grep -e 'gateway_ip' -e ' id ')
+ die_if_not_set $LINENO id_and_ext_gw_ip "Failure creating public IPv4 subnet"
+ echo $id_and_ext_gw_ip
+}
+
+# Create public IPv6 subnet
+function _neutron_create_public_subnet_v6 {
+ local subnet_params="--ip_version 6 "
+ subnet_params+="--gateway $IPV6_PUBLIC_NETWORK_GATEWAY "
+ subnet_params+="--name $IPV6_PUBLIC_SUBNET_NAME "
+ subnet_params+="$EXT_NET_ID $IPV6_PUBLIC_RANGE "
+ subnet_params+="-- --enable_dhcp=False"
+ local ipv6_id_and_ext_gw_ip=$(neutron subnet-create $subnet_params | grep -e 'gateway_ip' -e ' id ')
+ die_if_not_set $LINENO ipv6_id_and_ext_gw_ip "Failure creating an IPv6 public subnet"
+ echo $ipv6_id_and_ext_gw_ip
+}
+
+# Configure neutron router for IPv4 public access
+function _neutron_configure_router_v4 {
+ neutron router-interface-add $ROUTER_ID $SUBNET_ID
+ # Create a public subnet on the external network
+ local id_and_ext_gw_ip=$(_neutron_create_public_subnet_v4 $EXT_NET_ID)
+ local ext_gw_ip=$(echo $id_and_ext_gw_ip | get_field 2)
+ PUB_SUBNET_ID=$(echo $id_and_ext_gw_ip | get_field 5)
+ # Configure the external network as the default router gateway
+ neutron router-gateway-set $ROUTER_ID $EXT_NET_ID
+
+ # This logic is specific to using the l3-agent for layer 3
+ if is_service_enabled q-l3; then
+ # Configure and enable public bridge
+ if is_neutron_ovs_base_plugin && [[ "$Q_USE_NAMESPACE" = "True" ]]; then
+ local ext_gw_interface=$(_neutron_get_ext_gw_interface)
+ local cidr_len=${FLOATING_RANGE#*/}
+ sudo ip addr add $ext_gw_ip/$cidr_len dev $ext_gw_interface
+ sudo ip link set $ext_gw_interface up
+ ROUTER_GW_IP=`neutron port-list -c fixed_ips -c device_owner | grep router_gateway | awk -F '"' -v subnet_id=$PUB_SUBNET_ID '$4 == subnet_id { print $8; }'`
+ die_if_not_set $LINENO ROUTER_GW_IP "Failure retrieving ROUTER_GW_IP"
+ sudo route add -net $FIXED_RANGE gw $ROUTER_GW_IP
+ fi
+ _neutron_set_router_id
+ fi
+}
+
+# Configure neutron router for IPv6 public access
+function _neutron_configure_router_v6 {
+ neutron router-interface-add $ROUTER_ID $IPV6_SUBNET_ID
+ # Create a public subnet on the external network
+ local ipv6_id_and_ext_gw_ip=$(_neutron_create_public_subnet_v6 $EXT_NET_ID)
+ local ipv6_ext_gw_ip=$(echo $ipv6_id_and_ext_gw_ip | get_field 2)
+ local ipv6_pub_subnet_id=$(echo $ipv6_id_and_ext_gw_ip | get_field 5)
+
+ # If the external network has not already been set as the default router
+ # gateway when configuring an IPv4 public subnet, do so now
+ if [[ "$IP_VERSION" == "6" ]]; then
+ neutron router-gateway-set $ROUTER_ID $EXT_NET_ID
+ fi
+
+ # This logic is specific to using the l3-agent for layer 3
+ if is_service_enabled q-l3; then
+ local ipv6_router_gw_port
+ # Ensure IPv6 forwarding is enabled on the host
+ sudo sysctl -w net.ipv6.conf.all.forwarding=1
+ # Configure and enable public bridge
+ if [[ "$IP_VERSION" = "6" ]]; then
+ # Override global IPV6_ROUTER_GW_IP with the true value from neutron
+ IPV6_ROUTER_GW_IP=`neutron port-list -c fixed_ips -c device_owner | grep router_gateway | awk -F '"' -v subnet_id=$ipv6_pub_subnet_id '$4 == subnet_id { print $8; }'`
+ die_if_not_set $LINENO IPV6_ROUTER_GW_IP "Failure retrieving IPV6_ROUTER_GW_IP"
+ ipv6_router_gw_port=`neutron port-list -c id -c fixed_ips -c device_owner | grep router_gateway | awk -F '"' -v subnet_id=$ipv6_pub_subnet_id '$4 == subnet_id { print $1; }' | awk -F ' | ' '{ print $2; }'`
+ die_if_not_set $LINENO ipv6_router_gw_port "Failure retrieving ipv6_router_gw_port"
+ else
+ ipv6_router_gw_port=`neutron port-list -c id -c fixed_ips -c device_owner | grep router_gateway | awk -F '"' -v subnet_id=$PUB_SUBNET_ID '$4 == subnet_id { print $1; }' | awk -F ' | ' '{ print $2; }'`
+ die_if_not_set $LINENO ipv6_router_gw_port "Failure retrieving ipv6_router_gw_port"
+ fi
+
+ # The ovs_base_configure_l3_agent function flushes the public
+ # bridge's ip addresses, so turn IPv6 support in the host off
+ # and then on to recover the public bridge's link local address
+ sudo sysctl -w net.ipv6.conf.${PUBLIC_BRIDGE}.disable_ipv6=1
+ sudo sysctl -w net.ipv6.conf.${PUBLIC_BRIDGE}.disable_ipv6=0
+ if is_neutron_ovs_base_plugin && [[ "$Q_USE_NAMESPACE" = "True" ]]; then
+ local ext_gw_interface=$(_neutron_get_ext_gw_interface)
+ local ipv6_cidr_len=${IPV6_PUBLIC_RANGE#*/}
+
+ # Define router_ns based on whether DVR is enabled
+ local router_ns=qrouter
+ if [[ "$Q_DVR_MODE" == "dvr_snat" ]]; then
+ router_ns=snat
+ fi
+
+ # Configure interface for public bridge
+ sudo ip -6 addr add $ipv6_ext_gw_ip/$ipv6_cidr_len dev $ext_gw_interface
+
+ # Wait until layer 3 agent has configured the gateway port on
+ # the public bridge, then add gateway address to the interface
+ # TODO (john-davidge) Remove once l3-agent supports dual-stack
+ if [[ "$IP_VERSION" == "4+6" ]]; then
+ if ! timeout $GATEWAY_TIMEOUT sh -c "until sudo ip netns exec $router_ns-$ROUTER_ID ip addr show qg-${ipv6_router_gw_port:0:11} | grep $ROUTER_GW_IP; do sleep 1; done"; then
+ die $LINENO "Timeout retrieving ROUTER_GW_IP"
+ fi
+            # Configure the gateway port with the public IPv6 address
+ sudo ip netns exec $router_ns-$ROUTER_ID ip -6 addr add $IPV6_ROUTER_GW_IP/$ipv6_cidr_len dev qg-${ipv6_router_gw_port:0:11}
+ # Add a default IPv6 route to the neutron router as the
+ # l3-agent does not add one in the dual-stack case
+ sudo ip netns exec $router_ns-$ROUTER_ID ip -6 route replace default via $ipv6_ext_gw_ip dev qg-${ipv6_router_gw_port:0:11}
+ fi
+ sudo ip -6 route add $FIXED_RANGE_V6 via $IPV6_ROUTER_GW_IP dev $ext_gw_interface
+ fi
+ _neutron_set_router_id
+ fi
+}
+
+# Explicitly set router id in l3 agent configuration
+function _neutron_set_router_id {
+ if [[ "$Q_USE_NAMESPACE" == "False" ]]; then
+ iniset $Q_L3_CONF_FILE DEFAULT router_id $ROUTER_ID
+ fi
+}
+
+# Get ext_gw_interface depending on value of Q_USE_PUBLIC_VETH
+function _neutron_get_ext_gw_interface {
+ if [[ "$Q_USE_PUBLIC_VETH" == "True" ]]; then
+ echo $Q_PUBLIC_VETH_EX
+ else
+ # Disable in-band as we are going to use local port
+ # to communicate with VMs
+ sudo ovs-vsctl set Bridge $PUBLIC_BRIDGE \
+ other_config:disable-in-band=true
+ echo $PUBLIC_BRIDGE
+ fi
+}
+
# Functions for Neutron Exercises
#--------------------------------
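
Several of the helpers above recover addresses and port ids by splitting the
``fixed_ips`` column of ``neutron port-list`` on double quotes and selecting
rows whose subnet id matches; the gateway device name is then ``qg-`` plus the
first 11 characters of the port UUID, which is why ``${ipv6_router_gw_port:0:11}``
is truncated. A rough sketch of the field splitting, using a made-up cell value::

    echo '{"subnet_id": "SUBNET-UUID", "ip_address": "fd3e:7ab2:7a9d::1"}' | \
        awk -F '"' '{ print "subnet=" $4, "address=" $8 }'
    # -> subnet=SUBNET-UUID address=fd3e:7ab2:7a9d::1
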
diff --git a/lib/neutron_plugins/ofagent_agent b/lib/neutron_plugins/ofagent_agent
index 55f3f72..90b37f1 100644
--- a/lib/neutron_plugins/ofagent_agent
+++ b/lib/neutron_plugins/ofagent_agent
@@ -18,7 +18,6 @@
# This agent uses ryu to talk with switches
install_package $(get_packages "ryu")
install_ryu
- configure_ryu
}
function neutron_plugin_configure_debug_command {
diff --git a/lib/neutron_plugins/ryu b/lib/neutron_plugins/ryu
deleted file mode 100644
index f45a797..0000000
--- a/lib/neutron_plugins/ryu
+++ /dev/null
@@ -1,79 +0,0 @@
-# Neutron Ryu plugin
-# ------------------
-
-# Save trace setting
-RYU_XTRACE=$(set +o | grep xtrace)
-set +o xtrace
-
-source $TOP_DIR/lib/neutron_plugins/ovs_base
-source $TOP_DIR/lib/neutron_thirdparty/ryu # for configuration value
-
-function neutron_plugin_create_nova_conf {
- _neutron_ovs_base_configure_nova_vif_driver
- iniset $NOVA_CONF DEFAULT libvirt_ovs_integration_bridge "$OVS_BRIDGE"
-}
-
-function neutron_plugin_install_agent_packages {
- _neutron_ovs_base_install_agent_packages
-
- # neutron_ryu_agent requires ryu module
- install_package $(get_packages "ryu")
- install_ryu
- configure_ryu
-}
-
-function neutron_plugin_configure_common {
- Q_PLUGIN_CONF_PATH=etc/neutron/plugins/ryu
- Q_PLUGIN_CONF_FILENAME=ryu.ini
- Q_PLUGIN_CLASS="neutron.plugins.ryu.ryu_neutron_plugin.RyuNeutronPluginV2"
-}
-
-function neutron_plugin_configure_debug_command {
- _neutron_ovs_base_configure_debug_command
- iniset $NEUTRON_TEST_CONFIG_FILE DEFAULT ryu_api_host $RYU_API_HOST:$RYU_API_PORT
-}
-
-function neutron_plugin_configure_dhcp_agent {
- iniset $Q_DHCP_CONF_FILE DEFAULT ryu_api_host $RYU_API_HOST:$RYU_API_PORT
-}
-
-function neutron_plugin_configure_l3_agent {
- iniset $Q_L3_CONF_FILE DEFAULT ryu_api_host $RYU_API_HOST:$RYU_API_PORT
- _neutron_ovs_base_configure_l3_agent
-}
-
-function neutron_plugin_configure_plugin_agent {
- # Set up integration bridge
- _neutron_ovs_base_setup_bridge $OVS_BRIDGE
- if [ -n "$RYU_INTERNAL_INTERFACE" ]; then
- sudo ovs-vsctl --no-wait -- --may-exist add-port $OVS_BRIDGE $RYU_INTERNAL_INTERFACE
- fi
- iniset /$Q_PLUGIN_CONF_FILE ovs integration_bridge $OVS_BRIDGE
- AGENT_BINARY="$NEUTRON_DIR/neutron/plugins/ryu/agent/ryu_neutron_agent.py"
-
- _neutron_ovs_base_configure_firewall_driver
-}
-
-function neutron_plugin_configure_service {
- iniset /$Q_PLUGIN_CONF_FILE ovs openflow_rest_api $RYU_API_HOST:$RYU_API_PORT
-
- _neutron_ovs_base_configure_firewall_driver
-}
-
-function neutron_plugin_setup_interface_driver {
- local conf_file=$1
- iniset $conf_file DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
- iniset $conf_file DEFAULT ovs_use_veth True
-}
-
-function has_neutron_plugin_security_group {
- # 0 means True here
- return 0
-}
-
-function neutron_plugin_check_adv_test_requirements {
- is_service_enabled q-agt && is_service_enabled q-dhcp && return 0
-}
-
-# Restore xtrace
-$RYU_XTRACE
diff --git a/lib/neutron_thirdparty/ryu b/lib/neutron_thirdparty/ryu
index 233f3aa..eaf088b 100644
--- a/lib/neutron_thirdparty/ryu
+++ b/lib/neutron_thirdparty/ryu
@@ -1,56 +1,15 @@
-# Ryu OpenFlow Controller
-# -----------------------
+# Ryu SDN Framework
+# -----------------
+
+# Used by ofagent.
+# TODO(yamamoto): Switch to pip_install once development has settled
# Save trace setting
RYU3_XTRACE=$(set +o | grep xtrace)
set +o xtrace
-
RYU_DIR=$DEST/ryu
-# Ryu API Host
-RYU_API_HOST=${RYU_API_HOST:-127.0.0.1}
-# Ryu API Port
-RYU_API_PORT=${RYU_API_PORT:-8080}
-# Ryu OFP Host
-RYU_OFP_HOST=${RYU_OFP_HOST:-127.0.0.1}
-# Ryu OFP Port
-RYU_OFP_PORT=${RYU_OFP_PORT:-6633}
-# Ryu Applications
-RYU_APPS=${RYU_APPS:-ryu.app.simple_isolation,ryu.app.rest}
-function configure_ryu {
- :
-}
-
-function init_ryu {
- RYU_CONF_DIR=/etc/ryu
- if [[ ! -d $RYU_CONF_DIR ]]; then
- sudo mkdir -p $RYU_CONF_DIR
- fi
- sudo chown $STACK_USER $RYU_CONF_DIR
- RYU_CONF=$RYU_CONF_DIR/ryu.conf
- sudo rm -rf $RYU_CONF
-
- # Ryu configuration
- RYU_CONF_CONTENTS=${RYU_CONF_CONTENTS:-"[DEFAULT]
-app_lists=$RYU_APPS
-wsapi_host=$RYU_API_HOST
-wsapi_port=$RYU_API_PORT
-ofp_listen_host=$RYU_OFP_HOST
-ofp_tcp_listen_port=$RYU_OFP_PORT
-neutron_url=http://$Q_HOST:$Q_PORT
-neutron_admin_username=$Q_ADMIN_USERNAME
-neutron_admin_password=$SERVICE_PASSWORD
-neutron_admin_tenant_name=$SERVICE_TENANT_NAME
-neutron_admin_auth_url=$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_AUTH_PORT/v2.0
-neutron_auth_strategy=$Q_AUTH_STRATEGY
-neutron_controller_addr=tcp:$RYU_OFP_HOST:$RYU_OFP_PORT
-"}
- echo "${RYU_CONF_CONTENTS}" > $RYU_CONF
-}
-
-# install_ryu can be called multiple times as neutron_pluing/ryu may call
-# this function for neutron-ryu-agent
# Make this function idempotent and avoid cloning same repo many times
# with RECLONE=yes
_RYU_INSTALLED=${_RYU_INSTALLED:-False}
@@ -63,17 +22,5 @@
fi
}
-function start_ryu {
- run_process ryu "$RYU_DIR/bin/ryu-manager --config-file $RYU_CONF"
-}
-
-function stop_ryu {
- :
-}
-
-function check_ryu {
- :
-}
-
# Restore xtrace
$RYU3_XTRACE
diff --git a/lib/nova_plugins/hypervisor-libvirt b/lib/nova_plugins/hypervisor-libvirt
index 259bf15..53dbfb9 100644
--- a/lib/nova_plugins/hypervisor-libvirt
+++ b/lib/nova_plugins/hypervisor-libvirt
@@ -42,6 +42,7 @@
iniset $NOVA_CONF libvirt virt_type "$LIBVIRT_TYPE"
iniset $NOVA_CONF libvirt cpu_mode "none"
iniset $NOVA_CONF libvirt use_usb_tablet "False"
+ iniset $NOVA_CONF libvirt live_migration_uri "qemu+ssh://$STACK_USER@%s/system"
iniset $NOVA_CONF DEFAULT default_ephemeral_format "ext4"
iniset $NOVA_CONF DEFAULT compute_driver "libvirt.LibvirtDriver"
LIBVIRT_FIREWALL_DRIVER=${LIBVIRT_FIREWALL_DRIVER:-"nova.virt.libvirt.firewall.IptablesFirewallDriver"}
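With ``live_migration_uri`` set as above, nova-compute reaches the peer host's
libvirt over ssh as the DevStack user, so key-based ssh between compute hosts
is a prerequisite for live migration. One way to verify the transport from a
node (host names below are placeholders)::

    # as the stack user on compute1
    ssh-copy-id stack@compute2                         # allow password-less login
    virsh -c qemu+ssh://stack@compute2/system list     # should connect without prompting
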
diff --git a/lib/swift b/lib/swift
index ae0874e..9e61331 100644
--- a/lib/swift
+++ b/lib/swift
@@ -54,8 +54,7 @@
# Set ``SWIFT_CONF_DIR`` to the location of the configuration files.
# Default is ``/etc/swift``.
-# TODO(dtroyer): remove SWIFT_CONFIG_DIR after cutting stable/grizzly
-SWIFT_CONF_DIR=${SWIFT_CONF_DIR:-${SWIFT_CONFIG_DIR:-/etc/swift}}
+SWIFT_CONF_DIR=${SWIFT_CONF_DIR:-/etc/swift}
if is_service_enabled s-proxy && is_service_enabled swift3; then
# If we are using swift3, we can default the s3 port to swift instead
diff --git a/lib/tempest b/lib/tempest
index 8931300..6bcfaec 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -247,6 +247,7 @@
fi
fi
+    iniset $TEMPEST_CONFIG DEFAULT use_syslog $SYSLOG
# Oslo
iniset $TEMPEST_CONFIG DEFAULT lock_path $TEMPEST_STATE_PATH
mkdir -p $TEMPEST_STATE_PATH
@@ -449,7 +450,7 @@
function install_tempest_lib {
if use_library_from_git "tempest-lib"; then
git_clone_by_name "tempest-lib"
- setup_develop "tempest-lib"
+ setup_dev_lib "tempest-lib"
fi
}
diff --git a/stack.sh b/stack.sh
index 93e4541..24bda01 100755
--- a/stack.sh
+++ b/stack.sh
@@ -575,9 +575,6 @@
done
fi
-# Set the destination directories for other OpenStack projects
-GITDIR["python-openstackclient"]=$DEST/python-openstackclient
-
# Interactive Configuration
# -------------------------
diff --git a/stackrc b/stackrc
index e0e886d..5c5acb1 100644
--- a/stackrc
+++ b/stackrc
@@ -273,6 +273,8 @@
# consolidated openstack python client
GITREPO["python-openstackclient"]=${OPENSTACKCLIENT_REPO:-${GIT_BASE}/openstack/python-openstackclient.git}
GITBRANCH["python-openstackclient"]=${OPENSTACKCLIENT_BRANCH:-master}
+# this doesn't exist in a lib file, so set it here
+GITDIR["python-openstackclient"]=$DEST/python-openstackclient
###################
#
diff --git a/tests/test_libs_from_pypi.sh b/tests/test_libs_from_pypi.sh
new file mode 100755
index 0000000..1b576d8
--- /dev/null
+++ b/tests/test_libs_from_pypi.sh
@@ -0,0 +1,96 @@
+#!/usr/bin/env bash
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+
+TOP=$(cd $(dirname "$0")/.. && pwd)
+
+export TOP_DIR=$TOP
+
+# Import common functions
+source $TOP/functions
+source $TOP/stackrc
+source $TOP/lib/tls
+for i in $TOP/lib/*; do
+ if [[ -f $i ]]; then
+ source $i
+ fi
+done
+
+ALL_LIBS="python-novaclient oslo.config pbr oslo.context python-troveclient python-keystoneclient taskflow oslo.middleware pycadf python-glanceclient python-ironicclient tempest-lib oslo.messaging oslo.log cliff python-heatclient stevedore python-cinderclient glance_store oslo.concurrency oslo.db oslo.vmware keystonemiddleware oslo.serialization python-saharaclient django_openstack_auth python-openstackclient oslo.rootwrap oslo.i18n python-ceilometerclient oslo.utils python-swiftclient python-neutronclient"
+
+# Generate the above list with
+# echo ${!GITREPO[@]}
+
+function check_exists {
+ local thing=$1
+ local hash=$2
+ local key=$3
+ if [[ ! -z "$VERBOSE" ]]; then
+ echo "Checking for $hash[$key]"
+ fi
+ if [[ -z $thing ]]; then
+        echo "$hash[$key] does not exist!"
+ exit 1
+ else
+ if [[ ! -z "$VERBOSE" ]]; then
+ echo "$hash[$key] => $thing"
+ fi
+ fi
+}
+
+function test_all_libs_upto_date {
+    # ${!GITREPO[@]} expands to every library name (key) defined in GITREPO
+ local found_libs=${!GITREPO[@]}
+ declare -A all_libs
+ for lib in $ALL_LIBS; do
+ all_libs[$lib]=1
+ done
+
+ for lib in $found_libs; do
+ if [[ -z ${all_libs[$lib]} ]]; then
+ echo "Library '$lib' not listed in unit tests, please add to ALL_LIBS"
+ exit 1
+ fi
+
+ done
+ echo "test_all_libs_upto_date PASSED"
+}
+
+function test_libs_exist {
+ local lib=""
+ for lib in $ALL_LIBS; do
+ check_exists "${GITREPO[$lib]}" "GITREPO" "$lib"
+ check_exists "${GITBRANCH[$lib]}" "GITBRANCH" "$lib"
+ check_exists "${GITDIR[$lib]}" "GITDIR" "$lib"
+ done
+
+ echo "test_libs_exist PASSED"
+}
+
+function test_branch_master {
+ for lib in $ALL_LIBS; do
+ if [[ ${GITBRANCH[$lib]} != "master" ]]; then
+ echo "GITBRANCH for $lib not master (${GITBRANCH[$lib]})"
+ exit 1
+ fi
+ done
+
+ echo "test_branch_master PASSED"
+}
+
+set -o errexit
+
+test_libs_exist
+test_branch_master
+test_all_libs_upto_date
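
The new test is self-contained and can be run directly from a DevStack
checkout; setting ``VERBOSE`` (checked in ``check_exists``) prints each lookup
as it is made::

    ./tests/test_libs_from_pypi.sh
    VERBOSE=1 ./tests/test_libs_from_pypi.sh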