Merge "Add Nova v2.1 API endpoint"
diff --git a/docs/source/configuration.html b/docs/source/configuration.html
index fbcead7..044bafc 100644
--- a/docs/source/configuration.html
+++ b/docs/source/configuration.html
@@ -146,7 +146,7 @@
<dt>One syslog to bind them all</dt>
<dd><em>Default: <code>SYSLOG=False SYSLOG_HOST=$HOST_IP SYSLOG_PORT=516</code></em><br />
- Logging all services to a single syslog can be convenient. Enable syslogging by seting <code>SYSLOG</code> to <code>True</code>. If the destination log host is not localhost <code>SYSLOG_HOST</code> and <code>SYSLOG_PORT</code> can be used to direct the message stream to the log host.
+ Logging all services to a single syslog can be convenient. Enable syslogging by setting <code>SYSLOG</code> to <code>True</code>. If the destination log host is not localhost, <code>SYSLOG_HOST</code> and <code>SYSLOG_PORT</code> can be used to direct the message stream to the log host.
<pre>SYSLOG=True
SYSLOG_HOST=$HOST_IP
SYSLOG_PORT=516</pre></dd>
diff --git a/docs/source/guides/multinode-lab.html b/docs/source/guides/multinode-lab.html
index a286954..db8be08 100644
--- a/docs/source/guides/multinode-lab.html
+++ b/docs/source/guides/multinode-lab.html
@@ -114,11 +114,11 @@
<p>OpenStack runs as a non-root user that has sudo access to root. There is nothing special
about the name, we'll use <code>stack</code> here. Every node must use the same name and
preferably uid. If you created a user during the OS install you can use it and give it
- sudo priviledges below. Otherwise create the stack user:</p>
+ sudo privileges below. Otherwise create the stack user:</p>
<pre>groupadd stack
useradd -g stack -s /bin/bash -d /opt/stack -m stack</pre>
<p>This user will be making many changes to your system during installation and operation
- so it needs to have sudo priviledges to root without a password:</p>
+ so it needs to have sudo privileges to root without a password:</p>
<pre>echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers</pre>
<p>From here on use the <code>stack</code> user. <b>Logout</b> and <b>login</b> as the
<code>stack</code> user.</p>
@@ -221,7 +221,7 @@
</div>
<h3>Additional Users</h3>
- <p>DevStack creates two OpenStack users (<code>admin</code> and <code>demo</code>) and two tenants (also <code>admin</code> and <code>demo</code>). <code>admin</code> is exactly what it sounds like, a priveleged administrative account that is a member of both the <code>admin</code> and <code>demo</code> tenants. <code>demo</code> is a normal user account that is only a member of the <code>demo</code> tenant. Creating additional OpenStack users can be done through the dashboard, sometimes it is easier to do them in bulk from a script, especially since they get blown away every time
+ <p>DevStack creates two OpenStack users (<code>admin</code> and <code>demo</code>) and two tenants (also <code>admin</code> and <code>demo</code>). <code>admin</code> is exactly what it sounds like, a privileged administrative account that is a member of both the <code>admin</code> and <code>demo</code> tenants. <code>demo</code> is a normal user account that is only a member of the <code>demo</code> tenant. Additional OpenStack users can be created through the dashboard, but it is often easier to create them in bulk from a script, especially since they get blown away every time
<code>stack.sh</code> runs. The following steps are ripe for scripting:</p>
<pre># Get admin creds
. openrc admin admin
@@ -275,7 +275,7 @@
vgcreate stack-volumes /dev/sdc</pre>
<h3>Syslog</h3>
- <p>DevStack is capable of using <code>rsyslog</code> to agregate logging across the cluster.
+ <p>DevStack is capable of using <code>rsyslog</code> to aggregate logging across the cluster.
It is off by default; to turn it on set <code>SYSLOG=True</code> in <code>local.conf</code>.
<code>SYSLOG_HOST</code> defaults to <code>HOST_IP</code>; on the compute nodes it
must be set to the IP of the cluster controller to send syslog output there. In the example
diff --git a/docs/source/guides/single-machine.html b/docs/source/guides/single-machine.html
index ca9cafa..9471972 100644
--- a/docs/source/guides/single-machine.html
+++ b/docs/source/guides/single-machine.html
@@ -69,9 +69,9 @@
</div>
<h3>Add your user</h3>
- <p>We need to add a user to install DevStack. (if you created a user during install you can skip this step and just give the user sudo priviledges below)</p>
+ <p>We need to add a user to install DevStack. (If you created a user during install, you can skip this step and just give the user sudo privileges below.)</p>
<pre>adduser stack</pre>
- <p>Since this user will be making many changes to your system, it will need to have sudo priviledges:</p>
+ <p>Since this user will be making many changes to your system, it will need to have sudo privileges:</p>
<pre>apt-get install sudo -y || yum install -y sudo
echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers</pre>
<p>From here on you should use the user you created. <b>Logout</b> and <b>login</b> as that user.</p>
@@ -113,7 +113,7 @@
<h3>Using OpenStack</h3>
<p>At this point you should be able to access the dashboard from other computers on the
local network. In this example that would be http://192.168.1.201/ for the dashboard (aka Horizon).
- Launch VMs and if you give them floating IPs and security group access those VMs will be accessable from other machines on your network.</p>
+ Launch VMs, and if you give them floating IPs and security group access, those VMs will be accessible from other machines on your network.</p>
<p>Some examples of using the OpenStack command-line clients <code>nova</code> and <code>glance</code>
are in the shakedown scripts in <code>devstack/exercises</code>. <code>exercise.sh</code>
diff --git a/docs/source/openrc.html b/docs/source/openrc.html
index b84d268..da6697f 100644
--- a/docs/source/openrc.html
+++ b/docs/source/openrc.html
@@ -60,7 +60,7 @@
<dd>The introduction of Keystone to the OpenStack ecosystem has standardized the
term <em>tenant</em> as the entity that owns resources. In some places references
still exist to the original Nova term <em>project</em> for this use. Also,
- <em>tenant_name</em> is prefered to <em>tenant_id</em>.
+ <em>tenant_name</em> is preferred to <em>tenant_id</em>.
<pre>OS_TENANT_NAME=demo</pre></dd>
<dt>OS_USERNAME</dt>
diff --git a/files/apache-ceilometer.template b/files/apache-ceilometer.template
new file mode 100644
index 0000000..1c57b32
--- /dev/null
+++ b/files/apache-ceilometer.template
@@ -0,0 +1,15 @@
+Listen %PORT%
+
+<VirtualHost *:%PORT%>
+ WSGIDaemonProcess ceilometer-api processes=2 threads=10 user=%USER% display-name=%{GROUP}
+ WSGIProcessGroup ceilometer-api
+ WSGIScriptAlias / %WSGIAPP%
+ WSGIApplicationGroup %{GLOBAL}
+ <IfVersion >= 2.4>
+ ErrorLogFormat "%{cu}t %M"
+ </IfVersion>
+ ErrorLog /var/log/%APACHE_NAME%/ceilometer.log
+ CustomLog /var/log/%APACHE_NAME%/ceilometer_access.log combined
+</VirtualHost>
+
+WSGISocketPrefix /var/run/%APACHE_NAME%
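The new template above is rendered by lib/ceilometer later in this change with a sed pass over the `%...%` placeholders. A minimal standalone sketch of that substitution — the values (port 8777, `apache2`, the app path) are illustrative, not guaranteed DevStack defaults:

```shell
# Sketch: render an apache-ceilometer.template-style file by substituting
# the %PLACEHOLDER% tokens, mirroring the sed call in lib/ceilometer.
template=apache-ceilometer.template
rendered=ceilometer.conf

cat > "$template" <<'EOF'
Listen %PORT%
<VirtualHost *:%PORT%>
    WSGIScriptAlias / %WSGIAPP%
    ErrorLog /var/log/%APACHE_NAME%/ceilometer.log
</VirtualHost>
EOF

sed -e "
    s|%PORT%|8777|g;
    s|%APACHE_NAME%|apache2|g;
    s|%WSGIAPP%|/var/www/ceilometer/app|g
" "$template" > "$rendered"

cat "$rendered"
```

Using `|` as the sed delimiter keeps the substitutions readable when the replacement values contain slashes, which file paths like `%WSGIAPP%` always do.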
diff --git a/files/apache-horizon.template b/files/apache-horizon.template
index c1dd693..bca1251 100644
--- a/files/apache-horizon.template
+++ b/files/apache-horizon.template
@@ -17,10 +17,16 @@
<Directory %HORIZON_DIR%/>
Options Indexes FollowSymLinks MultiViews
- %HORIZON_REQUIRE%
AllowOverride None
- Order allow,deny
- allow from all
+ # Apache 2.4 uses mod_authz_host for access control now (instead of
+ # "Allow")
+ <IfVersion < 2.4>
+ Order allow,deny
+ Allow from all
+ </IfVersion>
+ <IfVersion >= 2.4>
+ Require all granted
+ </IfVersion>
</Directory>
ErrorLog /var/log/%APACHE_NAME%/horizon_error.log
diff --git a/files/apache-keystone.template b/files/apache-keystone.template
index 1bdb84c..88492d3 100644
--- a/files/apache-keystone.template
+++ b/files/apache-keystone.template
@@ -6,9 +6,14 @@
WSGIProcessGroup keystone-public
WSGIScriptAlias / %PUBLICWSGI%
WSGIApplicationGroup %{GLOBAL}
- %ERRORLOGFORMAT%
+ <IfVersion >= 2.4>
+ ErrorLogFormat "%{cu}t %M"
+ </IfVersion>
ErrorLog /var/log/%APACHE_NAME%/keystone.log
CustomLog /var/log/%APACHE_NAME%/keystone_access.log combined
+ %SSLENGINE%
+ %SSLCERTFILE%
+ %SSLKEYFILE%
</VirtualHost>
<VirtualHost *:%ADMINPORT%>
@@ -16,9 +21,14 @@
WSGIProcessGroup keystone-admin
WSGIScriptAlias / %ADMINWSGI%
WSGIApplicationGroup %{GLOBAL}
- %ERRORLOGFORMAT%
+ <IfVersion >= 2.4>
+ ErrorLogFormat "%{cu}t %M"
+ </IfVersion>
ErrorLog /var/log/%APACHE_NAME%/keystone.log
CustomLog /var/log/%APACHE_NAME%/keystone_access.log combined
+ %SSLENGINE%
+ %SSLCERTFILE%
+ %SSLKEYFILE%
</VirtualHost>
# Workaround for missing path on RHEL6, see
diff --git a/files/apts/horizon b/files/apts/horizon
index 8969046..03df3cb 100644
--- a/files/apts/horizon
+++ b/files/apts/horizon
@@ -9,13 +9,11 @@
python-xattr
python-sqlalchemy
python-webob
-python-kombu
pylint
python-eventlet
python-nose
python-sphinx
python-mox
-python-kombu
python-coverage
python-cherrypy3 # why?
python-migrate
diff --git a/files/apts/ironic b/files/apts/ironic
index 283d1b2..45fdecc 100644
--- a/files/apts/ironic
+++ b/files/apts/ironic
@@ -9,6 +9,9 @@
openvswitch-datapath-dkms
python-libguestfs
python-libvirt
+qemu
+qemu-kvm
+qemu-utils
syslinux
tftpd-hpa
xinetd
diff --git a/files/apts/neutron b/files/apts/neutron
index 381c758..a48a800 100644
--- a/files/apts/neutron
+++ b/files/apts/neutron
@@ -11,7 +11,6 @@
python-suds
python-pastedeploy
python-greenlet
-python-kombu
python-eventlet
python-sqlalchemy
python-mysqldb
diff --git a/files/apts/nova b/files/apts/nova
index b1b969a..66f29c4 100644
--- a/files/apts/nova
+++ b/files/apts/nova
@@ -18,6 +18,7 @@
qemu-kvm # NOPRIME
qemu # dist:wheezy,jessie NOPRIME
libvirt-bin # NOPRIME
+libvirt-dev # NOPRIME
pm-utils
libjs-jquery-tablesorter # Needed for coverage html reports
vlan
@@ -42,7 +43,6 @@
python-suds
python-lockfile
python-m2crypto
-python-kombu
python-feedparser
python-iso8601
python-qpid # NOPRIME
diff --git a/files/apts/zaqar-server b/files/apts/zaqar-server
index bc7ef22..32b1017 100644
--- a/files/apts/zaqar-server
+++ b/files/apts/zaqar-server
@@ -1,3 +1,5 @@
python-pymongo
mongodb-server
pkg-config
+redis-server # NOPRIME
+python-redis # NOPRIME
\ No newline at end of file
diff --git a/files/rpms-suse/horizon b/files/rpms-suse/horizon
index d3bde26..fa7e439 100644
--- a/files/rpms-suse/horizon
+++ b/files/rpms-suse/horizon
@@ -12,7 +12,6 @@
python-coverage
python-dateutil
python-eventlet
-python-kombu
python-mox
python-nose
python-pylint
diff --git a/files/rpms-suse/keystone b/files/rpms-suse/keystone
index a734cb9..4c37ade 100644
--- a/files/rpms-suse/keystone
+++ b/files/rpms-suse/keystone
@@ -10,6 +10,6 @@
python-greenlet
python-lxml
python-mysql
-python-mysql.connector
+python-mysql-connector-python
python-pysqlite
sqlite3
diff --git a/files/rpms-suse/neutron b/files/rpms-suse/neutron
index 8ad69b0..8431bd1 100644
--- a/files/rpms-suse/neutron
+++ b/files/rpms-suse/neutron
@@ -7,9 +7,8 @@
python-eventlet
python-greenlet
python-iso8601
-python-kombu
python-mysql
-python-mysql.connector
+python-mysql-connector-python
python-Paste
python-PasteDeploy
python-pyudev
diff --git a/files/rpms-suse/nova b/files/rpms-suse/nova
index 73c0604..b1c4f6a 100644
--- a/files/rpms-suse/nova
+++ b/files/rpms-suse/nova
@@ -28,13 +28,12 @@
python-feedparser
python-greenlet
python-iso8601
-python-kombu
python-libxml2
python-lockfile
python-lxml # needed for glance which is needed for nova --- this shouldn't be here
python-mox
python-mysql
-python-mysql.connector
+python-mysql-connector-python
python-numpy # needed by websockify for spice console
python-paramiko
python-sqlalchemy-migrate
diff --git a/files/rpms/horizon b/files/rpms/horizon
index 8ecb030..fe3a2f4 100644
--- a/files/rpms/horizon
+++ b/files/rpms/horizon
@@ -9,7 +9,6 @@
python-eventlet
python-greenlet
python-httplib2
-python-kombu
python-migrate
python-mox
python-nose
diff --git a/files/rpms/keystone b/files/rpms/keystone
index e1873b7..ce41ee5 100644
--- a/files/rpms/keystone
+++ b/files/rpms/keystone
@@ -9,5 +9,6 @@
python-sqlalchemy
python-webob
sqlite
+mod_ssl
# Deps installed via pip for RHEL
diff --git a/files/rpms/neutron b/files/rpms/neutron
index 7020d33..2c9dd3d 100644
--- a/files/rpms/neutron
+++ b/files/rpms/neutron
@@ -11,7 +11,6 @@
python-eventlet
python-greenlet
python-iso8601
-python-kombu
#rhel6 gets via pip
python-paste # dist:f19,f20,rhel7
python-paste-deploy # dist:f19,f20,rhel7
diff --git a/files/rpms/nova b/files/rpms/nova
index 695d814..f3261c6 100644
--- a/files/rpms/nova
+++ b/files/rpms/nova
@@ -11,6 +11,7 @@
kpartx
kvm # NOPRIME
libvirt-bin # NOPRIME
+libvirt-devel # NOPRIME
libvirt-python # NOPRIME
libxml2-python
numpy # needed by websockify for spice console
@@ -25,7 +26,6 @@
python-feedparser
python-greenlet
python-iso8601
-python-kombu
python-lockfile
python-migrate
python-mox
diff --git a/files/rpms/zaqar-server b/files/rpms/zaqar-server
index d7b7ea8..69e8bfa 100644
--- a/files/rpms/zaqar-server
+++ b/files/rpms/zaqar-server
@@ -1,3 +1,5 @@
selinux-policy-targeted
mongodb-server
pymongo
+redis # NOPRIME
+python-redis # NOPRIME
diff --git a/functions b/functions
index 76f7047..bbde27d 100644
--- a/functions
+++ b/functions
@@ -21,29 +21,6 @@
declare -f -F $1 > /dev/null
}
-# Checks if installed Apache is <= given version
-# $1 = x.y.z (version string of Apache)
-function check_apache_version {
- local cmd="apachectl"
- if ! [[ -x $(which apachectl 2>/dev/null) ]]; then
- cmd="/usr/sbin/apachectl"
- fi
-
- local version=$($cmd -v | grep version | grep -Po 'Apache/\K[^ ]*')
- expr "$version" '>=' $1 > /dev/null
-}
-
-
-# Cleanup anything from /tmp on unstack
-# clean_tmp
-function cleanup_tmp {
- local tmp_dir=${TMPDIR:-/tmp}
-
- # see comments in pip_install
- sudo rm -rf ${tmp_dir}/pip-build.*
-}
-
-
# Retrieve an image from a URL and upload into Glance.
# Uses the following variables:
#
@@ -85,7 +62,7 @@
# OpenVZ-format images are provided as .tar.gz, but not decompressed prior to loading
if [[ "$image_url" =~ 'openvz' ]]; then
image_name="${image_fname%.tar.gz}"
- openstack --os-token $token --os-url http://$GLANCE_HOSTPORT image create "$image_name" --public --container-format ami --disk-format ami < "${image}"
+ openstack --os-token $token --os-url $GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT image create "$image_name" --public --container-format ami --disk-format ami < "${image}"
return
fi
@@ -196,7 +173,7 @@
vmdk_adapter_type="${props[1]:-$vmdk_adapter_type}"
vmdk_net_adapter="${props[2]:-$vmdk_net_adapter}"
- openstack --os-token $token --os-url http://$GLANCE_HOSTPORT image create "$image_name" --public --container-format bare --disk-format vmdk --property vmware_disktype="$vmdk_disktype" --property vmware_adaptertype="$vmdk_adapter_type" --property hw_vif_model="$vmdk_net_adapter" < "${image}"
+ openstack --os-token $token --os-url $GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT image create "$image_name" --public --container-format bare --disk-format vmdk --property vmware_disktype="$vmdk_disktype" --property vmware_adaptertype="$vmdk_adapter_type" --property hw_vif_model="$vmdk_net_adapter" < "${image}"
return
fi
@@ -214,7 +191,7 @@
fi
openstack \
--os-token $token \
- --os-url http://$GLANCE_HOSTPORT \
+ --os-url $GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT \
image create \
"$image_name" --public \
--container-format=ovf --disk-format=vhd \
@@ -229,7 +206,7 @@
image_name="${image_fname%.xen-raw.tgz}"
openstack \
--os-token $token \
- --os-url http://$GLANCE_HOSTPORT \
+ --os-url $GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT \
image create \
"$image_name" --public \
--container-format=tgz --disk-format=raw \
@@ -307,9 +284,9 @@
if [ "$container_format" = "bare" ]; then
if [ "$unpack" = "zcat" ]; then
- openstack --os-token $token --os-url http://$GLANCE_HOSTPORT image create "$image_name" $img_property --public --container-format=$container_format --disk-format $disk_format < <(zcat --force "${image}")
+ openstack --os-token $token --os-url $GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT image create "$image_name" $img_property --public --container-format=$container_format --disk-format $disk_format < <(zcat --force "${image}")
else
- openstack --os-token $token --os-url http://$GLANCE_HOSTPORT image create "$image_name" $img_property --public --container-format=$container_format --disk-format $disk_format < "${image}"
+ openstack --os-token $token --os-url $GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT image create "$image_name" $img_property --public --container-format=$container_format --disk-format $disk_format < "${image}"
fi
else
# Use glance client to add the kernel the root filesystem.
@@ -317,12 +294,12 @@
# kernel for use when uploading the root filesystem.
local kernel_id="" ramdisk_id="";
if [ -n "$kernel" ]; then
- kernel_id=$(openstack --os-token $token --os-url http://$GLANCE_HOSTPORT image create "$image_name-kernel" $img_property --public --container-format aki --disk-format aki < "$kernel" | grep ' id ' | get_field 2)
+ kernel_id=$(openstack --os-token $token --os-url $GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT image create "$image_name-kernel" $img_property --public --container-format aki --disk-format aki < "$kernel" | grep ' id ' | get_field 2)
fi
if [ -n "$ramdisk" ]; then
- ramdisk_id=$(openstack --os-token $token --os-url http://$GLANCE_HOSTPORT image create "$image_name-ramdisk" $img_property --public --container-format ari --disk-format ari < "$ramdisk" | grep ' id ' | get_field 2)
+ ramdisk_id=$(openstack --os-token $token --os-url $GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT image create "$image_name-ramdisk" $img_property --public --container-format ari --disk-format ari < "$ramdisk" | grep ' id ' | get_field 2)
fi
- openstack --os-token $token --os-url http://$GLANCE_HOSTPORT image create "${image_name%.img}" $img_property --public --container-format ami --disk-format ami ${kernel_id:+--property kernel_id=$kernel_id} ${ramdisk_id:+--property ramdisk_id=$ramdisk_id} < "${image}"
+ openstack --os-token $token --os-url $GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT image create "${image_name%.img}" $img_property --public --container-format ami --disk-format ami ${kernel_id:+--property kernel_id=$kernel_id} ${ramdisk_id:+--property ramdisk_id=$ramdisk_id} < "${image}"
fi
}
@@ -351,7 +328,7 @@
function wait_for_service {
local timeout=$1
local url=$2
- timeout $timeout sh -c "while ! curl --noproxy '*' -s $url >/dev/null; do sleep 1; done"
+ timeout $timeout sh -c "while ! curl -k --noproxy '*' -s $url >/dev/null; do sleep 1; done"
}
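The last hunk above adds `-k` to `wait_for_service` so the readiness poll also succeeds against HTTPS endpoints using DevStack's self-signed certificates. A self-contained sketch of the pattern; the probe URL and the 2-second timeout are illustrative, and the unreachable port is chosen only to demonstrate the failure path:

```shell
# Poll a URL with curl until it answers or the timeout expires.
# -k tolerates self-signed certificates on SSL-enabled services;
# --noproxy '*' keeps the probe from being swallowed by an HTTP proxy.
function wait_for_service {
    local timeout=$1
    local url=$2
    timeout $timeout sh -c "while ! curl -k --noproxy '*' -s $url >/dev/null; do sleep 1; done"
}

# Hypothetical example: nothing listens on this port, so the call
# returns non-zero once the timeout expires.
wait_for_service 2 http://127.0.0.1:65500/ || echo "service did not come up"
```

The `timeout`-wrapped shell loop means callers get a single exit status to test, exactly as `start_cinder` and `start_glance` do with `die` on failure.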
diff --git a/functions-common b/functions-common
index 6b1f473..c597651 100644
--- a/functions-common
+++ b/functions-common
@@ -36,6 +36,11 @@
XTRACE=$(set +o | grep xtrace)
set +o xtrace
+# Global Config Variables
+declare -A GITREPO
+declare -A GITBRANCH
+declare -A GITDIR
+
# Config Functions
# ================
@@ -598,6 +603,18 @@
cd $orig_dir
}
+# A variation on git clone that lets us specify a project by its
+# actual name, like oslo.config. This is exceptionally useful in the
+# library installation case.
+function git_clone_by_name {
+ local name=$1
+ local repo=${GITREPO[$name]}
+ local dir=${GITDIR[$name]}
+ local branch=${GITBRANCH[$name]}
+ git_clone $repo $dir $branch
+}
+
+
# git can sometimes get itself infinitely stuck with transient network
# errors or other issues with the remote end. This wraps git in a
# timeout/retry loop and is intended to watch over non-local git
@@ -1509,23 +1526,13 @@
pip_mirror_opt="--use-mirrors"
fi
- # pip < 1.4 has a bug where it will use an already existing build
- # directory unconditionally. Say an earlier component installs
- # foo v1.1; pip will have built foo's source in
- # /tmp/$USER-pip-build. Even if a later component specifies foo <
- # 1.1, the existing extracted build will be used and cause
- # confusing errors. By creating unique build directories we avoid
- # this problem. See https://github.com/pypa/pip/issues/709
- local pip_build_tmp=$(mktemp --tmpdir -d pip-build.XXXXX)
-
$xtrace
$sudo_pip PIP_DOWNLOAD_CACHE=${PIP_DOWNLOAD_CACHE:-/var/cache/pip} \
http_proxy=$http_proxy \
https_proxy=$https_proxy \
no_proxy=$no_proxy \
- $cmd_pip install --build=${pip_build_tmp} \
- $pip_mirror_opt $@ \
- && $sudo_pip rm -rf ${pip_build_tmp}
+ $cmd_pip install \
+ $pip_mirror_opt $@
if [[ "$INSTALL_TESTONLY_PACKAGES" == "True" ]]; then
local test_req="$@/test-requirements.txt"
@@ -1534,13 +1541,32 @@
http_proxy=$http_proxy \
https_proxy=$https_proxy \
no_proxy=$no_proxy \
- $cmd_pip install --build=${pip_build_tmp} \
- $pip_mirror_opt -r $test_req \
- && $sudo_pip rm -rf ${pip_build_tmp}
+ $cmd_pip install \
+ $pip_mirror_opt -r $test_req
fi
fi
}
+# Should we use this library from its git repo, or should we let it
+# be pulled in via pip dependencies?
+function use_library_from_git {
+ local name=$1
+ local enabled=1
+ [[ ,${LIBS_FROM_GIT}, =~ ,${name}, ]] && enabled=0
+ return $enabled
+}
+
+# Set up a library by name. If we are trying to use the library from
+# git, we'll do a git-based install; otherwise we'll punt and the
+# library should be installed by a requirements pull from another
+# project.
+function setup_lib {
+ local name=$1
+ local dir=${GITDIR[$name]}
+ setup_install $dir
+}
+
+
# this should be used if you want to install globally, all libraries should
# use this, especially *oslo* ones
function setup_install {
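The functions-common changes above hang per-library metadata off bash associative arrays and gate git-based installs on a comma-wrapped regex match against `LIBS_FROM_GIT`. A runnable sketch of both pieces — `oslo.config` is only an illustrative entry, and `git_clone` is stubbed to record what it would do:

```shell
# Per-library metadata keyed by project name, as the new GITREPO/
# GITBRANCH/GITDIR globals allow.
declare -A GITREPO GITBRANCH GITDIR

GITREPO["oslo.config"]="git://git.openstack.org/openstack/oslo.config"
GITDIR["oslo.config"]="/opt/stack/oslo.config"
GITBRANCH["oslo.config"]="master"

function git_clone {
    # Stand-in for DevStack's git_clone; just echo what would be cloned.
    echo "clone $1 -> $2 ($3)"
}

function git_clone_by_name {
    local name=$1
    git_clone "${GITREPO[$name]}" "${GITDIR[$name]}" "${GITBRANCH[$name]}"
}

function use_library_from_git {
    # Wrapping both sides in commas anchors the match to whole list
    # entries, so "oslo.config" does not match inside
    # "oslo.config.extra".
    local name=$1
    [[ ,${LIBS_FROM_GIT}, =~ ,${name}, ]]
}

LIBS_FROM_GIT="oslo.config,oslo.messaging"
use_library_from_git "oslo.config" && git_clone_by_name "oslo.config"
use_library_from_git "oslo.db" || echo "oslo.db comes from pip"
```

The comma trick is cheaper than splitting the list, at the cost that entry names are treated as regexes (`.` matches any character), which is harmless for the library names involved here.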
diff --git a/lib/apache b/lib/apache
index 6d22290..2c43681 100644
--- a/lib/apache
+++ b/lib/apache
@@ -59,6 +59,11 @@
else
exit_distro_not_supported "apache installation"
fi
+
+ # Ensure mod_version is enabled for <IfVersion ...>. It is
+ # built in statically on anything recent, but precise (2.2)
+ # doesn't have it enabled by default
+ sudo a2enmod version || true
}
# get_apache_version() - return the version of Apache installed
diff --git a/lib/ceilometer b/lib/ceilometer
index 00fc0d3..9bb3121 100644
--- a/lib/ceilometer
+++ b/lib/ceilometer
@@ -41,6 +41,7 @@
CEILOMETER_CONF=$CEILOMETER_CONF_DIR/ceilometer.conf
CEILOMETER_API_LOG_DIR=/var/log/ceilometer-api
CEILOMETER_AUTH_CACHE_DIR=${CEILOMETER_AUTH_CACHE_DIR:-/var/cache/ceilometer}
+CEILOMETER_WSGI_DIR=${CEILOMETER_WSGI_DIR:-/var/www/ceilometer}
# Support potential entry-points console scripts
CEILOMETER_BIN_DIR=$(get_python_exec_prefix)
@@ -52,6 +53,7 @@
CEILOMETER_SERVICE_PROTOCOL=http
CEILOMETER_SERVICE_HOST=$SERVICE_HOST
CEILOMETER_SERVICE_PORT=${CEILOMETER_SERVICE_PORT:-8777}
+CEILOMETER_USE_MOD_WSGI=$(trueorfalse False $CEILOMETER_USE_MOD_WSGI)
# To enable OSprofiler change value of this variable to "notifications,profiler"
CEILOMETER_NOTIFICATION_TOPICS=${CEILOMETER_NOTIFICATION_TOPICS:-notifications}
@@ -105,12 +107,39 @@
}
+# _cleanup_ceilometer_apache_wsgi() - Remove wsgi files, disable and remove apache vhost file
+function _cleanup_ceilometer_apache_wsgi {
+ sudo rm -f $CEILOMETER_WSGI_DIR/*
+ sudo rm -f $(apache_site_config_for ceilometer)
+}
+
# cleanup_ceilometer() - Remove residual data files, anything left over from previous
# runs that a clean run would need to clean up
function cleanup_ceilometer {
if [ "$CEILOMETER_BACKEND" != 'mysql' ] && [ "$CEILOMETER_BACKEND" != 'postgresql' ] ; then
mongo ceilometer --eval "db.dropDatabase();"
fi
+ if [ "$CEILOMETER_USE_MOD_WSGI" == "True" ]; then
+ _cleanup_ceilometer_apache_wsgi
+ fi
+}
+
+function _config_ceilometer_apache_wsgi {
+ sudo mkdir -p $CEILOMETER_WSGI_DIR
+
+ local ceilometer_apache_conf=$(apache_site_config_for ceilometer)
+ local apache_version=$(get_apache_version)
+
+ # copy proxy vhost and wsgi file
+ sudo cp $CEILOMETER_DIR/ceilometer/api/app.wsgi $CEILOMETER_WSGI_DIR/app
+
+ sudo cp $FILES/apache-ceilometer.template $ceilometer_apache_conf
+ sudo sed -e "
+ s|%PORT%|$CEILOMETER_SERVICE_PORT|g;
+ s|%APACHE_NAME%|$APACHE_NAME|g;
+ s|%WSGIAPP%|$CEILOMETER_WSGI_DIR/app|g;
+ s|%USER%|$STACK_USER|g
+ " -i $ceilometer_apache_conf
}
# configure_ceilometer() - Set config files, create data dirs, etc
@@ -163,6 +192,11 @@
iniset $CEILOMETER_CONF vmware host_username "$VMWAREAPI_USER"
iniset $CEILOMETER_CONF vmware host_password "$VMWAREAPI_PASSWORD"
fi
+
+ if [ "$CEILOMETER_USE_MOD_WSGI" == "True" ]; then
+ iniset $CEILOMETER_CONF api pecan_debug "False"
+ _config_ceilometer_apache_wsgi
+ fi
}
function configure_mongodb {
@@ -223,7 +257,16 @@
run_process ceilometer-acentral "ceilometer-agent-central --config-file $CEILOMETER_CONF"
run_process ceilometer-anotification "ceilometer-agent-notification --config-file $CEILOMETER_CONF"
run_process ceilometer-collector "ceilometer-collector --config-file $CEILOMETER_CONF"
- run_process ceilometer-api "ceilometer-api -d -v --log-dir=$CEILOMETER_API_LOG_DIR --config-file $CEILOMETER_CONF"
+
+ if [[ "$CEILOMETER_USE_MOD_WSGI" == "False" ]]; then
+ run_process ceilometer-api "ceilometer-api -d -v --log-dir=$CEILOMETER_API_LOG_DIR --config-file $CEILOMETER_CONF"
+ else
+ enable_apache_site ceilometer
+ restart_apache_server
+ tail_log ceilometer /var/log/$APACHE_NAME/ceilometer.log
+ tail_log ceilometer-api /var/log/$APACHE_NAME/ceilometer_access.log
+ fi
+
# Start the compute agent last to allow time for the collector to
# fully wake up and connect to the message bus. See bug #1355809
@@ -248,6 +291,10 @@
# stop_ceilometer() - Stop running processes
function stop_ceilometer {
+ if [ "$CEILOMETER_USE_MOD_WSGI" == "True" ]; then
+ disable_apache_site ceilometer
+ restart_apache_server
+ fi
# Kill the ceilometer screen windows
for serv in ceilometer-acompute ceilometer-acentral ceilometer-anotification ceilometer-collector ceilometer-api ceilometer-alarm-notifier ceilometer-alarm-evaluator; do
stop_process $serv
diff --git a/lib/cinder b/lib/cinder
index cbca9c0..b30a036 100644
--- a/lib/cinder
+++ b/lib/cinder
@@ -46,6 +46,9 @@
CINDER_API_PASTE_INI=$CINDER_CONF_DIR/api-paste.ini
# Public facing bits
+if is_ssl_enabled_service "cinder" || is_service_enabled tls-proxy; then
+ CINDER_SERVICE_PROTOCOL="https"
+fi
CINDER_SERVICE_HOST=${CINDER_SERVICE_HOST:-$SERVICE_HOST}
CINDER_SERVICE_PORT=${CINDER_SERVICE_PORT:-8776}
CINDER_SERVICE_PORT_INT=${CINDER_SERVICE_PORT_INT:-18776}
@@ -299,6 +302,20 @@
fi
iniset $CINDER_CONF DEFAULT osapi_volume_workers "$API_WORKERS"
+
+ iniset $CINDER_CONF DEFAULT glance_api_servers "${GLANCE_SERVICE_PROTOCOL}://${GLANCE_HOSTPORT}"
+ if is_ssl_enabled_service glance || is_service_enabled tls-proxy; then
+ iniset $CINDER_CONF DEFAULT glance_protocol https
+ fi
+
+ # Register SSL certificates if provided
+ if is_ssl_enabled_service cinder; then
+ ensure_certificates CINDER
+
+ iniset $CINDER_CONF DEFAULT ssl_cert_file "$CINDER_SSL_CERT"
+ iniset $CINDER_CONF DEFAULT ssl_key_file "$CINDER_SSL_KEY"
+ fi
+
}
# create_cinder_accounts() - Set up common required cinder accounts
@@ -399,6 +416,12 @@
# start_cinder() - Start running processes, including screen
function start_cinder {
+ local service_port=$CINDER_SERVICE_PORT
+ local service_protocol=$CINDER_SERVICE_PROTOCOL
+ if is_service_enabled tls-proxy; then
+ service_port=$CINDER_SERVICE_PORT_INT
+ service_protocol="http"
+ fi
if is_service_enabled c-vol; then
# Delete any old stack.conf
sudo rm -f /etc/tgt/conf.d/stack.conf
@@ -425,7 +448,7 @@
run_process c-api "$CINDER_BIN_DIR/cinder-api --config-file $CINDER_CONF"
echo "Waiting for Cinder API to start..."
- if ! wait_for_service $SERVICE_TIMEOUT $CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT; then
+ if ! wait_for_service $SERVICE_TIMEOUT $service_protocol://$CINDER_SERVICE_HOST:$service_port; then
die $LINENO "c-api did not start"
fi
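The `start_cinder` hunks above swap in the internal port and plain `http` for the readiness check when tls-proxy terminates SSL in front of the API. A minimal sketch of that swap, with `is_service_enabled` stubbed to report tls-proxy on; the port numbers are the defaults this diff declares:

```shell
# When a TLS proxy terminates SSL on the public port, the health check
# must poll the service's internal plaintext port instead.
CINDER_SERVICE_PORT=8776
CINDER_SERVICE_PORT_INT=18776
CINDER_SERVICE_PROTOCOL=https

function is_service_enabled {
    # Stub: pretend only tls-proxy is enabled.
    [ "$1" = "tls-proxy" ]
}

service_port=$CINDER_SERVICE_PORT
service_protocol=$CINDER_SERVICE_PROTOCOL
if is_service_enabled tls-proxy; then
    service_port=$CINDER_SERVICE_PORT_INT
    service_protocol="http"
fi

echo "poll $service_protocol://localhost:$service_port"
```

The public endpoint stays `https://...:8776` for clients; only the local wait loop talks to the backend directly on 18776.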
diff --git a/lib/dib b/lib/dib
index 3a1167f..d39d801 100644
--- a/lib/dib
+++ b/lib/dib
@@ -32,10 +32,7 @@
# install_dib() - Collect source and prepare
function install_dib {
- git_clone $DIB_REPO $DIB_DIR $DIB_BRANCH
- pushd $DIB_DIR
- pip_install ./
- popd
+ pip_install diskimage-builder
git_clone $TIE_REPO $TIE_DIR $TIE_BRANCH
git_clone $OCC_REPO $OCC_DIR $OCC_BRANCH
diff --git a/lib/glance b/lib/glance
index 6ca2fb5..4194842 100644
--- a/lib/glance
+++ b/lib/glance
@@ -51,8 +51,18 @@
GLANCE_BIN_DIR=$(get_python_exec_prefix)
fi
+if is_ssl_enabled_service "glance" || is_service_enabled tls-proxy; then
+ GLANCE_SERVICE_PROTOCOL="https"
+fi
+
# Glance connection info. Note the port must be specified.
-GLANCE_HOSTPORT=${GLANCE_HOSTPORT:-$SERVICE_HOST:9292}
+GLANCE_SERVICE_HOST=${GLANCE_SERVICE_HOST:-$SERVICE_HOST}
+GLANCE_SERVICE_PORT=${GLANCE_SERVICE_PORT:-9292}
+GLANCE_SERVICE_PORT_INT=${GLANCE_SERVICE_PORT_INT:-19292}
+GLANCE_HOSTPORT=${GLANCE_HOSTPORT:-$GLANCE_SERVICE_HOST:$GLANCE_SERVICE_PORT}
+GLANCE_SERVICE_PROTOCOL=${GLANCE_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}
+GLANCE_REGISTRY_PORT=${GLANCE_REGISTRY_PORT:-9191}
+GLANCE_REGISTRY_PORT_INT=${GLANCE_REGISTRY_PORT_INT:-19191}
# Tell Tempest this project is present
TEMPEST_SERVICES+=,glance
@@ -148,6 +158,26 @@
iniset $GLANCE_API_CONF glance_store stores "file, http, swift"
fi
+ if is_service_enabled tls-proxy; then
+ iniset $GLANCE_API_CONF DEFAULT bind_port $GLANCE_SERVICE_PORT_INT
+ iniset $GLANCE_REGISTRY_CONF DEFAULT bind_port $GLANCE_REGISTRY_PORT_INT
+ fi
+
+ # Register SSL certificates if provided
+ if is_ssl_enabled_service glance; then
+ ensure_certificates GLANCE
+
+ iniset $GLANCE_API_CONF DEFAULT cert_file "$GLANCE_SSL_CERT"
+ iniset $GLANCE_API_CONF DEFAULT key_file "$GLANCE_SSL_KEY"
+
+ iniset $GLANCE_REGISTRY_CONF DEFAULT cert_file "$GLANCE_SSL_CERT"
+ iniset $GLANCE_REGISTRY_CONF DEFAULT key_file "$GLANCE_SSL_KEY"
+ fi
+
+ if is_ssl_enabled_service glance || is_service_enabled tls-proxy; then
+ iniset $GLANCE_API_CONF DEFAULT registry_client_protocol https
+ fi
+
cp -p $GLANCE_DIR/etc/glance-registry-paste.ini $GLANCE_REGISTRY_PASTE_INI
cp -p $GLANCE_DIR/etc/glance-api-paste.ini $GLANCE_API_PASTE_INI
@@ -176,6 +206,14 @@
cp -p $GLANCE_DIR/etc/schema-image.json $GLANCE_SCHEMA_JSON
cp -p $GLANCE_DIR/etc/metadefs/*.json $GLANCE_METADEF_DIR
+
+ if is_ssl_enabled_service "cinder" || is_service_enabled tls-proxy; then
+ CINDER_SERVICE_HOST=${CINDER_SERVICE_HOST:-$SERVICE_HOST}
+ CINDER_SERVICE_PORT=${CINDER_SERVICE_PORT:-8776}
+
+ iniset $GLANCE_API_CONF DEFAULT cinder_endpoint_template "https://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v1/%(project_id)s"
+ iniset $GLANCE_CACHE_CONF DEFAULT cinder_endpoint_template "https://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v1/%(project_id)s"
+ fi
}
# create_glance_accounts() - Set up common required glance accounts
@@ -206,9 +244,9 @@
"image" "Glance Image Service")
get_or_create_endpoint $glance_service \
"$REGION_NAME" \
- "http://$GLANCE_HOSTPORT" \
- "http://$GLANCE_HOSTPORT" \
- "http://$GLANCE_HOSTPORT"
+ "$GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT" \
+ "$GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT" \
+ "$GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT"
fi
fi
}
@@ -265,10 +303,17 @@
# start_glance() - Start running processes, including screen
function start_glance {
+ local service_protocol=$GLANCE_SERVICE_PROTOCOL
+ if is_service_enabled tls-proxy; then
+ start_tls_proxy '*' $GLANCE_SERVICE_PORT $GLANCE_SERVICE_HOST $GLANCE_SERVICE_PORT_INT &
+ start_tls_proxy '*' $GLANCE_REGISTRY_PORT $GLANCE_SERVICE_HOST $GLANCE_REGISTRY_PORT_INT &
+ fi
+
run_process g-reg "$GLANCE_BIN_DIR/glance-registry --config-file=$GLANCE_CONF_DIR/glance-registry.conf"
run_process g-api "$GLANCE_BIN_DIR/glance-api --config-file=$GLANCE_CONF_DIR/glance-api.conf"
+
echo "Waiting for g-api ($GLANCE_HOSTPORT) to start..."
- if ! timeout $SERVICE_TIMEOUT sh -c "while ! wget --no-proxy -q -O- http://$GLANCE_HOSTPORT; do sleep 1; done"; then
+ if ! wait_for_service $SERVICE_TIMEOUT $GLANCE_SERVICE_PROTOCOL://$GLANCE_HOSTPORT; then
die $LINENO "g-api did not start"
fi
}
diff --git a/lib/heat b/lib/heat
index f64cc90..737598d 100644
--- a/lib/heat
+++ b/lib/heat
@@ -40,6 +40,8 @@
HEAT_ENV_DIR=$HEAT_CONF_DIR/environment.d
HEAT_TEMPLATES_DIR=$HEAT_CONF_DIR/templates
HEAT_STACK_DOMAIN=`trueorfalse True $HEAT_STACK_DOMAIN`
+HEAT_API_HOST=${HEAT_API_HOST:-$HOST_IP}
+HEAT_API_PORT=${HEAT_API_PORT:-8004}
# other default options
HEAT_DEFERRED_AUTH=${HEAT_DEFERRED_AUTH:-trusts}
@@ -69,6 +71,9 @@
# configure_heat() - Set config files, create data dirs, etc
function configure_heat {
setup_develop $HEAT_DIR
+ if [[ "$HEAT_STANDALONE" = "True" ]]; then
+ setup_develop $HEAT_DIR/contrib/heat_keystoneclient_v2
+ fi
if [[ ! -d $HEAT_CONF_DIR ]]; then
sudo mkdir -p $HEAT_CONF_DIR
@@ -83,8 +88,6 @@
HEAT_ENGINE_PORT=${HEAT_ENGINE_PORT:-8001}
HEAT_API_CW_HOST=${HEAT_API_CW_HOST:-$HOST_IP}
HEAT_API_CW_PORT=${HEAT_API_CW_PORT:-8003}
- HEAT_API_HOST=${HEAT_API_HOST:-$HOST_IP}
- HEAT_API_PORT=${HEAT_API_PORT:-8004}
HEAT_API_PASTE_FILE=$HEAT_CONF_DIR/api-paste.ini
HEAT_POLICY_FILE=$HEAT_CONF_DIR/policy.json
@@ -113,14 +116,18 @@
configure_auth_token_middleware $HEAT_CONF heat $HEAT_AUTH_CACHE_DIR
if is_ssl_enabled_service "key"; then
- iniset $HEAT_CONF clients_keystone ca_file $KEYSTONE_SSL_CA
+ iniset $HEAT_CONF clients_keystone ca_file $SSL_BUNDLE_FILE
fi
# ec2authtoken
iniset $HEAT_CONF ec2authtoken auth_uri $KEYSTONE_SERVICE_URI/v2.0
# paste_deploy
- [[ "$HEAT_STANDALONE" = "True" ]] && iniset $HEAT_CONF paste_deploy flavor standalone
+ if [[ "$HEAT_STANDALONE" = "True" ]]; then
+ iniset $HEAT_CONF paste_deploy flavor standalone
+ iniset $HEAT_CONF DEFAULT keystone_backend heat_keystoneclient_v2.client.KeystoneClientV2
+ iniset $HEAT_CONF clients_heat url "http://$HEAT_API_HOST:$HEAT_API_PORT/v1/%(tenant_id)s"
+ fi
# OpenStack API
iniset $HEAT_CONF heat_api bind_port $HEAT_API_PORT
@@ -131,6 +138,18 @@
# Cloudwatch API
iniset $HEAT_CONF heat_api_cloudwatch bind_port $HEAT_API_CW_PORT
+ if is_ssl_enabled_service "key" || is_service_enabled tls-proxy; then
+ iniset $HEAT_CONF clients_keystone ca_file $SSL_BUNDLE_FILE
+ fi
+
+ if is_ssl_enabled_service "nova" || is_service_enabled tls-proxy; then
+ iniset $HEAT_CONF clients_nova ca_file $SSL_BUNDLE_FILE
+ fi
+
+ if is_ssl_enabled_service "cinder" || is_service_enabled tls-proxy; then
+ iniset $HEAT_CONF clients_cinder ca_file $SSL_BUNDLE_FILE
+ fi
+
# heat environment
sudo mkdir -p $HEAT_ENV_DIR
sudo chown $STACK_USER $HEAT_ENV_DIR
diff --git a/lib/horizon b/lib/horizon
index c0c3f82..755be18 100644
--- a/lib/horizon
+++ b/lib/horizon
@@ -92,24 +92,6 @@
local local_settings=$HORIZON_DIR/openstack_dashboard/local/local_settings.py
cp $HORIZON_SETTINGS $local_settings
- if is_service_enabled neutron; then
- _horizon_config_set $local_settings OPENSTACK_NEUTRON_NETWORK enable_security_group $Q_USE_SECGROUP
- fi
- # enable loadbalancer dashboard in case service is enabled
- if is_service_enabled q-lbaas; then
- _horizon_config_set $local_settings OPENSTACK_NEUTRON_NETWORK enable_lb True
- fi
-
- # enable firewall dashboard in case service is enabled
- if is_service_enabled q-fwaas; then
- _horizon_config_set $local_settings OPENSTACK_NEUTRON_NETWORK enable_firewall True
- fi
-
- # enable VPN dashboard in case service is enabled
- if is_service_enabled q-vpn; then
- _horizon_config_set $local_settings OPENSTACK_NEUTRON_NETWORK enable_vpn True
- fi
-
_horizon_config_set $local_settings "" OPENSTACK_HOST \"${KEYSTONE_SERVICE_HOST}\"
_horizon_config_set $local_settings "" OPENSTACK_KEYSTONE_URL "\"${KEYSTONE_SERVICE_PROTOCOL}://${KEYSTONE_SERVICE_HOST}:${KEYSTONE_SERVICE_PORT}/v2.0\""
if [[ -n "$KEYSTONE_TOKEN_HASH_ALGORITHM" ]]; then
@@ -123,12 +105,6 @@
# Create an empty directory that apache uses as docroot
sudo mkdir -p $HORIZON_DIR/.blackhole
- # Apache 2.4 uses mod_authz_host for access control now (instead of "Allow")
- local horizon_require=''
- if check_apache_version "2.4" ; then
- horizon_require='Require all granted'
- fi
-
local horizon_conf=$(apache_site_config_for horizon)
# Configure apache to run horizon
@@ -138,7 +114,6 @@
s,%HORIZON_DIR%,$HORIZON_DIR,g;
s,%APACHE_NAME%,$APACHE_NAME,g;
s,%DEST%,$DEST,g;
- s,%HORIZON_REQUIRE%,$horizon_require,g;
\" $FILES/apache-horizon.template >$horizon_conf"
if is_ubuntu; then
diff --git a/lib/infra b/lib/infra
index e18c66e..cd439e7 100644
--- a/lib/infra
+++ b/lib/infra
@@ -19,7 +19,7 @@
# Defaults
# --------
-PBR_DIR=$DEST/pbr
+GITDIR["pbr"]=$DEST/pbr
REQUIREMENTS_DIR=$DEST/requirements
# Entry Points
@@ -31,8 +31,12 @@
git_clone $REQUIREMENTS_REPO $REQUIREMENTS_DIR $REQUIREMENTS_BRANCH
# Install pbr
- git_clone $PBR_REPO $PBR_DIR $PBR_BRANCH
- setup_install $PBR_DIR
+ if use_library_from_git "pbr"; then
+ git_clone_by_name "pbr"
+ setup_lib "pbr"
+ else
+ pip_install "pbr"
+ fi
}
# Restore xtrace
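The pbr hunk replaces a dedicated `PBR_DIR` variable with the `GITDIR` associative array plus a `use_library_from_git` check. A simplified sketch of that selection pattern (the helper names are DevStack's; this body is an assumption that only checks a comma-separated `LIBS_FROM_GIT` list, without the real helper's extra aliasing):

```shell
declare -A GITDIR
GITDIR["pbr"]=/opt/stack/pbr

# Libraries the operator wants installed from source instead of pip
LIBS_FROM_GIT="pbr,oslo.config"

# Succeeds when $1 appears in the comma-separated LIBS_FROM_GIT list
use_library_from_git() {
    local name=$1
    [[ ,${LIBS_FROM_GIT}, =~ ,${name}, ]]
}

if use_library_from_git "pbr"; then
    echo "clone and install from ${GITDIR["pbr"]}"
else
    echo "pip install pbr"
fi
```

Here the branch taken is the git clone path, since `pbr` is in the list; dropping it from `LIBS_FROM_GIT` falls back to pip.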
diff --git a/lib/keystone b/lib/keystone
index 06f6735..1c67835 100644
--- a/lib/keystone
+++ b/lib/keystone
@@ -38,7 +38,11 @@
KEYSTONE_CONF=$KEYSTONE_CONF_DIR/keystone.conf
KEYSTONE_PASTE_INI=${KEYSTONE_PASTE_INI:-$KEYSTONE_CONF_DIR/keystone-paste.ini}
KEYSTONE_AUTH_CACHE_DIR=${KEYSTONE_AUTH_CACHE_DIR:-/var/cache/keystone}
-KEYSTONE_WSGI_DIR=${KEYSTONE_WSGI_DIR:-/var/www/keystone}
+if is_suse; then
+ KEYSTONE_WSGI_DIR=${KEYSTONE_WSGI_DIR:-/srv/www/htdocs/keystone}
+else
+ KEYSTONE_WSGI_DIR=${KEYSTONE_WSGI_DIR:-/var/www/keystone}
+fi
KEYSTONEMIDDLEWARE_DIR=$DEST/keystonemiddleware
KEYSTONECLIENT_DIR=$DEST/python-keystoneclient
@@ -91,7 +95,7 @@
KEYSTONE_VALID_ASSIGNMENT_BACKENDS=kvs,ldap,sql
# if we are running with SSL use https protocols
-if is_ssl_enabled_service "key"; then
+if is_ssl_enabled_service "key" || is_service_enabled tls-proxy; then
KEYSTONE_AUTH_PROTOCOL="https"
KEYSTONE_SERVICE_PROTOCOL="https"
fi
@@ -119,12 +123,20 @@
sudo mkdir -p $KEYSTONE_WSGI_DIR
local keystone_apache_conf=$(apache_site_config_for keystone)
- local apache_version=$(get_apache_version)
+ local keystone_ssl=""
+ local keystone_certfile=""
+ local keystone_keyfile=""
+ local keystone_service_port=$KEYSTONE_SERVICE_PORT
+ local keystone_auth_port=$KEYSTONE_AUTH_PORT
- if [[ ${apache_version#*\.} -ge 4 ]]; then
- # Apache 2.4 supports custom error log formats
- # this should mirror the original log formatting.
- local errorlogformat='ErrorLogFormat "%{cu}t %M"'
+ if is_ssl_enabled_service key; then
+ keystone_ssl="SSLEngine On"
+ keystone_certfile="SSLCertificateFile $KEYSTONE_SSL_CERT"
+ keystone_keyfile="SSLCertificateKeyFile $KEYSTONE_SSL_KEY"
+ fi
+ if is_service_enabled tls-proxy; then
+ keystone_service_port=$KEYSTONE_SERVICE_PORT_INT
+ keystone_auth_port=$KEYSTONE_AUTH_PORT_INT
fi
# copy proxy vhost and wsgi file
@@ -133,13 +145,15 @@
sudo cp $FILES/apache-keystone.template $keystone_apache_conf
sudo sed -e "
- s|%PUBLICPORT%|$KEYSTONE_SERVICE_PORT|g;
- s|%ADMINPORT%|$KEYSTONE_AUTH_PORT|g;
+ s|%PUBLICPORT%|$keystone_service_port|g;
+ s|%ADMINPORT%|$keystone_auth_port|g;
s|%APACHE_NAME%|$APACHE_NAME|g;
s|%PUBLICWSGI%|$KEYSTONE_WSGI_DIR/main|g;
s|%ADMINWSGI%|$KEYSTONE_WSGI_DIR/admin|g;
+ s|%SSLENGINE%|$keystone_ssl|g;
+ s|%SSLCERTFILE%|$keystone_certfile|g;
+ s|%SSLKEYFILE%|$keystone_keyfile|g;
s|%USER%|$STACK_USER|g
- s|%ERRORLOGFORMAT%|$errorlogformat|g;
" -i $keystone_apache_conf
}
@@ -203,8 +217,13 @@
fi
# Set the URL advertised in the ``versions`` structure returned by the '/' route
- iniset $KEYSTONE_CONF DEFAULT public_endpoint "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:%(public_port)s/"
- iniset $KEYSTONE_CONF DEFAULT admin_endpoint "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:%(admin_port)s/"
+ if is_service_enabled tls-proxy; then
+ iniset $KEYSTONE_CONF DEFAULT public_endpoint "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_SERVICE_PORT/"
+ iniset $KEYSTONE_CONF DEFAULT admin_endpoint "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_AUTH_PORT/"
+ else
+ iniset $KEYSTONE_CONF DEFAULT public_endpoint "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:%(public_port)s/"
+ iniset $KEYSTONE_CONF DEFAULT admin_endpoint "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:%(admin_port)s/"
+ fi
iniset $KEYSTONE_CONF DEFAULT admin_bind_host "$KEYSTONE_ADMIN_BIND_HOST"
# Register SSL certificates if provided
@@ -283,7 +302,7 @@
fi
if [ "$KEYSTONE_USE_MOD_WSGI" == "True" ]; then
- iniset $KEYSTONE_CONF DEFAULT debug "True"
+ iniset $KEYSTONE_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
# Eliminate the %(asctime)s.%(msecs)03d from the log format strings
iniset $KEYSTONE_CONF DEFAULT logging_context_format_string "%(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s"
iniset $KEYSTONE_CONF DEFAULT logging_default_format_string "%(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s"
@@ -415,7 +434,7 @@
iniset $conf_file $section auth_port $KEYSTONE_AUTH_PORT
iniset $conf_file $section auth_protocol $KEYSTONE_AUTH_PROTOCOL
iniset $conf_file $section identity_uri $KEYSTONE_AUTH_URI
- iniset $conf_file $section cafile $KEYSTONE_SSL_CA
+ iniset $conf_file $section cafile $SSL_BUNDLE_FILE
configure_API_version $conf_file $IDENTITY_API_VERSION $section
iniset $conf_file $section admin_tenant_name $SERVICE_TENANT_NAME
iniset $conf_file $section admin_user $admin_user
@@ -492,6 +511,9 @@
setup_develop $KEYSTONE_DIR
if [ "$KEYSTONE_USE_MOD_WSGI" == "True" ]; then
install_apache_wsgi
+ if is_ssl_enabled_service "key"; then
+ enable_mod_ssl
+ fi
fi
}
@@ -499,8 +521,10 @@
function start_keystone {
# Get right service port for testing
local service_port=$KEYSTONE_SERVICE_PORT
+ local auth_protocol=$KEYSTONE_AUTH_PROTOCOL
if is_service_enabled tls-proxy; then
service_port=$KEYSTONE_SERVICE_PORT_INT
+ auth_protocol="http"
fi
if [ "$KEYSTONE_USE_MOD_WSGI" == "True" ]; then
@@ -509,15 +533,19 @@
tail_log key /var/log/$APACHE_NAME/keystone.log
tail_log key-access /var/log/$APACHE_NAME/keystone_access.log
else
+ local EXTRA_PARAMS=""
+ if [ "$ENABLE_DEBUG_LOG_LEVEL" == "True" ]; then
+ EXTRA_PARAMS="--debug"
+ fi
# Start Keystone in a screen window
- run_process key "$KEYSTONE_DIR/bin/keystone-all --config-file $KEYSTONE_CONF --debug"
+ run_process key "$KEYSTONE_DIR/bin/keystone-all --config-file $KEYSTONE_CONF $EXTRA_PARAMS"
fi
echo "Waiting for keystone to start..."
    # Check that the keystone service is running. Even if the TLS tunnel
    # is enabled, make sure the internal port is checked using
    # unencrypted traffic at this point.
- if ! timeout $SERVICE_TIMEOUT sh -c "while ! curl --noproxy '*' -k -s http://$KEYSTONE_SERVICE_HOST:$service_port/v$IDENTITY_API_VERSION/ >/dev/null; do sleep 1; done"; then
+ if ! timeout $SERVICE_TIMEOUT sh -c "while ! curl --noproxy '*' -k -s $auth_protocol://$KEYSTONE_SERVICE_HOST:$service_port/v$IDENTITY_API_VERSION/ >/dev/null; do sleep 1; done"; then
die $LINENO "keystone did not start"
fi
diff --git a/lib/neutron b/lib/neutron
index 96cd47b..a48f519 100644
--- a/lib/neutron
+++ b/lib/neutron
@@ -69,6 +69,11 @@
PRIVATE_SUBNET_NAME=${PRIVATE_SUBNET_NAME:-"private-subnet"}
PUBLIC_SUBNET_NAME=${PUBLIC_SUBNET_NAME:-"public-subnet"}
+if is_ssl_enabled_service "neutron" || is_service_enabled tls-proxy; then
+ Q_PROTOCOL="https"
+fi
+
+
# Set up default directories
NEUTRON_DIR=$DEST/neutron
NEUTRONCLIENT_DIR=$DEST/python-neutronclient
@@ -105,8 +110,12 @@
Q_PLUGIN=${Q_PLUGIN:-ml2}
# Default Neutron Port
Q_PORT=${Q_PORT:-9696}
+# Default Neutron Internal Port when using TLS proxy
+Q_PORT_INT=${Q_PORT_INT:-19696}
# Default Neutron Host
Q_HOST=${Q_HOST:-$SERVICE_HOST}
+# Default protocol
+Q_PROTOCOL=${Q_PROTOCOL:-$SERVICE_PROTOCOL}
# Default admin username
Q_ADMIN_USERNAME=${Q_ADMIN_USERNAME:-neutron}
# Default auth strategy
@@ -409,7 +418,7 @@
iniset $NOVA_CONF neutron auth_strategy "$Q_AUTH_STRATEGY"
iniset $NOVA_CONF neutron admin_tenant_name "$SERVICE_TENANT_NAME"
iniset $NOVA_CONF neutron region_name "$REGION_NAME"
- iniset $NOVA_CONF neutron url "http://$Q_HOST:$Q_PORT"
+ iniset $NOVA_CONF neutron url "${Q_PROTOCOL}://$Q_HOST:$Q_PORT"
if [[ "$Q_USE_SECGROUP" == "True" ]]; then
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
@@ -448,13 +457,13 @@
function create_neutron_accounts {
local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
- local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
+ local service_role=$(openstack role list | awk "/ service / { print \$2 }")
if [[ "$ENABLED_SERVICES" =~ "q-svc" ]]; then
local neutron_user=$(get_or_create_user "neutron" \
"$SERVICE_PASSWORD" $service_tenant)
- get_or_add_user_role $admin_role $neutron_user $service_tenant
+ get_or_add_user_role $service_role $neutron_user $service_tenant
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
@@ -462,9 +471,9 @@
"network" "Neutron Service")
get_or_create_endpoint $neutron_service \
"$REGION_NAME" \
- "http://$SERVICE_HOST:$Q_PORT/" \
- "http://$SERVICE_HOST:$Q_PORT/" \
- "http://$SERVICE_HOST:$Q_PORT/"
+ "$Q_PROTOCOL://$SERVICE_HOST:$Q_PORT/" \
+ "$Q_PROTOCOL://$SERVICE_HOST:$Q_PORT/" \
+ "$Q_PROTOCOL://$SERVICE_HOST:$Q_PORT/"
fi
fi
}
@@ -590,12 +599,25 @@
# Start running processes, including screen
function start_neutron_service_and_check {
local cfg_file_options="$(determine_config_files neutron-server)"
+ local service_port=$Q_PORT
+ local service_protocol=$Q_PROTOCOL
+ if is_service_enabled tls-proxy; then
+ service_port=$Q_PORT_INT
+ service_protocol="http"
+ fi
# Start the Neutron service
run_process q-svc "python $NEUTRON_BIN_DIR/neutron-server $cfg_file_options"
echo "Waiting for Neutron to start..."
- if ! timeout $SERVICE_TIMEOUT sh -c "while ! wget --no-proxy -q -O- http://$Q_HOST:$Q_PORT; do sleep 1; done"; then
+ if is_ssl_enabled_service "neutron"; then
+ ssl_ca="--ca-certificate=${SSL_BUNDLE_FILE}"
+ fi
+ if ! timeout $SERVICE_TIMEOUT sh -c "while ! wget ${ssl_ca} --no-proxy -q -O- $service_protocol://$Q_HOST:$service_port; do sleep 1; done"; then
die $LINENO "Neutron did not start"
fi
+ # Start proxy if enabled
+ if is_service_enabled tls-proxy; then
+ start_tls_proxy '*' $Q_PORT $Q_HOST $Q_PORT_INT &
+ fi
}
# Start running processes, including screen
@@ -730,6 +752,23 @@
setup_colorized_logging $NEUTRON_CONF DEFAULT project_id
fi
+ if is_service_enabled tls-proxy; then
+ # Set the internal service port so the TLS proxy can take over the original

+ iniset $NEUTRON_CONF DEFAULT bind_port "$Q_PORT_INT"
+ fi
+
+ if is_ssl_enabled_service "nova"; then
+ iniset $NEUTRON_CONF DEFAULT nova_ca_certificates_file "$SSL_BUNDLE_FILE"
+ fi
+
+ if is_ssl_enabled_service "neutron"; then
+ ensure_certificates NEUTRON
+
+ iniset $NEUTRON_CONF DEFAULT use_ssl True
+ iniset $NEUTRON_CONF DEFAULT ssl_cert_file "$NEUTRON_SSL_CERT"
+ iniset $NEUTRON_CONF DEFAULT ssl_key_file "$NEUTRON_SSL_KEY"
+ fi
+
_neutron_setup_rootwrap
}
@@ -850,6 +889,9 @@
cp $NEUTRON_DIR/etc/api-paste.ini $Q_API_PASTE_FILE
cp $NEUTRON_DIR/etc/policy.json $Q_POLICY_FILE
+ # Allow the neutron user to administer neutron, matching the neutron service account
+ sed -i 's/"context_is_admin": "role:admin"/"context_is_admin": "role:admin or user_name:neutron"/g' $Q_POLICY_FILE
+
# Update either configuration file with plugin
iniset $NEUTRON_CONF DEFAULT core_plugin $Q_PLUGIN_CLASS
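The `sed` edit to `$Q_POLICY_FILE` above rewrites the `context_is_admin` rule in place. The same substitution, demonstrated on a throwaway copy (the file content here is a minimal stand-in, not the full Neutron policy.json):

```shell
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
    "context_is_admin": "role:admin"
}
EOF

# Same substitution as the diff: let the neutron service user act as admin
sed -i 's/"context_is_admin": "role:admin"/"context_is_admin": "role:admin or user_name:neutron"/g' "$tmp"

grep -q 'user_name:neutron' "$tmp" && echo "policy updated"
```

A plain `sed` is used because `iniset` targets INI files, not JSON policy files.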
diff --git a/lib/neutron_plugins/cisco b/lib/neutron_plugins/cisco
index da90ee3..1406e37 100644
--- a/lib/neutron_plugins/cisco
+++ b/lib/neutron_plugins/cisco
@@ -20,38 +20,12 @@
# Specify the VLAN range
Q_CISCO_PLUGIN_VLAN_RANGES=${Q_CISCO_PLUGIN_VLAN_RANGES:-vlan:1:4094}
-# Specify ncclient package information
-NCCLIENT_DIR=$DEST/ncclient
-NCCLIENT_VERSION=${NCCLIENT_VERSION:-0.3.1}
-NCCLIENT_REPO=${NCCLIENT_REPO:-git://github.com/CiscoSystems/ncclient.git}
-NCCLIENT_BRANCH=${NCCLIENT_BRANCH:-master}
-
# This routine put a prefix on an existing function name
function _prefix_function {
declare -F $1 > /dev/null || die "$1 doesn't exist"
eval "$(echo "${2}_${1}()"; declare -f ${1} | tail -n +2)"
}
-function _has_ovs_subplugin {
- local subplugin
- for subplugin in ${Q_CISCO_PLUGIN_SUBPLUGINS[@]}; do
- if [[ "$subplugin" == "openvswitch" ]]; then
- return 0
- fi
- done
- return 1
-}
-
-function _has_nexus_subplugin {
- local subplugin
- for subplugin in ${Q_CISCO_PLUGIN_SUBPLUGINS[@]}; do
- if [[ "$subplugin" == "nexus" ]]; then
- return 0
- fi
- done
- return 1
-}
-
function _has_n1kv_subplugin {
local subplugin
for subplugin in ${Q_CISCO_PLUGIN_SUBPLUGINS[@]}; do
@@ -62,27 +36,6 @@
return 1
}
-# This routine populates the cisco config file with the information for
-# a particular nexus switch
-function _config_switch {
- local cisco_cfg_file=$1
- local switch_ip=$2
- local username=$3
- local password=$4
- local ssh_port=$5
- shift 5
-
- local section="NEXUS_SWITCH:$switch_ip"
- iniset $cisco_cfg_file $section username $username
- iniset $cisco_cfg_file $section password $password
- iniset $cisco_cfg_file $section ssh_port $ssh_port
-
- while [[ ${#@} != 0 ]]; do
- iniset $cisco_cfg_file $section $1 $2
- shift 2
- done
-}
-
# Prefix openvswitch plugin routines with "ovs" in order to differentiate from
# cisco plugin routines. This means, ovs plugin routines will coexist with cisco
# plugin routines in this script.
@@ -98,73 +51,17 @@
_prefix_function neutron_plugin_setup_interface_driver ovs
_prefix_function has_neutron_plugin_security_group ovs
-# Check the version of the installed ncclient package
-function check_ncclient_version {
-python << EOF
-version = '$NCCLIENT_VERSION'
-import sys
-try:
- import pkg_resources
- import ncclient
- module_version = pkg_resources.get_distribution('ncclient').version
- if version != module_version:
- sys.exit(1)
-except:
- sys.exit(1)
-EOF
-}
-
-# Install the ncclient package
-function install_ncclient {
- git_clone $NCCLIENT_REPO $NCCLIENT_DIR $NCCLIENT_BRANCH
- (cd $NCCLIENT_DIR; sudo python setup.py install)
-}
-
-# Check if the required version of ncclient has been installed
-function is_ncclient_installed {
- # Check if the Cisco ncclient repository exists
- if [[ -d $NCCLIENT_DIR ]]; then
- remotes=$(cd $NCCLIENT_DIR; git remote -v | grep fetch | awk '{ print $2}')
- for remote in $remotes; do
- if [[ $remote == $NCCLIENT_REPO ]]; then
- break;
- fi
- done
- if [[ $remote != $NCCLIENT_REPO ]]; then
- return 1
- fi
- else
- return 1
- fi
-
- # Check if the ncclient is installed with the right version
- if ! check_ncclient_version; then
- return 1
- fi
- return 0
-}
-
function has_neutron_plugin_security_group {
- if _has_ovs_subplugin; then
- ovs_has_neutron_plugin_security_group
- else
- return 1
- fi
+ return 1
}
function is_neutron_ovs_base_plugin {
- # Cisco uses OVS if openvswitch subplugin is deployed
- _has_ovs_subplugin
return
}
# populate required nova configuration parameters
function neutron_plugin_create_nova_conf {
- if _has_ovs_subplugin; then
- ovs_neutron_plugin_create_nova_conf
- else
- _neutron_ovs_base_configure_nova_vif_driver
- fi
+ _neutron_ovs_base_configure_nova_vif_driver
}
function neutron_plugin_install_agent_packages {
@@ -177,32 +74,14 @@
# setup default subplugins
if [ ! -v Q_CISCO_PLUGIN_SUBPLUGINS ]; then
declare -ga Q_CISCO_PLUGIN_SUBPLUGINS
- Q_CISCO_PLUGIN_SUBPLUGINS=(openvswitch nexus)
+ Q_CISCO_PLUGIN_SUBPLUGINS=(n1kv)
fi
- if _has_ovs_subplugin; then
- ovs_neutron_plugin_configure_common
- Q_PLUGIN_EXTRA_CONF_PATH=etc/neutron/plugins/cisco
- Q_PLUGIN_EXTRA_CONF_FILES=(cisco_plugins.ini)
- # Copy extra config files to /etc so that they can be modified
- # later according to Cisco-specific localrc settings.
- mkdir -p /$Q_PLUGIN_EXTRA_CONF_PATH
- local f
- local extra_conf_file
- for (( f=0; $f < ${#Q_PLUGIN_EXTRA_CONF_FILES[@]}; f+=1 )); do
- extra_conf_file=$Q_PLUGIN_EXTRA_CONF_PATH/${Q_PLUGIN_EXTRA_CONF_FILES[$f]}
- cp $NEUTRON_DIR/$extra_conf_file /$extra_conf_file
- done
- else
- Q_PLUGIN_CONF_PATH=etc/neutron/plugins/cisco
- Q_PLUGIN_CONF_FILENAME=cisco_plugins.ini
- fi
+ Q_PLUGIN_CONF_PATH=etc/neutron/plugins/cisco
+ Q_PLUGIN_CONF_FILENAME=cisco_plugins.ini
Q_PLUGIN_CLASS="neutron.plugins.cisco.network_plugin.PluginV2"
}
function neutron_plugin_configure_debug_command {
- if _has_ovs_subplugin; then
- ovs_neutron_plugin_configure_debug_command
- fi
}
function neutron_plugin_configure_dhcp_agent {
@@ -210,53 +89,6 @@
}
function neutron_plugin_configure_l3_agent {
- if _has_ovs_subplugin; then
- ovs_neutron_plugin_configure_l3_agent
- fi
-}
-
-function _configure_nexus_subplugin {
- local cisco_cfg_file=$1
-
- # Install a known compatible ncclient from the Cisco repository if necessary
- if ! is_ncclient_installed; then
- # Preserve the two global variables
- local offline=$OFFLINE
- local reclone=$RECLONE
- # Change their values to allow installation
- OFFLINE=False
- RECLONE=yes
- install_ncclient
- # Restore their values
- OFFLINE=$offline
- RECLONE=$reclone
- fi
-
- # Setup default nexus switch information
- if [ ! -v Q_CISCO_PLUGIN_SWITCH_INFO ]; then
- declare -A Q_CISCO_PLUGIN_SWITCH_INFO
- HOST_NAME=$(hostname)
- Q_CISCO_PLUGIN_SWITCH_INFO=([1.1.1.1]=stack:stack:22:${HOST_NAME}:1/10)
- else
- iniset $cisco_cfg_file CISCO nexus_driver neutron.plugins.cisco.nexus.cisco_nexus_network_driver_v2.CiscoNEXUSDriver
- fi
-
- # Setup the switch configurations
- local nswitch
- local sw_info
- local segment
- local sw_info_array
- declare -i count=0
- for nswitch in ${!Q_CISCO_PLUGIN_SWITCH_INFO[@]}; do
- sw_info=${Q_CISCO_PLUGIN_SWITCH_INFO[$nswitch]}
- sw_info_array=${sw_info//:/ }
- sw_info_array=( $sw_info_array )
- count=${#sw_info_array[@]}
- if [[ $count < 5 || $(( ($count-3) % 2 )) != 0 ]]; then
- die $LINENO "Incorrect switch configuration: ${Q_CISCO_PLUGIN_SWITCH_INFO[$nswitch]}"
- fi
- _config_switch $cisco_cfg_file $nswitch ${sw_info_array[@]}
- done
}
# Configure n1kv plugin
@@ -279,48 +111,29 @@
}
function neutron_plugin_configure_plugin_agent {
- if _has_ovs_subplugin; then
- ovs_neutron_plugin_configure_plugin_agent
- fi
}
function neutron_plugin_configure_service {
local subplugin
local cisco_cfg_file
- if _has_ovs_subplugin; then
- ovs_neutron_plugin_configure_service
- cisco_cfg_file=/${Q_PLUGIN_EXTRA_CONF_FILES[0]}
- else
- cisco_cfg_file=/$Q_PLUGIN_CONF_FILE
- fi
+ cisco_cfg_file=/$Q_PLUGIN_CONF_FILE
# Setup the [CISCO_PLUGINS] section
if [[ ${#Q_CISCO_PLUGIN_SUBPLUGINS[@]} > 2 ]]; then
die $LINENO "At most two subplugins are supported."
fi
- if _has_ovs_subplugin && _has_n1kv_subplugin; then
- die $LINENO "OVS subplugin and n1kv subplugin cannot coexist"
- fi
-
# Setup the subplugins
- inicomment $cisco_cfg_file CISCO_PLUGINS nexus_plugin
inicomment $cisco_cfg_file CISCO_PLUGINS vswitch_plugin
inicomment $cisco_cfg_file CISCO_TEST host
for subplugin in ${Q_CISCO_PLUGIN_SUBPLUGINS[@]}; do
case $subplugin in
- nexus) iniset $cisco_cfg_file CISCO_PLUGINS nexus_plugin neutron.plugins.cisco.nexus.cisco_nexus_plugin_v2.NexusPlugin;;
- openvswitch) iniset $cisco_cfg_file CISCO_PLUGINS vswitch_plugin neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2;;
n1kv) iniset $cisco_cfg_file CISCO_PLUGINS vswitch_plugin neutron.plugins.cisco.n1kv.n1kv_neutron_plugin.N1kvNeutronPluginV2;;
*) die $LINENO "Unsupported cisco subplugin: $subplugin";;
esac
done
- if _has_nexus_subplugin; then
- _configure_nexus_subplugin $cisco_cfg_file
- fi
-
if _has_n1kv_subplugin; then
_configure_n1kv_subplugin $cisco_cfg_file
fi
diff --git a/lib/neutron_plugins/plumgrid b/lib/neutron_plugins/plumgrid
index 37b9e4c..7950ac0 100644
--- a/lib/neutron_plugins/plumgrid
+++ b/lib/neutron_plugins/plumgrid
@@ -45,8 +45,8 @@
}
function has_neutron_plugin_security_group {
- # False
- return 1
+ # Returning 0 (success) reports that security groups are enabled
+ return 0
}
function neutron_plugin_check_adv_test_requirements {
diff --git a/lib/neutron_plugins/services/loadbalancer b/lib/neutron_plugins/services/loadbalancer
index 78e7738..f84b710 100644
--- a/lib/neutron_plugins/services/loadbalancer
+++ b/lib/neutron_plugins/services/loadbalancer
@@ -10,11 +10,8 @@
LBAAS_PLUGIN=neutron.services.loadbalancer.plugin.LoadBalancerPlugin
function neutron_agent_lbaas_install_agent_packages {
- if is_ubuntu || is_fedora; then
+ if is_ubuntu || is_fedora || is_suse; then
install_package haproxy
- elif is_suse; then
- ### FIXME: Find out if package can be pushed to Factory
- echo "HAProxy packages can be installed from server:http project in OBS"
fi
}
diff --git a/lib/nova b/lib/nova
index 0005090..fa57432 100644
--- a/lib/nova
+++ b/lib/nova
@@ -44,11 +44,20 @@
NOVA_API_PASTE_INI=${NOVA_API_PASTE_INI:-$NOVA_CONF_DIR/api-paste.ini}
+if is_ssl_enabled_service "nova" || is_service_enabled tls-proxy; then
+ NOVA_SERVICE_PROTOCOL="https"
+ EC2_SERVICE_PROTOCOL="https"
+else
+ EC2_SERVICE_PROTOCOL="http"
+fi
+
# Public facing bits
NOVA_SERVICE_HOST=${NOVA_SERVICE_HOST:-$SERVICE_HOST}
NOVA_SERVICE_PORT=${NOVA_SERVICE_PORT:-8774}
NOVA_SERVICE_PORT_INT=${NOVA_SERVICE_PORT_INT:-18774}
NOVA_SERVICE_PROTOCOL=${NOVA_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}
+EC2_SERVICE_PORT=${EC2_SERVICE_PORT:-8773}
+EC2_SERVICE_PORT_INT=${EC2_SERVICE_PORT_INT:-18773}
# Support entry points installation of console scripts
if [[ -d $NOVA_DIR/bin ]]; then
@@ -375,9 +384,9 @@
"ec2" "EC2 Compatibility Layer")
get_or_create_endpoint $ec2_service \
"$REGION_NAME" \
- "http://$SERVICE_HOST:8773/services/Cloud" \
- "http://$SERVICE_HOST:8773/services/Admin" \
- "http://$SERVICE_HOST:8773/services/Cloud"
+ "$EC2_SERVICE_PROTOCOL://$SERVICE_HOST:8773/services/Cloud" \
+ "$EC2_SERVICE_PROTOCOL://$SERVICE_HOST:8773/services/Admin" \
+ "$EC2_SERVICE_PROTOCOL://$SERVICE_HOST:8773/services/Cloud"
fi
fi
@@ -404,7 +413,6 @@
rm -f $NOVA_CONF
iniset $NOVA_CONF DEFAULT verbose "True"
iniset $NOVA_CONF DEFAULT debug "$ENABLE_DEBUG_LOG_LEVEL"
- iniset $NOVA_CONF DEFAULT auth_strategy "keystone"
iniset $NOVA_CONF DEFAULT allow_resize_to_same_host "True"
iniset $NOVA_CONF DEFAULT allow_migrate_to_same_host "True"
iniset $NOVA_CONF DEFAULT api_paste_config "$NOVA_API_PASTE_INI"
@@ -412,7 +420,6 @@
iniset $NOVA_CONF DEFAULT scheduler_driver "$SCHEDULER"
iniset $NOVA_CONF DEFAULT dhcpbridge_flagfile "$NOVA_CONF"
iniset $NOVA_CONF DEFAULT force_dhcp_release "True"
- iniset $NOVA_CONF DEFAULT fixed_range ""
iniset $NOVA_CONF DEFAULT default_floating_pool "$PUBLIC_NETWORK_NAME"
iniset $NOVA_CONF DEFAULT s3_host "$SERVICE_HOST"
iniset $NOVA_CONF DEFAULT s3_port "$S3_SERVICE_PORT"
@@ -441,6 +448,15 @@
configure_auth_token_middleware $NOVA_CONF nova $NOVA_AUTH_CACHE_DIR
fi
+ if is_service_enabled cinder; then
+ if is_ssl_enabled_service "cinder" || is_service_enabled tls-proxy; then
+ CINDER_SERVICE_HOST=${CINDER_SERVICE_HOST:-$SERVICE_HOST}
+ CINDER_SERVICE_PORT=${CINDER_SERVICE_PORT:-8776}
+ iniset $NOVA_CONF cinder endpoint_template "https://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v1/%(project_id)s"
+ iniset $NOVA_CONF cinder ca_certificates_file $SSL_BUNDLE_FILE
+ fi
+ fi
+
if [ -n "$NOVA_STATE_PATH" ]; then
iniset $NOVA_CONF DEFAULT state_path "$NOVA_STATE_PATH"
iniset $NOVA_CONF DEFAULT lock_path "$NOVA_STATE_PATH"
@@ -508,12 +524,31 @@
fi
iniset $NOVA_CONF DEFAULT ec2_dmz_host "$EC2_DMZ_HOST"
+ iniset $NOVA_CONF DEFAULT keystone_ec2_url $KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_SERVICE_PORT/v2.0/ec2tokens
iniset_rpc_backend nova $NOVA_CONF DEFAULT
- iniset $NOVA_CONF glance api_servers "$GLANCE_HOSTPORT"
+ iniset $NOVA_CONF glance api_servers "${GLANCE_SERVICE_PROTOCOL}://${GLANCE_HOSTPORT}"
- iniset $NOVA_CONF DEFAULT osci_compute_workers "$API_WORKERS"
+ iniset $NOVA_CONF DEFAULT osapi_compute_workers "$API_WORKERS"
iniset $NOVA_CONF DEFAULT ec2_workers "$API_WORKERS"
iniset $NOVA_CONF DEFAULT metadata_workers "$API_WORKERS"
+
+ if is_ssl_enabled_service glance || is_service_enabled tls-proxy; then
+ iniset $NOVA_CONF DEFAULT glance_protocol https
+ fi
+
+ # Register SSL certificates if provided
+ if is_ssl_enabled_service nova; then
+ ensure_certificates NOVA
+
+ iniset $NOVA_CONF DEFAULT ssl_cert_file "$NOVA_SSL_CERT"
+ iniset $NOVA_CONF DEFAULT ssl_key_file "$NOVA_SSL_KEY"
+
+ iniset $NOVA_CONF DEFAULT enabled_ssl_apis "$NOVA_ENABLED_APIS"
+ fi
+
+ if is_service_enabled tls-proxy; then
+ iniset $NOVA_CONF DEFAULT ec2_listen_port $EC2_SERVICE_PORT_INT
+ fi
}
function init_nova_cells {
@@ -642,19 +677,22 @@
function start_nova_api {
# Get right service port for testing
local service_port=$NOVA_SERVICE_PORT
+ local service_protocol=$NOVA_SERVICE_PROTOCOL
if is_service_enabled tls-proxy; then
service_port=$NOVA_SERVICE_PORT_INT
+ service_protocol="http"
fi
run_process n-api "$NOVA_BIN_DIR/nova-api"
echo "Waiting for nova-api to start..."
- if ! wait_for_service $SERVICE_TIMEOUT http://$SERVICE_HOST:$service_port; then
+ if ! wait_for_service $SERVICE_TIMEOUT $service_protocol://$SERVICE_HOST:$service_port; then
die $LINENO "nova-api did not start"
fi
# Start proxies if enabled
if is_service_enabled tls-proxy; then
start_tls_proxy '*' $NOVA_SERVICE_PORT $NOVA_SERVICE_HOST $NOVA_SERVICE_PORT_INT &
+ start_tls_proxy '*' $EC2_SERVICE_PORT $NOVA_SERVICE_HOST $EC2_SERVICE_PORT_INT &
fi
}
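Several services now follow the same pattern in their start functions: when `tls-proxy` is enabled, the health check hits the internal plaintext port directly while the public port serves TLS. A self-contained sketch with a stubbed `is_service_enabled` (both the stub and the localhost URL are illustrative, not DevStack's real helper):

```shell
NOVA_SERVICE_PORT=8774
NOVA_SERVICE_PORT_INT=18774
NOVA_SERVICE_PROTOCOL=https

# Stub: the real helper inspects ENABLED_SERVICES with extra aliasing rules
is_service_enabled() { [[ ",$ENABLED_SERVICES," == *",$1,"* ]]; }

ENABLED_SERVICES="n-api,tls-proxy"

service_port=$NOVA_SERVICE_PORT
service_protocol=$NOVA_SERVICE_PROTOCOL
if is_service_enabled tls-proxy; then
    # Probe the service itself, not the proxy, over plain HTTP
    service_port=$NOVA_SERVICE_PORT_INT
    service_protocol=http
fi
echo "$service_protocol://localhost:$service_port"
```

With `tls-proxy` in the service list this prints `http://localhost:18774`; without it, the public https port is probed instead.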
diff --git a/lib/oslo b/lib/oslo
index e5fa37e..a20aa14 100644
--- a/lib/oslo
+++ b/lib/oslo
@@ -20,21 +20,21 @@
# Defaults
# --------
-CLIFF_DIR=$DEST/cliff
-OSLOCFG_DIR=$DEST/oslo.config
-OSLOCON_DIR=$DEST/oslo.concurrency
-OSLODB_DIR=$DEST/oslo.db
-OSLOI18N_DIR=$DEST/oslo.i18n
-OSLOLOG_DIR=$DEST/oslo.log
-OSLOMID_DIR=$DEST/oslo.middleware
-OSLOMSG_DIR=$DEST/oslo.messaging
-OSLORWRAP_DIR=$DEST/oslo.rootwrap
-OSLOSERIALIZATION_DIR=$DEST/oslo.serialization
-OSLOUTILS_DIR=$DEST/oslo.utils
-OSLOVMWARE_DIR=$DEST/oslo.vmware
-PYCADF_DIR=$DEST/pycadf
-STEVEDORE_DIR=$DEST/stevedore
-TASKFLOW_DIR=$DEST/taskflow
+GITDIR["cliff"]=$DEST/cliff
+GITDIR["oslo.config"]=$DEST/oslo.config
+GITDIR["oslo.concurrency"]=$DEST/oslo.concurrency
+GITDIR["oslo.db"]=$DEST/oslo.db
+GITDIR["oslo.i18n"]=$DEST/oslo.i18n
+GITDIR["oslo.log"]=$DEST/oslo.log
+GITDIR["oslo.middleware"]=$DEST/oslo.middleware
+GITDIR["oslo.messaging"]=$DEST/oslo.messaging
+GITDIR["oslo.rootwrap"]=$DEST/oslo.rootwrap
+GITDIR["oslo.serialization"]=$DEST/oslo.serialization
+GITDIR["oslo.utils"]=$DEST/oslo.utils
+GITDIR["oslo.vmware"]=$DEST/oslo.vmware
+GITDIR["pycadf"]=$DEST/pycadf
+GITDIR["stevedore"]=$DEST/stevedore
+GITDIR["taskflow"]=$DEST/taskflow
# Support entry points installation of console scripts
OSLO_BIN_DIR=$(get_python_exec_prefix)
@@ -42,52 +42,31 @@
# Entry Points
# ------------
+function _do_install_oslo_lib {
+ local name=$1
+ if use_library_from_git "$name"; then
+ git_clone_by_name "$name"
+ setup_lib "$name"
+ fi
+}
+
# install_oslo() - Collect source and prepare
function install_oslo {
- git_clone $CLIFF_REPO $CLIFF_DIR $CLIFF_BRANCH
- setup_install $CLIFF_DIR
-
- git_clone $OSLOI18N_REPO $OSLOI18N_DIR $OSLOI18N_BRANCH
- setup_install $OSLOI18N_DIR
-
- git_clone $OSLOUTILS_REPO $OSLOUTILS_DIR $OSLOUTILS_BRANCH
- setup_install $OSLOUTILS_DIR
-
- git_clone $OSLOSERIALIZATION_REPO $OSLOSERIALIZATION_DIR $OSLOSERIALIZATION_BRANCH
- setup_install $OSLOSERIALIZATION_DIR
-
- git_clone $OSLOCFG_REPO $OSLOCFG_DIR $OSLOCFG_BRANCH
- setup_install $OSLOCFG_DIR
-
- git_clone $OSLOCON_REPO $OSLOCON_DIR $OSLOCON_BRANCH
- setup_install $OSLOCON_DIR
-
- git_clone $OSLOLOG_REPO $OSLOLOG_DIR $OSLOLOG_BRANCH
- setup_install $OSLOLOG_DIR
-
- git_clone $OSLOMID_REPO $OSLOMID_DIR $OSLOMID_BRANCH
- setup_install $OSLOMID_DIR
-
- git_clone $OSLOMSG_REPO $OSLOMSG_DIR $OSLOMSG_BRANCH
- setup_install $OSLOMSG_DIR
-
- git_clone $OSLORWRAP_REPO $OSLORWRAP_DIR $OSLORWRAP_BRANCH
- setup_install $OSLORWRAP_DIR
-
- git_clone $OSLODB_REPO $OSLODB_DIR $OSLODB_BRANCH
- setup_install $OSLODB_DIR
-
- git_clone $OSLOVMWARE_REPO $OSLOVMWARE_DIR $OSLOVMWARE_BRANCH
- setup_install $OSLOVMWARE_DIR
-
- git_clone $PYCADF_REPO $PYCADF_DIR $PYCADF_BRANCH
- setup_install $PYCADF_DIR
-
- git_clone $STEVEDORE_REPO $STEVEDORE_DIR $STEVEDORE_BRANCH
- setup_install $STEVEDORE_DIR
-
- git_clone $TASKFLOW_REPO $TASKFLOW_DIR $TASKFLOW_BRANCH
- setup_install $TASKFLOW_DIR
+ _do_install_oslo_lib "cliff"
+ _do_install_oslo_lib "oslo.i18n"
+ _do_install_oslo_lib "oslo.utils"
+ _do_install_oslo_lib "oslo.serialization"
+ _do_install_oslo_lib "oslo.config"
+ _do_install_oslo_lib "oslo.concurrency"
+ _do_install_oslo_lib "oslo.log"
+ _do_install_oslo_lib "oslo.middleware"
+ _do_install_oslo_lib "oslo.messaging"
+ _do_install_oslo_lib "oslo.rootwrap"
+ _do_install_oslo_lib "oslo.db"
+ _do_install_oslo_lib "oslo.vmware"
+ _do_install_oslo_lib "pycadf"
+ _do_install_oslo_lib "stevedore"
+ _do_install_oslo_lib "taskflow"
}
# Restore xtrace
diff --git a/lib/rpc_backend b/lib/rpc_backend
index f2d2859..de82fe1 100644
--- a/lib/rpc_backend
+++ b/lib/rpc_backend
@@ -44,7 +44,7 @@
local rpc_backend_cnt=0
for svc in qpid zeromq rabbit; do
is_service_enabled $svc &&
- ((rpc_backend_cnt++))
+ (( rpc_backend_cnt++ )) || true
done
if [ "$rpc_backend_cnt" -gt 1 ]; then
echo "ERROR: only one rpc backend may be enabled,"
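The `|| true` added above is not cosmetic: `stack.sh` runs under `set -o errexit`, and an arithmetic command whose expression evaluates to zero returns a failure status. For `(( rpc_backend_cnt++ ))` starting at 0, the post-increment yields 0 on the first pass and would abort the script. A minimal demonstration:

```shell
# Under errexit, (( count++ )) with count=0 evaluates the expression to 0,
# which the arithmetic command reports as failure; || true absorbs that status.
set -o errexit
count=0
(( count++ )) || true   # expression evaluated to 0; would abort without || true
(( count++ )) || true
echo "count=$count"     # count=2
```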
diff --git a/lib/swift b/lib/swift
index 3c31dd2..21ed920 100644
--- a/lib/swift
+++ b/lib/swift
@@ -29,6 +29,10 @@
# Defaults
# --------
+if is_ssl_enabled_service "s-proxy" || is_service_enabled tls-proxy; then
+ SWIFT_SERVICE_PROTOCOL="https"
+fi
+
# Set up default directories
SWIFT_DIR=$DEST/swift
SWIFTCLIENT_DIR=$DEST/python-swiftclient
@@ -36,6 +40,9 @@
SWIFT_APACHE_WSGI_DIR=${SWIFT_APACHE_WSGI_DIR:-/var/www/swift}
SWIFT3_DIR=$DEST/swift3
+SWIFT_SERVICE_PROTOCOL=${SWIFT_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}
+SWIFT_DEFAULT_BIND_PORT_INT=${SWIFT_DEFAULT_BIND_PORT_INT:-8081}
+
# TODO: add logging to different location.
# Set ``SWIFT_DATA_DIR`` to the location of swift drives and objects.
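The ordering above is deliberate: the early `SWIFT_SERVICE_PROTOCOL="https"` assignment survives the later `${SWIFT_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}` default, because `:-` expansion only fills in when the variable is unset or empty. In isolation:

```shell
# ${VAR:-default} keeps a value assigned earlier; the default only applies
# when VAR is unset or empty, so the https override above is preserved.
SWIFT_SERVICE_PROTOCOL="https"                          # set by the SSL check
SERVICE_PROTOCOL="http"
SWIFT_SERVICE_PROTOCOL=${SWIFT_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}
echo "$SWIFT_SERVICE_PROTOCOL"                          # https
```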
@@ -244,7 +251,7 @@
# This function generates an object/container/account configuration
# emulating 4 nodes on different ports
-function generate_swift_config {
+function generate_swift_config_services {
local swift_node_config=$1
local node_id=$2
local bind_port=$3
@@ -279,6 +286,10 @@
iniuncomment ${swift_node_config} ${server_type}-replicator vm_test_mode
iniset ${swift_node_config} ${server_type}-replicator vm_test_mode yes
+
+ # Using sed rather than iniset/iniuncomment because we want a global
+ # modification and to make sure it works for new sections.
+ sed -i -e "s,#[ ]*recon_cache_path .*,recon_cache_path = ${SWIFT_DATA_DIR}/cache," ${swift_node_config}
}
@@ -334,7 +345,18 @@
iniset ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT log_level DEBUG
iniuncomment ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT bind_port
- iniset ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT bind_port ${SWIFT_DEFAULT_BIND_PORT:-8080}
+ if is_service_enabled tls-proxy; then
+ iniset ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT bind_port ${SWIFT_DEFAULT_BIND_PORT_INT}
+ else
+ iniset ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT bind_port ${SWIFT_DEFAULT_BIND_PORT:-8080}
+ fi
+
+ if is_ssl_enabled_service s-proxy; then
+ ensure_certificates SWIFT
+
+ iniset ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT cert_file "$SWIFT_SSL_CERT"
+ iniset ${SWIFT_CONFIG_PROXY_SERVER} DEFAULT key_file "$SWIFT_SSL_KEY"
+ fi
# Devstack is commonly run in a small slow environment, so bump the
# timeouts up.
@@ -401,7 +423,7 @@
auth_port = ${KEYSTONE_AUTH_PORT}
auth_host = ${KEYSTONE_AUTH_HOST}
auth_protocol = ${KEYSTONE_AUTH_PROTOCOL}
-cafile = ${KEYSTONE_SSL_CA}
+cafile = ${SSL_BUNDLE_FILE}
auth_token = ${SERVICE_TOKEN}
admin_token = ${SERVICE_TOKEN}
@@ -418,23 +440,18 @@
for node_number in ${SWIFT_REPLICAS_SEQ}; do
local swift_node_config=${SWIFT_CONF_DIR}/object-server/${node_number}.conf
cp ${SWIFT_DIR}/etc/object-server.conf-sample ${swift_node_config}
- generate_swift_config ${swift_node_config} ${node_number} $(( OBJECT_PORT_BASE + 10 * (node_number - 1) )) object
+ generate_swift_config_services ${swift_node_config} ${node_number} $(( OBJECT_PORT_BASE + 10 * (node_number - 1) )) object
iniset ${swift_node_config} filter:recon recon_cache_path ${SWIFT_DATA_DIR}/cache
- # Using a sed and not iniset/iniuncomment because we want to a global
- # modification and make sure it works for new sections.
- sed -i -e "s,#[ ]*recon_cache_path .*,recon_cache_path = ${SWIFT_DATA_DIR}/cache," ${swift_node_config}
swift_node_config=${SWIFT_CONF_DIR}/container-server/${node_number}.conf
cp ${SWIFT_DIR}/etc/container-server.conf-sample ${swift_node_config}
- generate_swift_config ${swift_node_config} ${node_number} $(( CONTAINER_PORT_BASE + 10 * (node_number - 1) )) container
+ generate_swift_config_services ${swift_node_config} ${node_number} $(( CONTAINER_PORT_BASE + 10 * (node_number - 1) )) container
iniuncomment ${swift_node_config} app:container-server allow_versions
iniset ${swift_node_config} app:container-server allow_versions "true"
- sed -i -e "s,#[ ]*recon_cache_path .*,recon_cache_path = ${SWIFT_DATA_DIR}/cache," ${swift_node_config}
swift_node_config=${SWIFT_CONF_DIR}/account-server/${node_number}.conf
cp ${SWIFT_DIR}/etc/account-server.conf-sample ${swift_node_config}
- generate_swift_config ${swift_node_config} ${node_number} $(( ACCOUNT_PORT_BASE + 10 * (node_number - 1) )) account
- sed -i -e "s,#[ ]*recon_cache_path .*,recon_cache_path = ${SWIFT_DATA_DIR}/cache," ${swift_node_config}
+ generate_swift_config_services ${swift_node_config} ${node_number} $(( ACCOUNT_PORT_BASE + 10 * (node_number - 1) )) account
done
# Set new accounts in tempauth to match keystone tenant/user (to make testing easier)
@@ -560,9 +577,9 @@
"object-store" "Swift Service")
get_or_create_endpoint $swift_service \
"$REGION_NAME" \
- "http://$SERVICE_HOST:8080/v1/AUTH_\$(tenant_id)s" \
- "http://$SERVICE_HOST:8080" \
- "http://$SERVICE_HOST:8080/v1/AUTH_\$(tenant_id)s"
+ "$SWIFT_SERVICE_PROTOCOL://$SERVICE_HOST:8080/v1/AUTH_\$(tenant_id)s" \
+ "$SWIFT_SERVICE_PROTOCOL://$SERVICE_HOST:8080" \
+ "$SWIFT_SERVICE_PROTOCOL://$SERVICE_HOST:8080/v1/AUTH_\$(tenant_id)s"
fi
local swift_tenant_test1=$(get_or_create_project swifttenanttest1)
@@ -675,6 +692,10 @@
for type in proxy ${todo}; do
swift-init --run-dir=${SWIFT_DATA_DIR}/run ${type} stop || true
done
+ if is_service_enabled tls-proxy; then
+ local proxy_port=${SWIFT_DEFAULT_BIND_PORT:-8080}
+ start_tls_proxy '*' $proxy_port $SERVICE_HOST $SWIFT_DEFAULT_BIND_PORT_INT &
+ fi
run_process s-proxy "$SWIFT_DIR/bin/swift-proxy-server ${SWIFT_CONF_DIR}/proxy-server.conf -v"
if [[ ${SWIFT_REPLICAS} == 1 ]]; then
for type in object container account; do
diff --git a/lib/tempest b/lib/tempest
index 0dfeb86..d677c7e 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -314,7 +314,7 @@
iniset $TEMPEST_CONFIG network-feature-disabled api_extensions ${DISABLE_NETWORK_API_EXTENSIONS}
# boto
- iniset $TEMPEST_CONFIG boto ec2_url "http://$SERVICE_HOST:8773/services/Cloud"
+ iniset $TEMPEST_CONFIG boto ec2_url "$EC2_SERVICE_PROTOCOL://$SERVICE_HOST:8773/services/Cloud"
iniset $TEMPEST_CONFIG boto s3_url "http://$SERVICE_HOST:${S3_SERVICE_PORT:-3333}"
iniset $TEMPEST_CONFIG boto s3_materials_path "$BOTO_MATERIALS_PATH"
iniset $TEMPEST_CONFIG boto ari_manifest cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-initrd.manifest.xml
diff --git a/lib/tls b/lib/tls
index 061c1ca..15e8692 100644
--- a/lib/tls
+++ b/lib/tls
@@ -14,6 +14,7 @@
#
# - configure_CA
# - init_CA
+# - cleanup_CA
# - configure_proxy
# - start_tls_proxy
@@ -27,6 +28,7 @@
# - start_tls_proxy HOST_IP 5000 localhost 5000
# - ensure_certificates
# - is_ssl_enabled_service
+# - enable_mod_ssl
# Defaults
# --------
@@ -34,14 +36,9 @@
if is_service_enabled tls-proxy; then
# TODO(dtroyer): revisit this below after the search for HOST_IP has been done
TLS_IP=${TLS_IP:-$SERVICE_IP}
-
- # Set the default ``SERVICE_PROTOCOL`` for TLS
- SERVICE_PROTOCOL=https
fi
-# Make up a hostname for cert purposes
-# will be added to /etc/hosts?
-DEVSTACK_HOSTNAME=secure.devstack.org
+DEVSTACK_HOSTNAME=$(hostname -f)
DEVSTACK_CERT_NAME=devstack-cert
DEVSTACK_CERT=$DATA_DIR/$DEVSTACK_CERT_NAME.pem
@@ -209,6 +206,29 @@
# Create the CA bundle
cat $ROOT_CA_DIR/cacert.pem $INT_CA_DIR/cacert.pem >>$INT_CA_DIR/ca-chain.pem
+ cat $INT_CA_DIR/ca-chain.pem >> $SSL_BUNDLE_FILE
+
+ if is_fedora; then
+ sudo cp $INT_CA_DIR/ca-chain.pem /usr/share/pki/ca-trust-source/anchors/devstack-chain.pem
+ sudo update-ca-trust
+ elif is_ubuntu; then
+ sudo cp $INT_CA_DIR/ca-chain.pem /usr/local/share/ca-certificates/devstack-int.crt
+ sudo cp $ROOT_CA_DIR/cacert.pem /usr/local/share/ca-certificates/devstack-root.crt
+ sudo update-ca-certificates
+ fi
+}
+
+# Clean up the CA files
+# cleanup_CA
+function cleanup_CA {
+ if is_fedora; then
+ sudo rm -f /usr/share/pki/ca-trust-source/anchors/devstack-chain.pem
+ sudo update-ca-trust
+ elif is_ubuntu; then
+ sudo rm -f /usr/local/share/ca-certificates/devstack-int.crt
+ sudo rm -f /usr/local/share/ca-certificates/devstack-root.crt
+ sudo update-ca-certificates
+ fi
}
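The trust-store installation above is what lets clients validate devstack-issued certificates against the system bundle. The underlying check can be reproduced with a throwaway CA (all paths here are scratch, not devstack's):

```shell
# Self-contained sketch: mint a scratch CA, sign a server cert with it, and
# verify the cert against the CA the same way clients verify against the
# installed devstack chain. Nothing here touches the system trust store.
workdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=scratch-ca" \
    -keyout "$workdir/ca.key" -out "$workdir/ca.pem" 2>/dev/null
openssl req -newkey rsa:2048 -nodes -subj "/CN=localhost" \
    -keyout "$workdir/server.key" -out "$workdir/server.csr" 2>/dev/null
openssl x509 -req -in "$workdir/server.csr" -CA "$workdir/ca.pem" \
    -CAkey "$workdir/ca.key" -CAcreateserial -days 1 \
    -out "$workdir/server.crt" 2>/dev/null
openssl verify -CAfile "$workdir/ca.pem" "$workdir/server.crt"
```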
# Create an initial server cert
@@ -331,6 +351,9 @@
function is_ssl_enabled_service {
local services=$@
local service=""
+ if [ "$USE_SSL" == "False" ]; then
+ return 1
+ fi
for service in ${services}; do
[[ ,${SSL_ENABLED_SERVICES}, =~ ,${service}, ]] && return 0
done
@@ -345,8 +368,12 @@
# The function expects to find a certificate, key and CA certificate in the
# variables {service}_SSL_CERT, {service}_SSL_KEY and {service}_SSL_CA. For
# example for keystone this would be KEYSTONE_SSL_CERT, KEYSTONE_SSL_KEY and
-# KEYSTONE_SSL_CA. If it does not find these certificates the program will
-# quit.
+# KEYSTONE_SSL_CA.
+#
+# If it does not find these certificates then the devstack-issued server
+# certificate, key and CA certificate will be associated with the service.
+#
+# If only some of the variables are provided then the function will quit.
function ensure_certificates {
local service=$1
@@ -358,7 +385,15 @@
local key=${!key_var}
local ca=${!ca_var}
- if [[ -z "$cert" || -z "$key" || -z "$ca" ]]; then
+ if [[ -z "$cert" && -z "$key" && -z "$ca" ]]; then
+ local cert="$INT_CA_DIR/$DEVSTACK_CERT_NAME.crt"
+ local key="$INT_CA_DIR/private/$DEVSTACK_CERT_NAME.key"
+ local ca="$INT_CA_DIR/ca-chain.pem"
+ eval ${service}_SSL_CERT=\$cert
+ eval ${service}_SSL_KEY=\$key
+ eval ${service}_SSL_CA=\$ca
+ return # the CA certificate is already in the bundle
+ elif [[ -z "$cert" || -z "$key" || -z "$ca" ]]; then
die $LINENO "Missing either the ${cert_var} ${key_var} or ${ca_var}" \
"variable to enable SSL for ${service}"
fi
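`ensure_certificates` reaches the per-service variables through indirect expansion (`${!var}`) and writes the fallbacks back with `eval`. The pattern in isolation (the paths below are illustrative):

```shell
# Read a dynamically-named variable with ${!name} and write one back with
# eval, mirroring the cert/key/ca handling in ensure_certificates above.
KEYSTONE_SSL_CERT="/etc/keystone/ssl/cert.pem"
service="KEYSTONE"
cert_var="${service}_SSL_CERT"
cert=${!cert_var}                                     # indirect read
eval ${service}_SSL_KEY="/etc/keystone/ssl/key.pem"   # indirect write
echo "$cert $KEYSTONE_SSL_KEY"
```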
@@ -366,6 +401,21 @@
cat $ca >> $SSL_BUNDLE_FILE
}
+# Enable the mod_ssl plugin in Apache
+function enable_mod_ssl {
+ echo "Enabling mod_ssl"
+
+ if is_ubuntu; then
+ sudo a2enmod ssl
+ elif is_fedora; then
+ # Fedora enables mod_ssl by default
+ :
+ fi
+ if ! sudo `which httpd || which apache2ctl` -M | grep -w -q ssl_module; then
+ die $LINENO "mod_ssl is not enabled in apache2/httpd, please check for it manually and run stack.sh again"
+ fi
+}
+
# Proxy Functions
# ===============
diff --git a/lib/zaqar b/lib/zaqar
index 93b727e..b8570eb 100644
--- a/lib/zaqar
+++ b/lib/zaqar
@@ -20,6 +20,7 @@
# start_zaqar
# stop_zaqar
# cleanup_zaqar
+# cleanup_zaqar_mongodb
# Save trace setting
XTRACE=$(set +o | grep xtrace)
@@ -72,9 +73,17 @@
return 1
}
-# cleanup_zaqar() - Remove residual data files, anything left over from previous
-# runs that a clean run would need to clean up
+# cleanup_zaqar() - Clean up general state from previous runs
+# and any storage-specific leftovers.
function cleanup_zaqar {
+ if [ "$ZAQAR_BACKEND" = 'mongodb' ] ; then
+ cleanup_zaqar_mongodb
+ fi
+}
+
+# cleanup_zaqar_mongodb() - Remove residual data files, anything left over from previous
+# runs that a clean run would need to clean up
+function cleanup_zaqar_mongodb {
if ! timeout $SERVICE_TIMEOUT sh -c "while ! mongo zaqar --eval 'db.dropDatabase();'; do sleep 1; done"; then
die $LINENO "Mongo DB did not start"
else
@@ -116,8 +125,25 @@
iniset $ZAQAR_CONF drivers storage mongodb
iniset $ZAQAR_CONF 'drivers:storage:mongodb' uri mongodb://localhost:27017/zaqar
configure_mongodb
- cleanup_zaqar
+ elif [ "$ZAQAR_BACKEND" = 'redis' ] ; then
+ iniset $ZAQAR_CONF drivers storage redis
+ iniset $ZAQAR_CONF 'drivers:storage:redis' uri redis://localhost:6379
+ configure_redis
fi
+
+ cleanup_zaqar
+}
+
+function configure_redis {
+ if is_ubuntu; then
+ install_package redis-server
+ elif is_fedora; then
+ install_package redis
+ else
+ exit_distro_not_supported "redis installation"
+ fi
+
+ install_package python-redis
}
function configure_mongodb {
diff --git a/stack.sh b/stack.sh
index c20e610..0cec623 100755
--- a/stack.sh
+++ b/stack.sh
@@ -34,6 +34,9 @@
# Make sure umask is sane
umask 022
+# Not all distros have sbin in PATH for regular users.
+PATH=$PATH:/usr/local/sbin:/usr/sbin:/sbin
+
# Keep track of the devstack directory
TOP_DIR=$(cd $(dirname "$0") && pwd)
@@ -177,9 +180,6 @@
exit 1
fi
-# Set up logging level
-VERBOSE=$(trueorfalse True $VERBOSE)
-
# Configure sudo
# --------------
@@ -285,6 +285,182 @@
fi
+# Configure Logging
+# -----------------
+
+# Set up logging level
+VERBOSE=$(trueorfalse True $VERBOSE)
+
+# Draw a spinner so the user knows something is happening
+function spinner {
+ local delay=0.75
+ local spinstr='/-\|'
+ printf "..." >&3
+ while true; do
+ local temp=${spinstr#?}
+ printf "[%c]" "$spinstr" >&3
+ local spinstr=$temp${spinstr%"$temp"}
+ sleep $delay
+ printf "\b\b\b" >&3
+ done
+}
+
+function kill_spinner {
+ if [ ! -z "$LAST_SPINNER_PID" ]; then
+ kill >/dev/null 2>&1 $LAST_SPINNER_PID
+ printf "\b\b\bdone\n" >&3
+ fi
+}
+
+# Echo text to the log file, summary log file and stdout
+# echo_summary "something to say"
+function echo_summary {
+ if [[ -t 3 && "$VERBOSE" != "True" ]]; then
+ kill_spinner
+ echo -n -e $@ >&6
+ spinner &
+ LAST_SPINNER_PID=$!
+ else
+ echo -e $@ >&6
+ fi
+}
+
+# Echo text only to stdout, no log files
+# echo_nolog "something not for the logs"
+function echo_nolog {
+ echo $@ >&3
+}
+
+if is_fedora && [[ $DISTRO == "rhel6" ]]; then
+ # poor old python2.6 doesn't have argparse by default, which
+ # outfilter.py uses
+ is_package_installed python-argparse || install_package python-argparse
+fi
+
+# Set up logging for ``stack.sh``
+# Set ``LOGFILE`` to turn on logging
+# Append '.xxxxxxxx' to the given name to maintain history
+# where 'xxxxxxxx' is a representation of the date the file was created
+TIMESTAMP_FORMAT=${TIMESTAMP_FORMAT:-"%F-%H%M%S"}
+if [[ -n "$LOGFILE" || -n "$SCREEN_LOGDIR" ]]; then
+ LOGDAYS=${LOGDAYS:-7}
+ CURRENT_LOG_TIME=$(date "+$TIMESTAMP_FORMAT")
+fi
+
+if [[ -n "$LOGFILE" ]]; then
+ # First clean up old log files. Use the user-specified ``LOGFILE``
+ # as the template to search for, appending '.*' to match the date
+ # we added on earlier runs.
+ LOGDIR=$(dirname "$LOGFILE")
+ LOGFILENAME=$(basename "$LOGFILE")
+ mkdir -p $LOGDIR
+ find $LOGDIR -maxdepth 1 -name $LOGFILENAME.\* -mtime +$LOGDAYS -exec rm {} \;
+ LOGFILE=$LOGFILE.${CURRENT_LOG_TIME}
+ SUMFILE=${LOGFILE}.summary
+
+ # Redirect output according to config
+
+ # Set fd 3 to a copy of stdout. So we can set fd 1 without losing
+ # stdout later.
+ exec 3>&1
+ if [[ "$VERBOSE" == "True" ]]; then
+ # Set fd 1 and 2 to write the log file
+ exec 1> >( $TOP_DIR/tools/outfilter.py -v -o "${LOGFILE}" ) 2>&1
+ # Set fd 6 to summary log file
+ exec 6> >( $TOP_DIR/tools/outfilter.py -o "${SUMFILE}" )
+ else
+ # Set fd 1 and 2 to primary logfile
+ exec 1> >( $TOP_DIR/tools/outfilter.py -o "${LOGFILE}" ) 2>&1
+ # Set fd 6 to summary logfile and stdout
+ exec 6> >( $TOP_DIR/tools/outfilter.py -v -o "${SUMFILE}" >&3 )
+ fi
+
+ echo_summary "stack.sh log $LOGFILE"
+ # Specified logfile name always links to the most recent log
+ ln -sf $LOGFILE $LOGDIR/$LOGFILENAME
+ ln -sf $SUMFILE $LOGDIR/$LOGFILENAME.summary
+else
+ # Set up output redirection without log files
+ # Set fd 3 to a copy of stdout. So we can set fd 1 without losing
+ # stdout later.
+ exec 3>&1
+ if [[ "$VERBOSE" != "True" ]]; then
+ # Throw away stdout and stderr
+ exec 1>/dev/null 2>&1
+ fi
+ # Always send summary fd to original stdout
+ exec 6> >( $TOP_DIR/tools/outfilter.py -v >&3 )
+fi
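The redirection logic above hinges on duplicating the original stdout onto fd 3 before fd 1 is repointed, so user-facing messages can bypass the log. A compact sketch of the pattern:

```shell
# Save the original stdout on fd 3, repoint stdout at a log file, and keep
# fd 3 available for messages that must still reach the user.
logfile=$(mktemp)
exec 3>&1              # fd 3 duplicates the original stdout
exec 1>"$logfile"      # stdout now writes to the log
echo "captured in the log"
echo "still reaches the user" >&3
exec 1>&3              # restore stdout
grep -c "captured" "$logfile"
```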
+
+# Set up logging of screen windows
+# Set ``SCREEN_LOGDIR`` to turn on logging of screen windows to the
+# directory specified in ``SCREEN_LOGDIR``, we will log to the file
+# ``screen-$SERVICE_NAME-$TIMESTAMP.log`` in that dir and have a link
+# ``screen-$SERVICE_NAME.log`` to the latest log file.
+# Logs are kept for as long as specified in ``LOGDAYS``.
+if [[ -n "$SCREEN_LOGDIR" ]]; then
+
+ # Make sure the directory is created
+ if [[ -d "$SCREEN_LOGDIR" ]]; then
+ # Clean up the old logs
+ find $SCREEN_LOGDIR -maxdepth 1 -name screen-\*.log -mtime +$LOGDAYS -exec rm {} \;
+ else
+ mkdir -p $SCREEN_LOGDIR
+ fi
+fi
+
+
+# Configure Error Traps
+# ---------------------
+
+# Kill background processes on exit
+trap exit_trap EXIT
+function exit_trap {
+ local r=$?
+ jobs=$(jobs -p)
+ # Only do the kill when we're logging through a process substitution,
+ # which is currently only used for the verbose log file
+ if [[ -n $jobs && -n "$LOGFILE" && "$VERBOSE" == "True" ]]; then
+ echo "exit_trap: cleaning up child processes"
+ kill 2>&1 $jobs
+ fi
+
+ # Kill the last spinner process
+ kill_spinner
+
+ if [[ $r -ne 0 ]]; then
+ echo "Error on exit"
+ if [[ -z $LOGDIR ]]; then
+ $TOP_DIR/tools/worlddump.py
+ else
+ $TOP_DIR/tools/worlddump.py -d $LOGDIR
+ fi
+ fi
+
+ exit $r
+}
+
+# Exit on any errors so that errors don't compound
+trap err_trap ERR
+function err_trap {
+ local r=$?
+ set +o xtrace
+ if [[ -n "$LOGFILE" ]]; then
+ echo "${0##*/} failed: full log in $LOGFILE"
+ else
+ echo "${0##*/} failed"
+ fi
+ exit $r
+}
+
+# Begin trapping error exit codes
+set -o errexit
+
+# Print the commands being run so that we can see the command that triggers
+# an error. It is also useful for following along as the install occurs.
+set -o xtrace
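The errexit/ERR-trap pairing above can be reduced to a few lines; a child bash is used here only so the demonstration itself keeps running after the trap fires:

```shell
# An ERR trap runs on the failing command, then errexit aborts the script;
# this mirrors the err_trap/exit pattern above in miniature.
demo=$(mktemp)
cat > "$demo" <<'EOF'
set -o errexit
trap 'echo "err_trap caught status $?"' ERR
false                 # fires the trap, then errexit aborts
echo "never reached"
EOF
msg=$(bash "$demo") || true
echo "$msg"
```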
+
+
# Common Configuration
# --------------------
@@ -325,9 +501,6 @@
# Use color for logging output (only available if syslog is not used)
LOG_COLOR=`trueorfalse True $LOG_COLOR`
-# Service startup timeout
-SERVICE_TIMEOUT=${SERVICE_TIMEOUT:-60}
-
# Reset the bundle of CA certificates
SSL_BUNDLE_FILE="$DATA_DIR/ca-bundle.pem"
rm -f $SSL_BUNDLE_FILE
@@ -340,6 +513,15 @@
# and the specified rpc backend is available on your platform.
check_rpc_backend
+# Use native SSL for servers in SSL_ENABLED_SERVICES
+USE_SSL=$(trueorfalse False $USE_SSL)
+
+# Services to enable with SSL if USE_SSL is True
+SSL_ENABLED_SERVICES="key,nova,cinder,glance,s-proxy,neutron"
+
+if is_service_enabled tls-proxy && [ "$USE_SSL" == "True" ]; then
+ die $LINENO "tls-proxy and SSL are mutually exclusive"
+fi
# Configure Projects
# ==================
@@ -494,179 +676,6 @@
fi
-# Configure logging
-# -----------------
-
-# Draw a spinner so the user knows something is happening
-function spinner {
- local delay=0.75
- local spinstr='/-\|'
- printf "..." >&3
- while [ true ]; do
- local temp=${spinstr#?}
- printf "[%c]" "$spinstr" >&3
- local spinstr=$temp${spinstr%"$temp"}
- sleep $delay
- printf "\b\b\b" >&3
- done
-}
-
-function kill_spinner {
- if [ ! -z "$LAST_SPINNER_PID" ]; then
- kill >/dev/null 2>&1 $LAST_SPINNER_PID
- printf "\b\b\bdone\n" >&3
- fi
-}
-
-# Echo text to the log file, summary log file and stdout
-# echo_summary "something to say"
-function echo_summary {
- if [[ -t 3 && "$VERBOSE" != "True" ]]; then
- kill_spinner
- echo -n -e $@ >&6
- spinner &
- LAST_SPINNER_PID=$!
- else
- echo -e $@ >&6
- fi
-}
-
-# Echo text only to stdout, no log files
-# echo_nolog "something not for the logs"
-function echo_nolog {
- echo $@ >&3
-}
-
-if [[ is_fedora && $DISTRO == "rhel6" ]]; then
- # poor old python2.6 doesn't have argparse by default, which
- # outfilter.py uses
- is_package_installed python-argparse || install_package python-argparse
-fi
-
-# Set up logging for ``stack.sh``
-# Set ``LOGFILE`` to turn on logging
-# Append '.xxxxxxxx' to the given name to maintain history
-# where 'xxxxxxxx' is a representation of the date the file was created
-TIMESTAMP_FORMAT=${TIMESTAMP_FORMAT:-"%F-%H%M%S"}
-if [[ -n "$LOGFILE" || -n "$SCREEN_LOGDIR" ]]; then
- LOGDAYS=${LOGDAYS:-7}
- CURRENT_LOG_TIME=$(date "+$TIMESTAMP_FORMAT")
-fi
-
-if [[ -n "$LOGFILE" ]]; then
- # First clean up old log files. Use the user-specified ``LOGFILE``
- # as the template to search for, appending '.*' to match the date
- # we added on earlier runs.
- LOGDIR=$(dirname "$LOGFILE")
- LOGFILENAME=$(basename "$LOGFILE")
- mkdir -p $LOGDIR
- find $LOGDIR -maxdepth 1 -name $LOGFILENAME.\* -mtime +$LOGDAYS -exec rm {} \;
- LOGFILE=$LOGFILE.${CURRENT_LOG_TIME}
- SUMFILE=$LOGFILE.${CURRENT_LOG_TIME}.summary
-
- # Redirect output according to config
-
- # Set fd 3 to a copy of stdout. So we can set fd 1 without losing
- # stdout later.
- exec 3>&1
- if [[ "$VERBOSE" == "True" ]]; then
- # Set fd 1 and 2 to write the log file
- exec 1> >( $TOP_DIR/tools/outfilter.py -v -o "${LOGFILE}" ) 2>&1
- # Set fd 6 to summary log file
- exec 6> >( $TOP_DIR/tools/outfilter.py -o "${SUMFILE}" )
- else
- # Set fd 1 and 2 to primary logfile
- exec 1> >( $TOP_DIR/tools/outfilter.py -o "${LOGFILE}" ) 2>&1
- # Set fd 6 to summary logfile and stdout
- exec 6> >( $TOP_DIR/tools/outfilter.py -v -o "${SUMFILE}" >&3 )
- fi
-
- echo_summary "stack.sh log $LOGFILE"
- # Specified logfile name always links to the most recent log
- ln -sf $LOGFILE $LOGDIR/$LOGFILENAME
- ln -sf $SUMFILE $LOGDIR/$LOGFILENAME.summary
-else
- # Set up output redirection without log files
- # Set fd 3 to a copy of stdout. So we can set fd 1 without losing
- # stdout later.
- exec 3>&1
- if [[ "$VERBOSE" != "True" ]]; then
- # Throw away stdout and stderr
- exec 1>/dev/null 2>&1
- fi
- # Always send summary fd to original stdout
- exec 6> >( $TOP_DIR/tools/outfilter.py -v >&3 )
-fi
-
-# Set up logging of screen windows
-# Set ``SCREEN_LOGDIR`` to turn on logging of screen windows to the
-# directory specified in ``SCREEN_LOGDIR``, we will log to the the file
-# ``screen-$SERVICE_NAME-$TIMESTAMP.log`` in that dir and have a link
-# ``screen-$SERVICE_NAME.log`` to the latest log file.
-# Logs are kept for as long specified in ``LOGDAYS``.
-if [[ -n "$SCREEN_LOGDIR" ]]; then
-
- # We make sure the directory is created.
- if [[ -d "$SCREEN_LOGDIR" ]]; then
- # We cleanup the old logs
- find $SCREEN_LOGDIR -maxdepth 1 -name screen-\*.log -mtime +$LOGDAYS -exec rm {} \;
- else
- mkdir -p $SCREEN_LOGDIR
- fi
-fi
-
-
-# Set Up Script Execution
-# -----------------------
-
-# Kill background processes on exit
-trap exit_trap EXIT
-function exit_trap {
- local r=$?
- jobs=$(jobs -p)
- # Only do the kill when we're logging through a process substitution,
- # which currently is only to verbose logfile
- if [[ -n $jobs && -n "$LOGFILE" && "$VERBOSE" == "True" ]]; then
- echo "exit_trap: cleaning up child processes"
- kill 2>&1 $jobs
- fi
-
- # Kill the last spinner process
- kill_spinner
-
- if [[ $r -ne 0 ]]; then
- echo "Error on exit"
- if [[ -z $LOGDIR ]]; then
- $TOP_DIR/tools/worlddump.py
- else
- $TOP_DIR/tools/worlddump.py -d $LOGDIR
- fi
- fi
-
- exit $r
-}
-
-# Exit on any errors so that errors don't compound
-trap err_trap ERR
-function err_trap {
- local r=$?
- set +o xtrace
- if [[ -n "$LOGFILE" ]]; then
- echo "${0##*/} failed: full log in $LOGFILE"
- else
- echo "${0##*/} failed"
- fi
- exit $r
-}
-
-
-set -o errexit
-
-# Print the commands being run so that we can see the command that triggers
-# an error. It is also useful for following along as the install occurs.
-set -o xtrace
-
-
# Install Packages
# ================
@@ -822,7 +831,7 @@
configure_heat
fi
-if is_service_enabled tls-proxy; then
+if is_service_enabled tls-proxy || [ "$USE_SSL" == "True" ]; then
configure_CA
init_CA
init_cert
@@ -988,7 +997,7 @@
create_swift_accounts
fi
- if is_service_enabled heat; then
+ if is_service_enabled heat && [[ "$HEAT_STANDALONE" != "True" ]]; then
create_heat_accounts
fi
@@ -1289,6 +1298,10 @@
USERRC_PARAMS="$USERRC_PARAMS --os-cacert $SSL_BUNDLE_FILE"
fi
+ if [[ "$HEAT_STANDALONE" = "True" ]]; then
+ USERRC_PARAMS="$USERRC_PARAMS --heat-url http://$HEAT_API_HOST:$HEAT_API_PORT/v1"
+ fi
+
$TOP_DIR/tools/create_userrc.sh $USERRC_PARAMS
fi
diff --git a/stackrc b/stackrc
index 580fabf..af45c77 100644
--- a/stackrc
+++ b/stackrc
@@ -3,6 +3,9 @@
# Find the other rc files
RC_DIR=$(cd $(dirname "${BASH_SOURCE:-$0}") && pwd)
+# Source required devstack functions and globals
+source $RC_DIR/functions
+
# Destination path for installation
DEST=/opt/stack
@@ -120,45 +123,205 @@
# Another option is http://review.openstack.org/p
GIT_BASE=${GIT_BASE:-git://git.openstack.org}
+##############
+#
+# OpenStack Server Components
+#
+##############
+
# metering service
CEILOMETER_REPO=${CEILOMETER_REPO:-${GIT_BASE}/openstack/ceilometer.git}
CEILOMETER_BRANCH=${CEILOMETER_BRANCH:-master}
-# ceilometer client library
-CEILOMETERCLIENT_REPO=${CEILOMETERCLIENT_REPO:-${GIT_BASE}/openstack/python-ceilometerclient.git}
-CEILOMETERCLIENT_BRANCH=${CEILOMETERCLIENT_BRANCH:-master}
-
# volume service
CINDER_REPO=${CINDER_REPO:-${GIT_BASE}/openstack/cinder.git}
CINDER_BRANCH=${CINDER_BRANCH:-master}
-# volume client
-CINDERCLIENT_REPO=${CINDERCLIENT_REPO:-${GIT_BASE}/openstack/python-cinderclient.git}
-CINDERCLIENT_BRANCH=${CINDERCLIENT_BRANCH:-master}
-
-# diskimage-builder
-DIB_REPO=${DIB_REPO:-${GIT_BASE}/openstack/diskimage-builder.git}
-DIB_BRANCH=${DIB_BRANCH:-master}
-
# image catalog service
GLANCE_REPO=${GLANCE_REPO:-${GIT_BASE}/openstack/glance.git}
GLANCE_BRANCH=${GLANCE_BRANCH:-master}
-GLANCE_STORE_REPO=${GLANCE_STORE_REPO:-${GIT_BASE}/openstack/glance_store.git}
-GLANCE_STORE_BRANCH=${GLANCE_STORE_BRANCH:-master}
-
-# python glance client library
-GLANCECLIENT_REPO=${GLANCECLIENT_REPO:-${GIT_BASE}/openstack/python-glanceclient.git}
-GLANCECLIENT_BRANCH=${GLANCECLIENT_BRANCH:-master}
-
# heat service
HEAT_REPO=${HEAT_REPO:-${GIT_BASE}/openstack/heat.git}
HEAT_BRANCH=${HEAT_BRANCH:-master}
+# django powered web control panel for openstack
+HORIZON_REPO=${HORIZON_REPO:-${GIT_BASE}/openstack/horizon.git}
+HORIZON_BRANCH=${HORIZON_BRANCH:-master}
+
+# baremetal provisioning service
+IRONIC_REPO=${IRONIC_REPO:-${GIT_BASE}/openstack/ironic.git}
+IRONIC_BRANCH=${IRONIC_BRANCH:-master}
+
+# unified auth system (manages accounts/tokens)
+KEYSTONE_REPO=${KEYSTONE_REPO:-${GIT_BASE}/openstack/keystone.git}
+KEYSTONE_BRANCH=${KEYSTONE_BRANCH:-master}
+
+# neutron service
+NEUTRON_REPO=${NEUTRON_REPO:-${GIT_BASE}/openstack/neutron.git}
+NEUTRON_BRANCH=${NEUTRON_BRANCH:-master}
+
+# compute service
+NOVA_REPO=${NOVA_REPO:-${GIT_BASE}/openstack/nova.git}
+NOVA_BRANCH=${NOVA_BRANCH:-master}
+
+# storage service
+SWIFT_REPO=${SWIFT_REPO:-${GIT_BASE}/openstack/swift.git}
+SWIFT_BRANCH=${SWIFT_BRANCH:-master}
+
+# trove service
+TROVE_REPO=${TROVE_REPO:-${GIT_BASE}/openstack/trove.git}
+TROVE_BRANCH=${TROVE_BRANCH:-master}
+
+##############
+#
+# Testing Components
+#
+##############
+
+# consolidated openstack requirements
+REQUIREMENTS_REPO=${REQUIREMENTS_REPO:-${GIT_BASE}/openstack/requirements.git}
+REQUIREMENTS_BRANCH=${REQUIREMENTS_BRANCH:-master}
+
+# Tempest test suite
+TEMPEST_REPO=${TEMPEST_REPO:-${GIT_BASE}/openstack/tempest.git}
+TEMPEST_BRANCH=${TEMPEST_BRANCH:-master}
+
+# TODO(sdague): this should end up as a library component like below
+TEMPEST_LIB_REPO=${TEMPEST_LIB_REPO:-${GIT_BASE}/openstack/tempest-lib.git}
+TEMPEST_LIB_BRANCH=${TEMPEST_LIB_BRANCH:-master}
+
+
+##############
+#
+# OpenStack Client Library Components
+#
+##############
+
+# ceilometer client library
+CEILOMETERCLIENT_REPO=${CEILOMETERCLIENT_REPO:-${GIT_BASE}/openstack/python-ceilometerclient.git}
+CEILOMETERCLIENT_BRANCH=${CEILOMETERCLIENT_BRANCH:-master}
+
+# volume client
+CINDERCLIENT_REPO=${CINDERCLIENT_REPO:-${GIT_BASE}/openstack/python-cinderclient.git}
+CINDERCLIENT_BRANCH=${CINDERCLIENT_BRANCH:-master}
+
+# python glance client library
+GLANCECLIENT_REPO=${GLANCECLIENT_REPO:-${GIT_BASE}/openstack/python-glanceclient.git}
+GLANCECLIENT_BRANCH=${GLANCECLIENT_BRANCH:-master}
+
# python heat client library
HEATCLIENT_REPO=${HEATCLIENT_REPO:-${GIT_BASE}/openstack/python-heatclient.git}
HEATCLIENT_BRANCH=${HEATCLIENT_BRANCH:-master}
+# ironic client
+IRONICCLIENT_REPO=${IRONICCLIENT_REPO:-${GIT_BASE}/openstack/python-ironicclient.git}
+IRONICCLIENT_BRANCH=${IRONICCLIENT_BRANCH:-master}
+
+# python keystone client library that horizon uses
+KEYSTONECLIENT_REPO=${KEYSTONECLIENT_REPO:-${GIT_BASE}/openstack/python-keystoneclient.git}
+KEYSTONECLIENT_BRANCH=${KEYSTONECLIENT_BRANCH:-master}
+
+# neutron client
+NEUTRONCLIENT_REPO=${NEUTRONCLIENT_REPO:-${GIT_BASE}/openstack/python-neutronclient.git}
+NEUTRONCLIENT_BRANCH=${NEUTRONCLIENT_BRANCH:-master}
+
+# python client library to nova that horizon (and others) use
+NOVACLIENT_REPO=${NOVACLIENT_REPO:-${GIT_BASE}/openstack/python-novaclient.git}
+NOVACLIENT_BRANCH=${NOVACLIENT_BRANCH:-master}
+
+# python swift client library
+SWIFTCLIENT_REPO=${SWIFTCLIENT_REPO:-${GIT_BASE}/openstack/python-swiftclient.git}
+SWIFTCLIENT_BRANCH=${SWIFTCLIENT_BRANCH:-master}
+
+# trove client library
+TROVECLIENT_REPO=${TROVECLIENT_REPO:-${GIT_BASE}/openstack/python-troveclient.git}
+TROVECLIENT_BRANCH=${TROVECLIENT_BRANCH:-master}
+
+# consolidated openstack python client
+OPENSTACKCLIENT_REPO=${OPENSTACKCLIENT_REPO:-${GIT_BASE}/openstack/python-openstackclient.git}
+OPENSTACKCLIENT_BRANCH=${OPENSTACKCLIENT_BRANCH:-master}
+
+###################
+#
+# Oslo Libraries
+#
+###################
+
+# cliff command line framework
+GITREPO["cliff"]=${CLIFF_REPO:-${GIT_BASE}/openstack/cliff.git}
+GITBRANCH["cliff"]=${CLIFF_BRANCH:-master}
+
+# oslo.concurrency
+GITREPO["oslo.concurrency"]=${OSLOCON_REPO:-${GIT_BASE}/openstack/oslo.concurrency.git}
+GITBRANCH["oslo.concurrency"]=${OSLOCON_BRANCH:-master}
+
+# oslo.config
+GITREPO["oslo.config"]=${OSLOCFG_REPO:-${GIT_BASE}/openstack/oslo.config.git}
+GITBRANCH["oslo.config"]=${OSLOCFG_BRANCH:-master}
+
+# oslo.db
+GITREPO["oslo.db"]=${OSLODB_REPO:-${GIT_BASE}/openstack/oslo.db.git}
+GITBRANCH["oslo.db"]=${OSLODB_BRANCH:-master}
+
+# oslo.i18n
+GITREPO["oslo.i18n"]=${OSLOI18N_REPO:-${GIT_BASE}/openstack/oslo.i18n.git}
+GITBRANCH["oslo.i18n"]=${OSLOI18N_BRANCH:-master}
+
+# oslo.log
+GITREPO["oslo.log"]=${OSLOLOG_REPO:-${GIT_BASE}/openstack/oslo.log.git}
+GITBRANCH["oslo.log"]=${OSLOLOG_BRANCH:-master}
+
+# oslo.messaging
+GITREPO["oslo.messaging"]=${OSLOMSG_REPO:-${GIT_BASE}/openstack/oslo.messaging.git}
+GITBRANCH["oslo.messaging"]=${OSLOMSG_BRANCH:-master}
+
+# oslo.middleware
+GITREPO["oslo.middleware"]=${OSLOMID_REPO:-${GIT_BASE}/openstack/oslo.middleware.git}
+GITBRANCH["oslo.middleware"]=${OSLOMID_BRANCH:-master}
+
+# oslo.rootwrap
+GITREPO["oslo.rootwrap"]=${OSLORWRAP_REPO:-${GIT_BASE}/openstack/oslo.rootwrap.git}
+GITBRANCH["oslo.rootwrap"]=${OSLORWRAP_BRANCH:-master}
+
+# oslo.serialization
+GITREPO["oslo.serialization"]=${OSLOSERIALIZATION_REPO:-${GIT_BASE}/openstack/oslo.serialization.git}
+GITBRANCH["oslo.serialization"]=${OSLOSERIALIZATION_BRANCH:-master}
+
+# oslo.utils
+GITREPO["oslo.utils"]=${OSLOUTILS_REPO:-${GIT_BASE}/openstack/oslo.utils.git}
+GITBRANCH["oslo.utils"]=${OSLOUTILS_BRANCH:-master}
+
+# oslo.vmware
+GITREPO["oslo.vmware"]=${OSLOVMWARE_REPO:-${GIT_BASE}/openstack/oslo.vmware.git}
+GITBRANCH["oslo.vmware"]=${OSLOVMWARE_BRANCH:-master}
+
+# pycadf auditing library
+GITREPO["pycadf"]=${PYCADF_REPO:-${GIT_BASE}/openstack/pycadf.git}
+GITBRANCH["pycadf"]=${PYCADF_BRANCH:-master}
+
+# stevedore plugin manager
+GITREPO["stevedore"]=${STEVEDORE_REPO:-${GIT_BASE}/openstack/stevedore.git}
+GITBRANCH["stevedore"]=${STEVEDORE_BRANCH:-master}
+
+# taskflow task and flow execution library
+GITREPO["taskflow"]=${TASKFLOW_REPO:-${GIT_BASE}/openstack/taskflow.git}
+GITBRANCH["taskflow"]=${TASKFLOW_BRANCH:-master}
+
+# pbr drives the setuptools configs
+GITREPO["pbr"]=${PBR_REPO:-${GIT_BASE}/openstack-dev/pbr.git}
+GITBRANCH["pbr"]=${PBR_BRANCH:-master}
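The `GITREPO`/`GITBRANCH` tables rely on bash associative arrays; the `declare -A` presumably lives in the `functions` file sourced at the top of `stackrc`. The population-and-lookup pattern in isolation:

```shell
# Associative-array population and lookup as used by the GITREPO/GITBRANCH
# tables; declare -A is shown explicitly here for completeness.
declare -A GITREPO GITBRANCH
GIT_BASE=${GIT_BASE:-git://git.openstack.org}
GITREPO["oslo.config"]=${OSLOCFG_REPO:-${GIT_BASE}/openstack/oslo.config.git}
GITBRANCH["oslo.config"]=${OSLOCFG_BRANCH:-master}
echo "${GITREPO[oslo.config]} @ ${GITBRANCH[oslo.config]}"
```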
+
+##################
+#
+# Libraries managed by OpenStack programs (non oslo)
+#
+##################
+
+# glance store library
+GLANCE_STORE_REPO=${GLANCE_STORE_REPO:-${GIT_BASE}/openstack/glance_store.git}
+GLANCE_STORE_BRANCH=${GLANCE_STORE_BRANCH:-master}
+
# heat-cfntools server agent
HEAT_CFNTOOLS_REPO=${HEAT_CFNTOOLS_REPO:-${GIT_BASE}/openstack/heat-cfntools.git}
HEAT_CFNTOOLS_BRANCH=${HEAT_CFNTOOLS_BRANCH:-master}
@@ -167,43 +330,28 @@
HEAT_TEMPLATES_REPO=${HEAT_TEMPLATES_REPO:-${GIT_BASE}/openstack/heat-templates.git}
HEAT_TEMPLATES_BRANCH=${HEAT_TEMPLATES_BRANCH:-master}
-# django powered web control panel for openstack
-HORIZON_REPO=${HORIZON_REPO:-${GIT_BASE}/openstack/horizon.git}
-HORIZON_BRANCH=${HORIZON_BRANCH:-master}
-
# django openstack_auth library
HORIZONAUTH_REPO=${HORIZONAUTH_REPO:-${GIT_BASE}/openstack/django_openstack_auth.git}
HORIZONAUTH_BRANCH=${HORIZONAUTH_BRANCH:-master}
-# baremetal provisioning service
-IRONIC_REPO=${IRONIC_REPO:-${GIT_BASE}/openstack/ironic.git}
-IRONIC_BRANCH=${IRONIC_BRANCH:-master}
-IRONIC_PYTHON_AGENT_REPO=${IRONIC_PYTHON_AGENT_REPO:-${GIT_BASE}/openstack/ironic-python-agent.git}
-IRONIC_PYTHON_AGENT_BRANCH=${IRONIC_PYTHON_AGENT_BRANCH:-master}
-
-# ironic client
-IRONICCLIENT_REPO=${IRONICCLIENT_REPO:-${GIT_BASE}/openstack/python-ironicclient.git}
-IRONICCLIENT_BRANCH=${IRONICCLIENT_BRANCH:-master}
-
-# unified auth system (manages accounts/tokens)
-KEYSTONE_REPO=${KEYSTONE_REPO:-${GIT_BASE}/openstack/keystone.git}
-KEYSTONE_BRANCH=${KEYSTONE_BRANCH:-master}
-
-# python keystone client library to nova that horizon uses
-KEYSTONECLIENT_REPO=${KEYSTONECLIENT_REPO:-${GIT_BASE}/openstack/python-keystoneclient.git}
-KEYSTONECLIENT_BRANCH=${KEYSTONECLIENT_BRANCH:-master}
-
# keystone middleware
KEYSTONEMIDDLEWARE_REPO=${KEYSTONEMIDDLEWARE_REPO:-${GIT_BASE}/openstack/keystonemiddleware.git}
KEYSTONEMIDDLEWARE_BRANCH=${KEYSTONEMIDDLEWARE_BRANCH:-master}
-# compute service
-NOVA_REPO=${NOVA_REPO:-${GIT_BASE}/openstack/nova.git}
-NOVA_BRANCH=${NOVA_BRANCH:-master}
+# s3 support for swift
+SWIFT3_REPO=${SWIFT3_REPO:-${GIT_BASE}/stackforge/swift3.git}
+SWIFT3_BRANCH=${SWIFT3_BRANCH:-master}
-# python client library to nova that horizon (and others) use
-NOVACLIENT_REPO=${NOVACLIENT_REPO:-${GIT_BASE}/openstack/python-novaclient.git}
-NOVACLIENT_BRANCH=${NOVACLIENT_BRANCH:-master}
+
+##################
+#
+# TripleO Components
+#
+##################
+
+# diskimage-builder
+DIB_REPO=${DIB_REPO:-${GIT_BASE}/openstack/diskimage-builder.git}
+DIB_BRANCH=${DIB_BRANCH:-master}
# os-apply-config configuration template tool
OAC_REPO=${OAC_REPO:-${GIT_BASE}/openstack/os-apply-config.git}
@@ -213,130 +361,19 @@
OCC_REPO=${OCC_REPO:-${GIT_BASE}/openstack/os-collect-config.git}
OCC_BRANCH=${OCC_BRANCH:-master}
-# consolidated openstack python client
-OPENSTACKCLIENT_REPO=${OPENSTACKCLIENT_REPO:-${GIT_BASE}/openstack/python-openstackclient.git}
-OPENSTACKCLIENT_BRANCH=${OPENSTACKCLIENT_BRANCH:-master}
-
# os-refresh-config configuration run-parts tool
ORC_REPO=${ORC_REPO:-${GIT_BASE}/openstack/os-refresh-config.git}
ORC_BRANCH=${ORC_BRANCH:-master}
-# cliff command line framework
-CLIFF_REPO=${CLIFF_REPO:-${GIT_BASE}/openstack/cliff.git}
-CLIFF_BRANCH=${CLIFF_BRANCH:-master}
-
-# oslo.concurrency
-OSLOCON_REPO=${OSLOCON_REPO:-${GIT_BASE}/openstack/oslo.concurrency.git}
-OSLOCON_BRANCH=${OSLOCON_BRANCH:-master}
-
-# oslo.config
-OSLOCFG_REPO=${OSLOCFG_REPO:-${GIT_BASE}/openstack/oslo.config.git}
-OSLOCFG_BRANCH=${OSLOCFG_BRANCH:-master}
-
-# oslo.db
-OSLODB_REPO=${OSLODB_REPO:-${GIT_BASE}/openstack/oslo.db.git}
-OSLODB_BRANCH=${OSLODB_BRANCH:-master}
-
-# oslo.i18n
-OSLOI18N_REPO=${OSLOI18N_REPO:-${GIT_BASE}/openstack/oslo.i18n.git}
-OSLOI18N_BRANCH=${OSLOI18N_BRANCH:-master}
-
-# oslo.log
-OSLOLOG_REPO=${OSLOLOG_REPO:-${GIT_BASE}/openstack/oslo.log.git}
-OSLOLOG_BRANCH=${OSLOLOG_BRANCH:-master}
-
-# oslo.messaging
-OSLOMSG_REPO=${OSLOMSG_REPO:-${GIT_BASE}/openstack/oslo.messaging.git}
-OSLOMSG_BRANCH=${OSLOMSG_BRANCH:-master}
-
-# oslo.middleware
-OSLOMID_REPO=${OSLOMID_REPO:-${GIT_BASE}/openstack/oslo.middleware.git}
-OSLOMID_BRANCH=${OSLOMID_BRANCH:-master}
-
-# oslo.rootwrap
-OSLORWRAP_REPO=${OSLORWRAP_REPO:-${GIT_BASE}/openstack/oslo.rootwrap.git}
-OSLORWRAP_BRANCH=${OSLORWRAP_BRANCH:-master}
-
-# oslo.serialization
-OSLOSERIALIZATION_REPO=${OSLOSERIALIZATION_REPO:-${GIT_BASE}/openstack/oslo.serialization.git}
-OSLOSERIALIZATION_BRANCH=${OSLOSERIALIZATION_BRANCH:-master}
-
-# oslo.utils
-OSLOUTILS_REPO=${OSLOUTILS_REPO:-${GIT_BASE}/openstack/oslo.utils.git}
-OSLOUTILS_BRANCH=${OSLOUTILS_BRANCH:-master}
-
-# oslo.vmware
-OSLOVMWARE_REPO=${OSLOVMWARE_REPO:-${GIT_BASE}/openstack/oslo.vmware.git}
-OSLOVMWARE_BRANCH=${OSLOVMWARE_BRANCH:-master}
-
-# pycadf auditing library
-PYCADF_REPO=${PYCADF_REPO:-${GIT_BASE}/openstack/pycadf.git}
-PYCADF_BRANCH=${PYCADF_BRANCH:-master}
-
-# stevedore plugin manager
-STEVEDORE_REPO=${STEVEDORE_REPO:-${GIT_BASE}/openstack/stevedore.git}
-STEVEDORE_BRANCH=${STEVEDORE_BRANCH:-master}
-
-# taskflow plugin manager
-TASKFLOW_REPO=${TASKFLOW_REPO:-${GIT_BASE}/openstack/taskflow.git}
-TASKFLOW_BRANCH=${TASKFLOW_BRANCH:-master}
-
-# pbr drives the setuptools configs
-PBR_REPO=${PBR_REPO:-${GIT_BASE}/openstack-dev/pbr.git}
-PBR_BRANCH=${PBR_BRANCH:-master}
-
-# neutron service
-NEUTRON_REPO=${NEUTRON_REPO:-${GIT_BASE}/openstack/neutron.git}
-NEUTRON_BRANCH=${NEUTRON_BRANCH:-master}
-
-# neutron client
-NEUTRONCLIENT_REPO=${NEUTRONCLIENT_REPO:-${GIT_BASE}/openstack/python-neutronclient.git}
-NEUTRONCLIENT_BRANCH=${NEUTRONCLIENT_BRANCH:-master}
-
-# consolidated openstack requirements
-REQUIREMENTS_REPO=${REQUIREMENTS_REPO:-${GIT_BASE}/openstack/requirements.git}
-REQUIREMENTS_BRANCH=${REQUIREMENTS_BRANCH:-master}
-
-# storage service
-SWIFT_REPO=${SWIFT_REPO:-${GIT_BASE}/openstack/swift.git}
-SWIFT_BRANCH=${SWIFT_BRANCH:-master}
-SWIFT3_REPO=${SWIFT3_REPO:-${GIT_BASE}/stackforge/swift3.git}
-SWIFT3_BRANCH=${SWIFT3_BRANCH:-master}
-
-# python swift client library
-SWIFTCLIENT_REPO=${SWIFTCLIENT_REPO:-${GIT_BASE}/openstack/python-swiftclient.git}
-SWIFTCLIENT_BRANCH=${SWIFTCLIENT_BRANCH:-master}
-
-# Tempest test suite
-TEMPEST_REPO=${TEMPEST_REPO:-${GIT_BASE}/openstack/tempest.git}
-TEMPEST_BRANCH=${TEMPEST_BRANCH:-master}
-
-TEMPEST_LIB_REPO=${TEMPEST_LIB_REPO:-${GIT_BASE}/openstack/tempest-lib.git}
-TEMPEST_LIB_BRANCH=${TEMPEST_LIB_BRANCH:-master}
-
# Tripleo elements for diskimage-builder images
TIE_REPO=${TIE_REPO:-${GIT_BASE}/openstack/tripleo-image-elements.git}
TIE_BRANCH=${TIE_BRANCH:-master}
-# a websockets/html5 or flash powered VNC console for vm instances
-NOVNC_REPO=${NOVNC_REPO:-https://github.com/kanaka/noVNC.git}
-NOVNC_BRANCH=${NOVNC_BRANCH:-master}
-
-# ryu service
-RYU_REPO=${RYU_REPO:-https://github.com/osrg/ryu.git}
-RYU_BRANCH=${RYU_BRANCH:-master}
-
-# a websockets/html5 or flash powered SPICE console for vm instances
-SPICE_REPO=${SPICE_REPO:-http://anongit.freedesktop.org/git/spice/spice-html5.git}
-SPICE_BRANCH=${SPICE_BRANCH:-master}
-
-# trove service
-TROVE_REPO=${TROVE_REPO:-${GIT_BASE}/openstack/trove.git}
-TROVE_BRANCH=${TROVE_BRANCH:-master}
-
-# trove client library test
-TROVECLIENT_REPO=${TROVECLIENT_REPO:-${GIT_BASE}/openstack/python-troveclient.git}
-TROVECLIENT_BRANCH=${TROVECLIENT_BRANCH:-master}
+#################
+#
+# Additional Libraries
+#
+#################
# stackforge libraries that are used by OpenStack core services
# wsme
@@ -352,6 +389,32 @@
SQLALCHEMY_MIGRATE_BRANCH=${SQLALCHEMY_MIGRATE_BRANCH:-master}
+#################
+#
+# 3rd Party Components (non pip installable)
+#
+# NOTE(sdague): these should be converted to release version installs or removed
+#
+#################
+
+# ironic python agent
+IRONIC_PYTHON_AGENT_REPO=${IRONIC_PYTHON_AGENT_REPO:-${GIT_BASE}/openstack/ironic-python-agent.git}
+IRONIC_PYTHON_AGENT_BRANCH=${IRONIC_PYTHON_AGENT_BRANCH:-master}
+
+# a websockets/html5 or flash powered VNC console for vm instances
+NOVNC_REPO=${NOVNC_REPO:-https://github.com/kanaka/noVNC.git}
+NOVNC_BRANCH=${NOVNC_BRANCH:-master}
+
+# ryu service
+RYU_REPO=${RYU_REPO:-https://github.com/osrg/ryu.git}
+RYU_BRANCH=${RYU_BRANCH:-master}
+
+# a websockets/html5 or flash powered SPICE console for vm instances
+SPICE_REPO=${SPICE_REPO:-http://anongit.freedesktop.org/git/spice/spice-html5.git}
+SPICE_BRANCH=${SPICE_BRANCH:-master}
+
+
+
# Nova hypervisor configuration. We default to libvirt with **kvm** but will
# drop back to **qemu** if we are unable to load the kvm module. ``stack.sh`` can
# also install an **LXC**, **OpenVZ** or **XenAPI** based system. If xenserver-core
@@ -517,6 +580,11 @@
# Also sets the minimum number of workers to 2.
API_WORKERS=${API_WORKERS:=$(( ($(nproc)/2)<2 ? 2 : ($(nproc)/2) ))}
+# Service startup timeout
+SERVICE_TIMEOUT=${SERVICE_TIMEOUT:-60}
+
+# The following entries need to be the last items in this file
+
# Local variables:
# mode: shell-script
# End:
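The `API_WORKERS` default above divides the CPU count in half and clamps the result to a minimum of 2. A minimal sketch of that arithmetic, with `nproc` replaced by a loop variable so the clamping behavior is visible (not part of the patch):

```shell
# Sketch of the API_WORKERS clamp: half the CPU count, but never fewer than 2.
# "cpus" stands in for $(nproc).
for cpus in 1 2 4 8; do
    workers=$(( (cpus/2) < 2 ? 2 : (cpus/2) ))
    echo "$cpus -> $workers"
done
# A 1- or 2-CPU host still gets 2 workers; an 8-CPU host gets 4.
```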
diff --git a/tools/build_docs.sh b/tools/build_docs.sh
index e999eff..96bd892 100755
--- a/tools/build_docs.sh
+++ b/tools/build_docs.sh
@@ -17,7 +17,7 @@
## prevent stray files in the workspace being added to the docs)
## -o <out-dir> Write the static HTML output to <out-dir>
## (Note that <out-dir> will be deleted and re-created to ensure it is clean)
-## -g Update the old gh-pages repo (set PUSH=1 to actualy push up to RCB)
+## -g Update the old gh-pages repo (set PUSH=1 to actually push up to RCB)
# Defaults
# --------
diff --git a/tools/create_userrc.sh b/tools/create_userrc.sh
index 5b1111a..863fe03 100755
--- a/tools/create_userrc.sh
+++ b/tools/create_userrc.sh
@@ -37,6 +37,7 @@
-C <tenant_name> create user and tenant, the specified tenant will be the user's tenant
-r <name> when combined with -C and the (-u) user exists, it will be the user's tenant role in the (-C) tenant (default: Member)
-p <userpass> password for the user
+--heat-url <heat_url>
--os-username <username>
--os-password <admin password>
--os-tenant-name <tenant_name>
@@ -53,12 +54,13 @@
EOF
}
-if ! options=$(getopt -o hPAp:u:r:C: -l os-username:,os-password:,os-tenant-name:,os-tenant-id:,os-auth-url:,target-dir:,skip-tenant:,os-cacert:,help,debug -- "$@"); then
+if ! options=$(getopt -o hPAp:u:r:C: -l os-username:,os-password:,os-tenant-name:,os-tenant-id:,os-auth-url:,target-dir:,heat-url:,skip-tenant:,os-cacert:,help,debug -- "$@"); then
display_help
exit 1
fi
eval set -- $options
ADDPASS=""
+HEAT_URL=""
# The service users are usually in the service tenant.
# Writing rc files for service users is out of scope.
@@ -79,6 +81,7 @@
--os-auth-url) export OS_AUTH_URL=$2; shift ;;
--os-cacert) export OS_CACERT=$2; shift ;;
--target-dir) ACCOUNT_DIR=$2; shift ;;
+ --heat-url) HEAT_URL=$2; shift ;;
--debug) set -o xtrace ;;
-u) MODE=${MODE:-one}; USER_NAME=$2; shift ;;
-p) USER_PASS=$2; shift ;;
@@ -209,6 +212,10 @@
if [ -n "$ADDPASS" ]; then
echo "export OS_PASSWORD=\"$user_passwd\"" >>"$rcfile"
fi
+ if [ -n "$HEAT_URL" ]; then
+ echo "export HEAT_URL=\"$HEAT_URL/$tenant_id\"" >>"$rcfile"
+ echo "export OS_NO_CLIENT_AUTH=True" >>"$rcfile"
+ fi
}
#admin users expected
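The new `--heat-url` flag above is wired through `getopt` in three places: the `-l` long-option list, a default assignment, and a `case` arm that consumes the value. A self-contained sketch of that same pattern (`--demo-url` and `DEMO_URL` are hypothetical stand-ins, not names from the patch):

```shell
# Minimal sketch of the getopt long-option pattern used in create_userrc.sh.
# A trailing ":" in the -l list means the option takes a value.
options=$(getopt -o h -l demo-url:,help -- --demo-url http://example.com/heat)
eval set -- $options

DEMO_URL=""
while [ $# -gt 0 ]; do
    case "$1" in
        --demo-url) DEMO_URL=$2; shift ;;  # consume the option's value
        --) ;;                             # end-of-options marker from getopt
    esac
    shift
done
echo "$DEMO_URL"
```

This mirrors how `HEAT_URL` is captured before being appended (with the tenant id) to each generated rc file.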
diff --git a/tools/xen/devstackubuntupreseed.cfg b/tools/xen/devstackubuntupreseed.cfg
index 6a1ae89..94e6e96 100644
--- a/tools/xen/devstackubuntupreseed.cfg
+++ b/tools/xen/devstackubuntupreseed.cfg
@@ -297,9 +297,9 @@
### Apt setup
# You can choose to install restricted and universe software, or to install
# software from the backports repository.
-#d-i apt-setup/restricted boolean true
-#d-i apt-setup/universe boolean true
-#d-i apt-setup/backports boolean true
+d-i apt-setup/restricted boolean true
+d-i apt-setup/universe boolean true
+d-i apt-setup/backports boolean true
# Uncomment this if you don't want to use a network mirror.
#d-i apt-setup/use_mirror boolean false
# Select which update services to use; define the mirrors to be used.
@@ -366,7 +366,7 @@
# With a few exceptions for unusual partitioning setups, GRUB 2 is now the
# default. If you need GRUB Legacy for some particular reason, then
# uncomment this:
-#d-i grub-installer/grub2_instead_of_grub_legacy boolean false
+d-i grub-installer/grub2_instead_of_grub_legacy boolean false
# This is fairly safe to set, it makes grub install automatically to the MBR
# if no other operating system is detected on the machine.
diff --git a/tools/xen/install_os_domU.sh b/tools/xen/install_os_domU.sh
index 12e861e..75d56a8 100755
--- a/tools/xen/install_os_domU.sh
+++ b/tools/xen/install_os_domU.sh
@@ -171,6 +171,7 @@
echo "Waiting for the VM to halt. Progress in-VM can be checked with vncviewer:"
mgmt_ip=$(echo $XENAPI_CONNECTION_URL | tr -d -c '1234567890.')
domid=$(get_domid "$GUEST_NAME")
+ sleep 20 # Wait for the vnc-port to be written
port=$(xenstore-read /local/domain/$domid/console/vnc-port)
echo "vncviewer -via root@$mgmt_ip localhost:${port:2}"
while true; do
@@ -393,7 +394,7 @@
# Watch devstack's output (which doesn't start until stack.sh is running,
# but wait for run.sh (which starts stack.sh) to exit as that is what
- # hopefully writes the succeded cookie.
+ # hopefully writes the succeeded cookie.
pid=`ssh_no_check -q stack@$OS_VM_MANAGEMENT_ADDRESS pgrep run.sh`
ssh_no_check -q stack@$OS_VM_MANAGEMENT_ADDRESS "tail --pid $pid -n +1 -f /tmp/devstack/log/stack.log"
diff --git a/tools/xen/prepare_guest.sh b/tools/xen/prepare_guest.sh
index 2b5e418..cd189db 100755
--- a/tools/xen/prepare_guest.sh
+++ b/tools/xen/prepare_guest.sh
@@ -74,6 +74,7 @@
apt-get update
apt-get install -y cracklib-runtime curl wget ssh openssh-server tcpdump ethtool
apt-get install -y curl wget ssh openssh-server python-pip git sudo python-netaddr
+apt-get install -y coreutils
pip install xenapi
# Install XenServer guest utilities
diff --git a/tools/xen/xenrc b/tools/xen/xenrc
index 278bb9b..510c5f9 100644
--- a/tools/xen/xenrc
+++ b/tools/xen/xenrc
@@ -63,15 +63,15 @@
PUB_NETMASK=${PUB_NETMASK:-255.255.255.0}
# Ubuntu install settings
-UBUNTU_INST_RELEASE="saucy"
-UBUNTU_INST_TEMPLATE_NAME="Ubuntu 13.10 (64-bit) for DevStack"
+UBUNTU_INST_RELEASE="trusty"
+UBUNTU_INST_TEMPLATE_NAME="Ubuntu 14.04 (64-bit) for DevStack"
# For 12.04 use "precise" and update template name
# However, for 12.04, you should be using
# XenServer 6.1 and later or XCP 1.6 or later
# 11.10 is only really supported with XenServer 6.0.2 and later
UBUNTU_INST_ARCH="amd64"
-UBUNTU_INST_HTTP_HOSTNAME="archive.ubuntu.net"
-UBUNTU_INST_HTTP_DIRECTORY="/ubuntu"
+UBUNTU_INST_HTTP_HOSTNAME="mirror.anl.gov"
+UBUNTU_INST_HTTP_DIRECTORY="/pub/ubuntu"
UBUNTU_INST_HTTP_PROXY=""
UBUNTU_INST_LOCALE="en_US"
UBUNTU_INST_KEYBOARD="us"
diff --git a/unstack.sh b/unstack.sh
index adb6dc1..fee608e 100755
--- a/unstack.sh
+++ b/unstack.sh
@@ -173,5 +173,3 @@
screen -X -S $SESSION quit
fi
fi
-
-cleanup_tmp
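The stackrc changes above move oslo libraries into `GITREPO`/`GITBRANCH` associative arrays while keeping the `${VAR:-default}` environment-override convention. A minimal sketch of how that lookup behaves, using a hypothetical library name `mylib` (the `MYLIB_*` variables are illustrative, not from the patch):

```shell
# Sketch of the stackrc pattern: env vars override, otherwise GIT_BASE defaults apply.
GIT_BASE=${GIT_BASE:-https://git.openstack.org}

declare -A GITREPO GITBRANCH
GITREPO["mylib"]=${MYLIB_REPO:-${GIT_BASE}/openstack/mylib.git}
GITBRANCH["mylib"]=${MYLIB_BRANCH:-master}

echo "${GITREPO[mylib]}"    # falls back to the GIT_BASE-derived URL when MYLIB_REPO is unset
echo "${GITBRANCH[mylib]}"  # falls back to master when MYLIB_BRANCH is unset
```

Setting `MYLIB_REPO` or `MYLIB_BRANCH` in the environment before sourcing would win over the defaults, which is how local.conf overrides reach these arrays.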