Merge "Revert "cleanup potentially installed older oslo.config""
diff --git a/HACKING.rst b/HACKING.rst
index 83455e3..d69bb49 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -17,7 +17,7 @@
Contributing code to DevStack follows the usual OpenStack process as described
in `How To Contribute`__ in the OpenStack wiki. `DevStack's LaunchPad project`__
-contains the usual links for blueprints, bugs, tec.
+contains the usual links for blueprints, bugs, etc.
__ contribute_
.. _contribute: http://wiki.openstack.org/HowToContribute
diff --git a/README.md b/README.md
index f0406c6..7eacebd 100644
--- a/README.md
+++ b/README.md
@@ -133,11 +133,25 @@
# Apache Frontend
-Apache web server is enabled for wsgi services by setting
-`APACHE_ENABLED_SERVICES` in your ``localrc`` section. Remember to
-enable these services at first as above.
+The Apache web server can be enabled for WSGI services that support being
+deployed under HTTPD + mod_wsgi. By default, services that recommend running
+under HTTPD + mod_wsgi are deployed under Apache. To use an alternative
+deployment strategy (e.g. eventlet) for services that support one, set
+``ENABLE_HTTPD_MOD_WSGI_SERVICES`` to ``False`` in your ``local.conf``.
- APACHE_ENABLED_SERVICES+=key,swift
+Each service that can be run under HTTPD + mod_wsgi also has an individual
+override toggle that can be set in your ``local.conf``.
+
+Keystone is run under HTTPD + mod_wsgi by default.
+
+Example (Keystone):
+
+ KEYSTONE_USE_MOD_WSGI="True"
+
+Example (Swift):
+
+ SWIFT_USE_MOD_WSGI="True"
# Swift
@@ -330,6 +344,25 @@
Q_HOST=$SERVICE_HOST
MATCHMAKER_REDIS_HOST=$SERVICE_HOST
+# Multi-Region Setup
+
+We want to set up two DevStack instances (RegionOne and RegionTwo) with a
+shared Keystone (same users and services) and Horizon.
+Keystone and Horizon will be located in RegionOne.
+Full spec is available at:
+https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat.
+
+In RegionOne:
+
+ REGION_NAME=RegionOne
+
+In RegionTwo:
+
+ disable_service horizon
+ KEYSTONE_SERVICE_HOST=<KEYSTONE_IP_ADDRESS_FROM_REGION_ONE>
+ KEYSTONE_AUTH_HOST=<KEYSTONE_IP_ADDRESS_FROM_REGION_ONE>
+ REGION_NAME=RegionTwo
+
# Cells
Cells is a new scaling option with a full spec at:
diff --git a/docs/source/assets/images/quickstart.png b/docs/source/assets/images/quickstart.png
index 5f01bac..5400a6f 100644
--- a/docs/source/assets/images/quickstart.png
+++ b/docs/source/assets/images/quickstart.png
Binary files differ
diff --git a/docs/source/changes.html b/docs/source/changes.html
index e4aef60..966b0c9 100644
--- a/docs/source/changes.html
+++ b/docs/source/changes.html
@@ -47,461 +47,16 @@
<div class='row pull-left'>
<h2>Recent Changes <small>What's been happening?</small></h2>
- <p>This is an incomplete list of recent changes to DevStack. For the complete list see <a href="https://review.openstack.org/#/q/status:merged+project:openstack-dev/devstack,n,z">the DevStack project in Gerrit</a>.</p>
- <dl class='pull-left'>
+ <p>These are the commits to DevStack for the last six months. For the complete list see <a href="https://review.openstack.org/#/q/status:merged+project:openstack-dev/devstack,n,z">the DevStack project in Gerrit</a>.</p>
+
+ <ul class='pull-left'>
<!--
- <dt></dt>
- <dd> <em>Commit <a href="https://review.openstack.org/x">x</a> merged dd_mmm_yyyy</em></dd>
+ This list is generated by:
+ git log --pretty=format:' <li>%s - <em>Commit <a href="https://review.openstack.org/#q,%h,n,z">%h</a> %cd</em></li>' --date=short --since 2014-01-01 | grep -v Merge
-->
- <dt>Install libguestfs with Nova Compute</dt>
- <dd>Add for Ubuntu installations; this eliminates use of NBD. <em>Commit <a href="https://review.openstack.org/70237">70237</a> merged 31 Jan 2014</em></dd>
- <dt>Fix Tempest config settings</dt>
- <dd>Cleans up a number of configuration issues between DevStack and Tempest. <em>Commit <a href="https://review.openstack.org/68532">68532</a> merged 31 Jan 2014</em></dd>
- <dt>Merge Gantt support</dt>
- <dd>Gantt support is added to the repo as a plugin. <em>Commit <a href="https://review.openstack.org/67666">67666</a> merged 31 Jan 2014</em></dd>
- <dt>Set Keystone admin_bind_host</dt>
- <dd>This works around an odd problem with Keystone's use of port 35357 and its use as an ephemeral port. <em>Commit <a href="https://review.openstack.org/57577">57577</a> merged 30 Jan 2014</em></dd>
- <dt>Generate Tempest service list</dt>
- <dd>This eliminates the manual maintenance of the services Tempest checks for. <em>Commit <a href="https://review.openstack.org/70015">70015</a> merged 30 Jan 2014</em></dd>
- <dt>Fix stop_swift()</dt>
- <dd>Kill all swift processes correctly with pkill. <em>Commit <a href="https://review.openstack.org/69440">69440</a> merged 28 Jan 2014</em></dd>
- <dt>Update Cinder cert script to use run_tempest</dt>
- <dd>Update following changes to Tempest's run_tests.sh script. <em>Commit <a href="https://review.openstack.org/66904">66904</a> merged 25 Jan 2014</em></dd>
- <dt>Add missing mongodb client</dt>
- <dd>The mongodb client was skipped on Fedora. <em>Commit <a href="https://review.openstack.org/65147">65147</a> merged 25 Jan 2014</em></dd>
- <dt>Fix reverting local changes in repo</dt>
- <dd>Work around a bug in <code>git diff --quiet</code> used in reverting the repo changes during global requirements handling. <em>Commit <a href="https://review.openstack.org/68546">68546</a> merged 25 Jan 2014</em></dd>
- <dt>Add cirros 3.0 image for Xenserver</dt>
- <dd>Xenserver wants the cirros 3.0 vhd image <em>Commit <a href="https://review.openstack.org/68136">68136</a> merged 25 Jan 2014</em></dd>
- <dt>Tweak upload_image.sh for .vmdk</dt>
- <dd>Relax the vmdk regex for parsing metadata out of the filename. <em>Commit <a href="https://review.openstack.org/68821">68821</a> merged 25 Jan 2014</em></dd>
- <dt>Keystone use common logging</dt>
- <dd>Switch Keystone to use common logging configuration. <em>Commit <a href="https://review.openstack.org/68530">68530</a> merged 25 Jan 2014</em></dd>
- <dt>Do not set bind_host for Heat APIs</dt>
- <dd>Let the Heat API service bind to 0.0.0.0 to be consistent with other DevStack services. <em>Commit <a href="https://review.openstack.org/67683">67683</a> merged 25 Jan 2014</em></dd>
- <dt>Fine tune libvirt logging</dt>
- <dd>Disable client-side log and tune server-side to usable levels. <em>Commit <a href="https://review.openstack.org/68194">68194</a> merged 24 Jan 2014</em></dd>
- <dt>Add check framework for Neutron server/backen integration</dt>
- <dd>Add the framework to verify Neutron controllers and backend service configurations per plugin requirements. <em>Commit <a href="https://review.openstack.org/64754">64754</a> merged 17 Jan 2014</em></dd>
- <dt>Add Marconi to Tempest config</dt>
- <dd>Check if Marconi is enabled in Tempest configuration <em>Commit <a href="https://review.openstack.org/65478">65478</a> merged 13 Jan 2014</em></dd>
- <dt>Enable libvirt logging</dt>
- <dd>Enable server- and client-side logging for libvirt <em>Commit <a href="https://review.openstack.org/65834">65834</a> merged 13 Jan 2014</em></dd>
- <dt>Clean up Heat/Cloud Formation catalog template</dt>
- <dd>The service catalog entries in the template file for orchestration and cloud formation were out of whack. <em>Commit <a href="https://review.openstack.org/65916">65916</a> merged 13 Jan 2014</em></dd>
- <dt>Create Ceilometer service accounts</dt>
- <dd>Create the Ceilometer service accounts in Keystone. <em>Commit <a href="https://review.openstack.org/65678">65678</a> merged 13 Jan 2014</em></dd>
- <dt>Freshen Ubuntu supported eleases</dt>
- <dd>Remove oneiric and quantal support. <em>Commit <a href="https://review.openstack.org/64836">64836</a> merged 13 Jan 2014</em></dd>
- <dt>Strengthen server shutdown</dt>
- <dd>Add screen_stop() to kill server process groups to ensure shutdown of child processes. <em>Commit <a href="https://review.openstack.org/66080">66080</a> merged 13 Jan 2014</em></dd>
- <dt>Support for VMware NSX plugin</dt>
- <dd>This is the Nicira NVP plugin renamed. <em>Commit <a href="https://review.openstack.org/65002">65002</a> merged 13 Jan 2014</em></dd>
- <dt>Remove --tenant_id usage</dt>
- <dd>Remove remaining uses of <code>--tenant_id</code> and replace with <code>--tenant-id</code>. <em>Commit <a href="https://review.openstack.org/65682">65682</a> merged 11 Jan 2014</em></dd>
- <dt>Identity API version configuration for Cinder, Glance and Heat</dt>
- <dd>Use IDENTITY_API_VERISON to configure Cinder, Glance and Heat. <em>Commit <a href="https://review.openstack.org/57620">57620</a> merged 11 Jan 2014</em></dd>
- <dt>Fedora 20 Support</dt>
- <dd>Add support for Fedora 20, remove Fedora 16 and 17 support. <em>Commit <a href="https://review.openstack.org/63647">63647</a> merged 11 Jan 2014</em></dd>
- <dt>Trove service availablility in Tempest</dt>
- <dd>Check if Trove is enabled in Tempest configuration. <em>Commit <a href="https://review.openstack.org/64913">64913</a> merged 11 Jan 2014</em></dd>
-
- <dt>Change libvirtd log level to DEBUG</dt>
- <dd>Keep libvirtd logs in the gate log stash. <em>Commit <a href="https://review.openstack.org/63992">63992</a> merged 02 Jan 2014</em></dd>
- <dt>Fix section start bug in get_meta_section()</dt>
- <dd>get_meta_section() would incorrectly interpret '[[' and ']]' in shell command lines inside local.conf. <em>Commit <a href="https://review.openstack.org/63280">63280</a> merged 21 Dec 2013</em></dd>
- <dt>Use Fedora 20 final release</dt>
- <dd>Set the URL to get the final Fedora 20 release image. <em>Commit <a href="https://review.openstack.org/63200">63200</a> merged 21 Dec 2013</em></dd>
- <dt>Use Ubuntu 'saucy' as XenAPI DomU</dt>
- <dd>Updated the XenAPI DomU to Ubuntu Saucy release. <em>Commit <a href="https://review.openstack.org/60107">60107</a> merged 21 Dec 2013</em></dd>
- <dt>Begin support for RHEL7 Beta</dt>
- <dd>Adjust the sed regex in GetOSVersion() to handle text between the version and codename in RHEL-style release strings. <em>Commit <a href="https://review.openstack.org/62543">62543</a> merged 17 Dec 2013</em></dd>
- <dt>Configure Tempest tests for network extensions</dt>
- <dd>Puts the value of NETWORK_API_EXTENSIONS into tempest.conf. <em>Commit <a href="https://review.openstack.org/62054">62054</a> merged 17 Dec 2013</em></dd>
- <dt>Default floating IP range to /24</dt>
- <dd>Set the default FLOATING_RANGE=172.24.4.0/24 to accomodate parallel testing in Tempest. <em>Commit <a href="https://review.openstack.org/58284">58284</a> merged 17 Dec 2013</em></dd>
- <dt>Support oslo-rootwrap in Cinder</dt>
- <dd>Cinder can use both cinder-rootwrap and oslo-rootwrap for a transitional period. <em>Commit <a href="https://review.openstack.org/62003">62003</a> merged 16 Dec 2013</em></dd>
- <dt>Heat tests can use test image if present</dt>
- <dd>If HEAT_CREATE_TEST_IMAGE is defined and the image named in its value is present Heat will use it rather than invoke diskimage-builder. <em>Commit <a href="https://review.openstack.org/59893">59893</a> merged 15 Dec 2013</em></dd>
- <dt>Fix iniset() pipe ('|') bug</dt>
- <dd>iniset() did not properly handle a value containing a pipe ('|') character. <em>Commit <a href="https://review.openstack.org/60170">60170</a> merged 14 Dec 2013</em></dd>
- <dt>Define Q_L3_ENABLED=True for MidoNet plugin</dt>
- <dd>Q_L3_ENABLED=True for MidoNet plugin. <em>Commit <a href="https://review.openstack.org/56459">56459</a> merged 12 Dec 2013</em></dd>
- <dt>Fix Swift workers for non-proxies</dt>
- <dd>Swift spawned more proxy workers than required in DevStack environments, reduce it to '1'. <em>Commit <a href="https://review.openstack.org/61122">61122</a> merged 12 Dec 2013</em></dd>
- <dt>Add Keystone auth port to Nova config</dt>
- <dd>Set keystone_authtoken:auth_port to KEYSTONE_AUTH_PORT in nova.conf. <em>Commit <a href="https://review.openstack.org/60736">60736</a> merged 12 Dec 2013</em></dd>
- <dt>Increase additional flavor RAM on ppc64</dt>
- <dd>Create the nano and micro flavors with 128MB and 256MB RAM respectively on ppc64. <em>Commit <a href="https://review.openstack.org/60606">60606</a> merged 12 Dec 2013</em></dd>
- <dt>Increase XenAPI DomU memory</dt>
- <dd>Increase XenAPI DomU memory (OSDOMU_MEM_MB) to 4GB by default. <em>Commit <a href="https://review.openstack.org/59792">59792</a> merged 10 Dec 2013</em></dd>
- <dt>Assign unique names to fake nova-computes</dt>
- <dd>Assign each fake nova-compute instance a unique name so the scheduler works properly. <em>Commit <a href="https://review.openstack.org/58700">58700</a> merged 09 Dec 2013</em></dd>
- <dt>Fix whitespace bugs in merge_config_file()</dt>
- <dd>Merge_config_file() did not properly skip lines with only whitespace. <em>Commit <a href="https://review.openstack.org/60112">60112</a> merged 09 Dec 2013</em></dd>
- <dt>Setup Savanna user and endpoints</dt>
- <dd>Create savanna user, and savanna and data_processing endpoints. <em>Commit <a href="https://review.openstack.org/60077">60077</a> merged 09 Dec 2013</em></dd>
- <dt>Display DevStack status at XenAPI DomU login</dt>
- <dd>Add DevStack setup setvice status to /etc/issue so it is displayed at the DomU login. <em>Commit <a href="https://review.openstack.org/48444">48444</a> merged 09 Dec 2013</em></dd>
- <dt>Fix install_get_pip using proxy</dt>
- <dd>install_get_pip did not work properly through an HTTP proxy. <em>Commit <a href="https://review.openstack.org/60242">60242</a> merged 07 Dec 2013</em></dd>
- <dt>Add color logs to Trove</dt>
- <dd>Add the colorized log output to Trove. <em>Commit <a href="https://review.openstack.org/58363">58363</a> merged 06 Dec 2013</em></dd>
- <dt>Update LDAP support</dt>
- <dd>Update DevStack's LDAP support to make the DN configurable. <em>Commit <a href="https://review.openstack.org/58590">58590</a> merged 06 Dec 2013</em></dd>
- <dt>Add Marconi support</dt>
- <dd>Add Marconi support via extras.d plugin. <em>Commit <a href="https://review.openstack.org/47999">47999</a> merged 05 Dec 2013</em></dd>
- <dt>Split Ceilometer collector service</dt>
- <dd>Split Celio collector service into ceilometer-collector and ceilometer-agent-notification. <em>Commit <a href="https://review.openstack.org/58600">58600</a> merged 05 Dec 2013</em></dd>
- <dt>Add 'post-extra' configuration phase</dt>
- <dd>The 'post-extra' configuration phase is processed after the 'extra' plugin phase is executed. <em>Commit <a href="https://review.openstack.org/55583">55583</a> merged 05 Dec 2013</em></dd>
- <dt>Enable user interaction with stack.sh in XenAPI</dt>
- <dd>Multiple changes to how DevStack is configured to run under XenServer. <em>Commit <a href="https://review.openstack.org/48092">48092</a> merged 04 Dec 2013</em></dd>
- <dt>Handle VMDK metadata</dt>
- <dd>upload_image.sh now attempts to get the metadata for a *.vmdk file from *-flat.vmdk. <em>Commit <a href="https://review.openstack.org/58356">58356</a> merged 04 Dec 2013</em></dd>
- <dt>Deploy Keystone with SSL</dt>
- <dd>Configure Keystone to use SSL rather than using the tls-proxy function. <em>Commit <a href="https://review.openstack.org/47076">47076</a> merged 03 Dec 2013</em></dd>
- <dt>change default Git base to git.openstack.org</dt>
- <dd>Pull repos from git.openstack.org where possible. <em>Commit <a href="https://review.openstack.org/56749">56749</a> merged 02 Dec 2013</em></dd>
- <dt>Fix Neutron color log format</dt>
- <dd>Fix Neutron color log format. <em>Commit <a href="https://review.openstack.org/58350">58350</a> merged 01 Dec 2013</em></dd>
- <dt>Truncate PKI token logging</dt>
- <dd>Limit the PKI token logged to 12 characters. <em>Commit <a href="https://review.openstack.org/57526">57526</a> merged 26 Nov 2013</em></dd>
- <dt>Support memchace for Keystone token backend</dt>
- <dd>Use memcache backend with KEYSTONE_TOKEN_BACKEND=memcache. <em>Commit <a href="https://review.openstack.org/56691">56691</a> merged 26 Nov 2013</em></dd>
- <dt>Remove powervm hypervisor support</dt>
- <dd>The powervm arch is EOL, will be replaced in the future. <em>Commit <a href="https://review.openstack.org/57789">57789</a> merged 25 Nov 2013</em></dd>
- <dt>Increase default Swift timeouts</dt>
- <dd>node_timeout and conn_timeout needed to be increased for testing in slow VM environments. <em>Commit <a href="https://review.openstack.org/57514">57514</a> merged 25 Nov 2013</em></dd>
- <dt>Add hacking rules for shell scripts</dt>
- <dd>Record the previously unwritten rules for common formatting. <em>Commit <a href="https://review.openstack.org/55024">55024</a> merged 24 Nov 2013</em></dd>
- <dt>Default to Cinder API v2</dt>
- <dd>Set OS_VOLUME_API_VERSION=2 by default. <em>Commit <a href="https://review.openstack.org/43045">43045</a> merged 22 Nov 2013</em></dd>
- <dt>Drop nodejs dependency</dt>
- <dd>Horizon no longer needs nodejs. <em>Commit <a href="https://review.openstack.org/55255">55255</a> merged 22 Nov 2013</em></dd>
- <dt>Change to use STACK_USER rather than USER</dt>
- <dd>This changes DevStack to use STACK_USER for username references. <em>Commit <a href="https://review.openstack.org/57212">57212</a> merged 22 Nov 2013</em></dd>
- <dt>Enable specifying FLAT_NETWORK_BRIDGE in XenAPI</dt>
- <dd>Set FLAT_NETWORK_BRIDGE for DomU. <em>Commit <a href="https://review.openstack.org/48296">48296</a> merged 22 Nov 2013</em></dd>
- <dt>Add file: URL handling to upload_image.sh</dt>
- <dd>upload_image.sh can now use file: URLs as the image source. <em>Commit <a href="https://review.openstack.org/56721">56721</a> merged 21 Nov 2013</em></dd>
- <dt>Increase Swift backing storage size</dt>
- <dd>Swift now has SWIFT_LOOPBACK_DISK_SIZE_DEFAULT=6G. <em>Commit <a href="https://review.openstack.org/56116">56116</a> merged 13 Nov 2013</em></dd>
- <dt>Fix Horizon config for Apache 2.4</dt>
- <dd>Handle Apache config differences according to Apache version and not distro release. <em>Commit <a href="https://review.openstack.org/54738">54738</a> merged 11 Nov 2013</em></dd>
- <dt>Add FORCE_CONFIG_DRIVE to nova.conf</dt>
- <dd>Add FORCE_CONFIG_DRIVE, default to 'always'. <em>Commit <a href="https://review.openstack.org/54746">54746</a> merged 01 Nov 2013</em></dd>
- <dt>Turn off Nova firewall when using Neutron</dt>
- <dd>Sets firewall_driver when Neutron is enabled. <em>Commit <a href="https://review.openstack.org/54827">54827</a> merged 01 Nov 2013</em></dd>
- <dt>Use nova.conf for auth_token config</dt>
- <dd>Eliminates using api-paste.ini. <em>Commit <a href="https://review.openstack.org/53212">53212</a> merged 01 Nov 2013</em></dd>
- <dt>Use conder.conf for auth_token config</dt>
- <dd>Eliminates using api-paste.ini. <em>Commit <a href="https://review.openstack.org/53213">53213</a> merged 01 Nov 2013</em></dd>
- <dt>Add Cinder support for NFS driver</dt>
- <dd>Set CINDER_DRIVER=nfs. <em>Commit <a href="https://review.openstack.org/53276">53276</a> merged 31 Oct 2013</em></dd>
- <dt>Add Postgres for Ceilometer backend</dt>
- <dd>Set CEILOMETER_BACKEND=postgresql. <em>Commit <a href="https://review.openstack.org/53715">53715</a> merged 31 Oct 2013</em></dd>
- <dt>Add Ubuntu Trusty</dt>
- <dd>Add support for Ubuntu Trusty. <em>Commit <a href="https://review.openstack.org/53965">53965</a> merged 30 Oct 2013</em></dd>
- <dt>Enable Keystone auth in Ironic</dt>
- <dd>Always use Keystone for Ironic auth. <em>Commit <a href="https://review.openstack.org/51845">51845</a> merged 25 Oct 2013</em></dd>
- <dt>Add bash8 tool</dt>
- <dd>Perform basic format tests in CI gate. <em>Commit <a href="https://review.openstack.org/51676">51676</a> merged 23 Oct 2013</em></dd>
- <dt>Add Savanna support</dt>
- <dd>Add Savanna support using the plugin mech. <em>Commit <a href="https://review.openstack.org/50601">50601</a> merged 22 Oct 2013</em></dd>
- <dt>Install Ironic client</dt>
- <dd>Add python-ironicclient repo. <em>Commit <a href="https://review.openstack.org/51853">51853</a> merged 22 Oct 2013</em></dd>
-
- <dt>Fix fixup_stuff.sh for RHEL6</dt>
- <dd>fixup_stuff.sh didn't work properly on RHEL6/Python 2.6 <em>Commit <a href="https://review.openstack.org/52176">52176</a> merged 17 Oct 2013</em></dd>
- <dt>Add more extras.d hooks</dt>
- <dd>Add more hooks to call into extras.d to support adding services without modifying core DevStack scripts. <em>Commit <a href="https://review.openstack.org/51939">51939</a> merged 16 Oct 2013</em></dd>
- <dt>Add run_tests.sh</dt>
- <dd>Add a simple run_tests.sh for running bash8. <em>Commit <a href="https://review.openstack.org/51711">51711</a> merged 15 Oct 2013</em></dd>
- <dt>Add bash8 style checker</dt>
- <dd>Add bash8 to check certain style constraints in shell scripts. <em>Commit <a href="https://review.openstack.org/51676">51676</a> merged 15 Oct 2013</em></dd>
- <dt>Add trove-conductor service</dt>
- <dd>Add trove-conductor service to Trove support <em>Commit <a href="https://review.openstack.org/49237">49237</a> merged 14 Oct 2013</em></dd>
- <dt>Add new local configuration file</dt>
- <dd>Add local.conf support to replace localrc, includes ability to set values in arbitrary config files. <em>Commit <a href="https://review.openstack.org/46768">46768</a> merged 14 Oct 2013</em></dd>
- <dt>Remove 'stack' account creation from stack.sh</dt>
- <dd>This forces stack.sh to not be run as root and provides the account creation as a separate script. <em>Commit <a href="https://review.openstack.org/49798">49798</a> merged 05 Oct 2013</em></dd>
- <dt>Support running Keystone under Apache</dt>
- <dd>Run Keystone under Apache along with Swift and Horizon <em>Commit <a href="https://review.openstack.org/46866">46866</a> merged 26 Sep 2013</em></dd>
- <dt>Increase default Swift storage for Tempest</dt>
- <dd>When Tempest is enabled increase the default Swift storage to 4Gb <em>Commit <a href="https://review.openstack.org/46770">46770</a> merged 18 Sep 2013</em></dd>
- <dt>Keystone mixed backend support</dt>
- <dd>Add support for Keystone to use mixed backend configuration <em>Commit <a href="https://review.openstack.org/44605">44605</a> merged 16 Sep 2013</em></dd>
- <dt>Add Trove support</dt>
- <dd>Add support for Trove service <em>Commit <a href="https://review.openstack.org/38169">38169</a> merged 12 Sep 2013</em></dd>
- <dt>Enable Neutron L3 plugon</dt>
- <dd>Support Neutron's L3 plugin <em>Commit <a href="https://review.openstack.org/20909">20909</a> merged 10 Sep 2013</em></dd>
- <dt>Configure VPNaaS panel in Horizon</dt>
- <dd>If enabled configure VPNaaS panel in Horizon by default <em>Commit <a href="https://review.openstack.org/45751">45751</a> merged 10 Sep 2013</em></dd>
- <dt>Enable multi-threaded Nova API servers</dt>
- <dd>Sets the Nova API servers to use four worker threads by default <em>Commit <a href="https://review.openstack.org/45314">45314</a> merged 09 Sep 2031</em></dd>
- <dt>Handle .vmdk custom properties</dt>
- <dd>Parses property values out of the .vmdk filename and includes them in the glance upload. <em>Commit <a href="https://review.openstack.org/45181">45181</a> merged 09 Sep 2013</em></dd>
- <dt>Use pip 1.4.1</dt>
- <dd>Adds pip option --pre <em>Commit <a href="https://review.openstack.org/45436">45436</a> merged 06 Sep 2031</em></dd>
- <dt>Rename Ceilometer alarm service</dt>
- <dd>Change 'ceilometer-alarm-eval' to 'ceilometer-alarm-singleton' and 'ceilometer-alarm-notifier' <em>Commit <a href="https://review.openstack.org/45214">45214</a> merged 06 Sep 2013</em></dd>
- <dt>Support OpenSwan in Neutron VPNaaS</dt>
- <dd>Neutron VPNaaS changed IPSec to OpenSwan <em>Commit <a href="https://review.openstack.org/42265">42265</a> merged 06 Sep 2013</em></dd>
- <dt>Change Ceilometer backend default to MySQL</dt>
- <dd>Issues with MongoDB 2.4 availability resulted in this change <em>Commit <a href="https://review.openstack.org/43851">43851</a> merged 05 Sep 2013</em></dd>
- <dt>Add Ironic support</dt>
- <dd>Add Ironic as a supported service <em>Commit <a href="https://review.openstack.org/41053">41053</a> merged 03 Sep 2013</em></dd>
- <dt>Add support for Docker hypervisor</dt>
- <dd>Add Docker support and the hypervisor plugin mechanism <em>Commit <a href="https://review.openstack.org/40759">40759</a> merged 30 Aug 2013</em></dd>
-
- <dt>Add support for Heat resource templates</dt>
- <dd>Install Heat resource template files. <em>Commit <a href="https://review.openstack.org/43631">43631</a> merged 29 Aug 2013</em></dd>
- <dt>Support Neutron FWaaS</dt>
- <dd>Add support for OpenStack Networking Firewall (FWaaS). <em>Commit <a href="https://review.openstack.org/37147">37147</a> merged 29 Aug 2013</em></dd>
- <dt>Add new #testonly tag for package prereq files</dt>
- <dd>Add INSTALL_TESTONLY_PACKAGES to enable installing packages marked with #testonly. These packages are required only for running unit tests. <em>Commit <a href="https://review.openstack.org/38127">38127</a> merged 28 Aug 2013</em></dd>
- <dt>Add support for Heat environments</dt>
- <dd>Install Heat global environment config files. <em>Commit <a href="https://review.openstack.org/43387">43387</a> merged 28 Aug 2013</em></dd>
- <dt>Configure bash completion</dt>
- <dd>Configure bash completion for cinder, keystone, neutron, nova and nova-manage. <em>Commit <a href="https://review.openstack.org/41928">41928</a> merged 26 Aug 2013</em></dd>
- <dt>Change horizon Apache config file to horizon.conf</dt>
- <dd>Add .conf to horizon Apache config file to be consistent with Fedora practice. <em>Commit <a href="https://review.openstack.org/40352">40352</a> merged 22 Aug 2013</em></dd>
- <dt>Echo service start failures</dt>
- <dd>Echo service start failures to console so status is obvious in gate logs. <em>Commit <a href="https://review.openstack.org/42427">42427</a> merged 16 Aug 2013</em></dd>
- <dt>Colorize Heat logs</dt>
- <dd>Add Nova-style color support to Heat logging. <em>Commit <a href="https://review.openstack.org/40342">40342</a> merged 16 Aug 2013</em></dd>
- <dt>Add Cinder support to VMware configuration</dt>
- <dd>Configures Cinder to use VMware backend. <em>Commit <a href="https://review.openstack.org/41612">41612</a> merged 15 Aug 2013</em></dd>
- <dt>Default PIP_USE_MIRRORS to False</dt>
- <dd>Pip mirrors no longer used by default, can stil be enabled with PIP_USE_MIRRORS=True. <em>Commit <a href="https://review.openstack.org/40623">40623</a> merged 13 Aug 2013</em></dd>
- <dt>Configure Volume API v2</dt>
- <dd>Configure both SQL and template backends with Volume API v2. <em>Commit <a href="https://review.openstack.org/22489">22489</a> merged 13 Aug 2013</em></dd>
- <dt>Enable Tempest debug logging</dt>
- <dd>Enable Tempest debug logging for the same output verbosity under testr. <em>Commit <a href="https://review.openstack.org/41113">41113</a> merged 12 Aug 2013</em></dd>
- <dt>Configure Cinder for Ceilometer notifications</dt>
- <dd>Enable Cinder to send notifications when Ceilometer is enabled. <em>Commit <a href="https://review.openstack.org/41108">41108</a> merged 12 Aug 2013</em></dd>
- <dt>Support Heat-only configuration</dt>
- <dd>Allows stack.sh to start a standalone Heat installation. <em>Commit <a href="https://review.openstack.org/39602">39602</a> merged 10 Aug 2013</em></dd>
- <dt>Add tools/install_pip.sh</dt>
- <dd>Install pip from source in order to get a current version rather than the out-of-date OS-supplied version <em>Commit <a href="https://review.openstack.org/39827">39827</a> merged 08 Aug 2013</em></dd>
- <dt>Add call trace to error message</dt>
- <dd>Display the bash call stack in die() output. <em>Commit <a href="https://review.openstack.org/39887">39887</a> merged 08 Aug 2013</em></dd>
- <dt>Configure Keystone client in Cinder</dt>
- <dd>Configure auth creds in Cinder to allow queries to Keystone. <em>Commit <a href="https://review.openstack.org/39747">39747</a> merged 07 Aug 2013</em></dd>
- <dt>Update all repos to global requirements</dt>
- <dd>Force update project repos to global requirements before tests. <em>Commit <a href="https://review.openstack.org/35705">35705</a> merged 06 Aug 2013</em></dd>
- <dt>Don't add 'bulk' middleware for Swift</dt>
- <dd>The bulk middleware is already in the sample so don't add it. <em>Commit <a href="https://review.openstack.org/39826">39826</a> merged 06 Aug 2013</em></dd>
- <dt>Enable using Apache as Swift frontend</dt>
- <dd>Refactor apache functions into lib/apache; configure apache2 vhost and wsgi files for Swift proxy, account, container and object server. <em>Commit <a href="https://review.openstack.org/33946">33946</a> merged 02 Aug 2013</em></dd>
- <dt>Launch Ceilometer alarm services</dt>
- <dd>Add ceilometer-alarm-notify and ceilometer-alarm-eval to the Ceilometer services. <em>Commit <a href="https://review.openstack.org/39300">39300</a> merged 01 Aug 2013</em></dd>
- <dt>Fix Tempest logging configuration</dt>
- <dd>Correctly set the tempest output logging to dump all of tempest logs into a tempest.log file. Also fixes logging in the gate no longer print every log message on the console. <em>Commit <a href="https://review.openstack.org/39571">39571</a> merged 31 Jul 2013</em></dd>
- <dt>Install Oslo from source</dt>
- <dd>Install the gradulated Oslo libraries from source into $DEST. <em>Commit <a href="https://review.openstack.org/39450">39450</a> merged 31 Jul 2013</em></dd>
- <dt>Multiple fixes for cell support</dt>
- <dd>Start Nova cell services; skip unsupported exercises; use 'default' security group in exercises. <em>Commit <a href="https://review.openstack.org/38897">38897</a> merged 29 Jul 2013</em></dd>
- <dt>Add MySQL support for Ceilometer</dt>
- <dd>Add MySQL storage support for Ceilometer. <em>Commit <a href="https://review.openstack.org/37413">37413</a> merged 19 Jul 2013</em></dd>
- <dt>Create Compute API v3 endpoint</dt>
- <dd>Configures SQL and templated backends for Compute API v3. The service type is 'computev3'. <em>Commit <a href="https://review.openstack.org/33277">33277</a> merged 18 Jul 2013</em></dd>
- <dt>Add Neutron VPNaaS support</dt>
- <dd>Add Support for OpenStack Networking VPNaaS (IPSec) <em>Commit <a href="https://review.openstack.org/32174">32174</a> merged 15 Jul 2013</em></dd>
- <dt>Configure Swift functional tests</dt>
- <dd>The default Swift configuration now has functional tests properly configured. <em>Commit <a href="https://review.openstack.org/35793">35793</a> merged 12 Jul 2013</em></dd>
- <dt>Enable all Nova notifications</dt>
- <dd>Nova now sends all notification events to Ceilometer. <em>Commit <a href="https://review.openstack.org/35258">35258</a> merged 08 Jul 2013</em></dd>
- <dt>Add IDENTITY_API_VERSION</dt>
- <dd>IDENTITY_API_VERSION defaults to '2.0', enables setting '3' for API v3. <em>Commit <a href="https://review.openstack.org/34884">34884</a> merged 08 Jul 2013</em></dd>
-
- <dt>Rename Qunatum repos to Neutron</dt>
- <dd>Part of the project renaming process. This does not change the process short names ('q-api', 'q-agt', etc) or the variable names starting with <code>Q_</code>. Some Nova and Horizon changes remain to be completed. The change has been applied to stable/grizzly (35860) and stable/folsom (35861). <em>Commit <a href="https://review.openstack.org/35981">35981, 35861, 35860, 35859</a> merged 07 Jul 2013</em></dd>
- <dt>Direct installation of Python prerequisites</dt>
- <dd>Python prereqs are now installed by pip (and not easy_install) directly from each projects <code>requirements.txt</code>. <em>Commit <a href="https://review.openstack.org/35696">35696</a> merged 07 Jul 2013</em></dd>
- <dt>Enable Fedora 19</dt>
- <dd>Fedora 19 is now supported directly. <em>Commit <a href="https://review.openstack.org/35071">35071</a> merged 03 Jul 2013</em></dd>
- <dt>Fix Cinder clones on RHEL 6/CentOS</dt>
- <dd>On RHEL 6/CentOS 6 cloned LVM volumes required more space for metadata storage. <em>Commit <a href="https://review.openstack.org/34640">34640</a> merged 28 Jun 2013</em></dd>
- <dt>Use lower case section names in Quantum</dt>
- <dd>The service code now supports lowercase section names, change DevStack to use them by default. <em>Commit <a href="https://review.openstack.org/34177">34177</a> merged 27 Jun 2013</em></dd>
- <dt>Set default VOLUME_BACKING_FILE to 10Gb</dt>
- <dd>The previous default of 5Gb was not large enough for some Tempest usage requirements. <em>Commit <a href="https://review.openstack.org/33885">33885</a> merged 21 Jun 2013</em></dd>
- <dt>Use service role for service accounts</dt>
- <dd>Use an account with service role assigned rather than a full admin account for swift, heat, ceilometer. Ceilometer was later restored to admin account in <a href="https://review.openstack.org/33838">33838</a>. <em>Commit <a href="https://review.openstack.org/31687">31687</a> merged 16 Jun 2013</em></dd>
- <dt>Enable Nova API v3</dt>
- <dd>Nova disables API v3 by default, so DevStack explicitly enables it. <em>Commit <a href="https://review.openstack.org/31190">31190</a> merged 31 May 2013</em></dd>
- <dt>Enable Debian support</dt>
- <dd>Allows Devstack to run on Debian as an unsupported OS. <em>Commit <a href="https://review.openstack.org/28215">28215</a> merged 9 May 2013</em></dd>
- <dt>Default SWIFT_DATA_DIR to use $DATA_DIR</dt>
- <dd>Previously SWIFT_DATA_DIR was in $DEST/data. <em>Commit <a href="https://review.openstack.org/27749">27749</a> merged 3 May 2013</em></dd>
- <dt>Set default S3_URL port to 8080</dt>
- <dd>Set the port to 8080 if swift3 is enabled; previously it was the Nova objectstore value of 3333. <em>Commit <a href="https://review.openstack.org/27404">27404</a> merged 25 Apr 2013</em></dd>
-
- <dt>Use example settings in horizon repo as local_settings.py</dt>
- <dd>Removes <code>files/horizon_settings.py</code> and copies the same file from the Horizon repo. <em>Commit <a href="https://review.openstack.org/25510">25510</a> merged 28 Mar 2013</em></dd>
- <dt>Add support for iso files as glance images</dt>
- <dd>Add support for iso files as glance images <em>Commit <a href="https://review.openstack.org/25290">25290</a> merged 28 Mar 2013</em></dd>
-
- <dt>Allow processes to run without screen</dt>
- <dd>Add <code>USE_SCREEN=False</code> to <code>localrc</code> to cause all server processes to run without <code>screen</code>. This is expected to be used primarily in the CI tests and should address the failures seen when <code>screen</code> does not start a process. <em>Commit <a href="https://review.openstack.org/23148">23148</a> merged 20 Mar 2013</em></dd>
- <dt>Add clean.sh</dt>
- <dd>This is intended to remove as much of the non-packaged (both OS and pip) remnants of DevStack from the system. It is suitable for changing queue managers and databases as those packages are uninstalled. It does not change the network configuration that Nova performs nor does it even attempt to undo anything that Quantum does. <em>Commit <a href="https://review.openstack.org/24360">24360</a> merged 15 Mar 2013</em></dd>
- <dt>Add support for running a specific set of exercises</dt>
- <dd>Set <code>RUN_EXERCISES</code> to a comma separated list of exercise names to run. <code>SKIP_EXERCISES</code> is ignored in this mode. <em>Commit <a href="https://review.openstack.org/23846">23846</a> merged 15 Mar 2013</em></dd>
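As a sketch, selecting a subset of exercises from <code>localrc</code> might look like this (the exercise names below are illustrative):

```ini
; localrc fragment (sketch): run only the named exercises;
; SKIP_EXERCISES is ignored when RUN_EXERCISES is set
RUN_EXERCISES=boot_from_volume,swift
```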
- <dt>Deprecate DATABASE_TYPE and use_database</dt>
- <dd>This changes the way that a database is selected back to using only <code>ENABLED_SERVICES</code>. Backward compatibility is maintained until after Grizzly is released. <em>Commit <a href="https://review.openstack.org/22635">22635</a> merged 22 Feb 2013</em></dd>
- <dt>Create tools/install_prereqs.sh</dt>
- <dd>This factors out the installation/upgrade of required OS packages so it can run independently of <code>stack.sh</code>. It also adds a time marker so it does not run again within a set amount of time, default is 2 hours. Remove <code>.prereqs</code> to reset this timeout. <em>Commit <a href="https://review.openstack.org/21397">21397</a> merged 10 Feb 2013</em></dd>
- <dt>Add initial LDAP support</dt>
- <dd>Installs and configures OpenLDAP for Keystone. Select by adding <code>enable_service ldap</code> and <code>KEYSTONE_IDENTITY_BACKEND=ldap</code> to <code>localrc</code>. <em>Commit <a href="https://review.openstack.org/20249">20249</a> merged 07 Feb 2013</em></dd>
- <dt>Add variable to set Keystone token backend</dt>
- <dd>Change the default Keystone token backend from <code>kvs</code> to <code>sql</code> by setting <code>KEYSTONE_TOKEN_BACKEND=sql</code>. <em>Commit <a href="https://review.openstack.org/20739">20739</a> merged 30 Jan 2013</em></dd>
- <dt>Add Sheepdog support in Cinder</dt>
- <dd>This enables using Sheepdog as a Cinder backend storage by setting <code>CINDER_DRIVER=sheepdog</code>. <em>Commit <a href="https://review.openstack.org/19931">19931</a> merged 18 Jan 2013</em></dd>
- <dt>Support SPICE</dt>
- <dd>Adds an 'n-spice' service (off by default) that supports SPICE in the Nova libvirt driver. It also allows running in a SPICE only environment. <em>Commit <a href="https://review.openstack.org/19934">19934</a> merged 18 Jan 2013</em></dd>
-
- <dt>Add a mechanism to automatically load additional projects at the end of <code>stack.sh</code></dt>
- <dd>This differs from local.sh in that scripts can be dropped into <code>local.d</code> and <code>stack.sh</code> will source them in alphanumeric order. <em>Commit <a href="https://review.openstack.org/19367">19367</a> merged 11 Jan 2013</em></dd>
- <dt>Add support for <code>baremetal</code> hypervisor</dt>
- <dd>This is the first of a set of commits that enable baremetal support. <em>Commit <a href="https://review.openstack.org/15941">15941</a> merged 28 Dec 2012</em></dd>
- <dt>Add support for OpenSuSE 12.2</dt>
- <dd>This is actually just the commit to remove the need for FORCE=yes; OpenSuSE support has been coming along for a while. <em>Commit <a href="https://review.openstack.org/18479">18479</a> merged 27 Dec 2012</em></dd>
- <dt>Save selected environment variables from <code>stack.sh</code> for later use</dt>
- <dd>Write a set of environment variables to <code>.stackenv</code> so they can be quickly used by other scripts. These are mostly the variables that are derived and not statically set. <code>.stackenv</code> is overwritten on every <code>stack.sh</code> run. <em>Commit <a href="https://review.openstack.org/18094">18094</a> merged 19 Dec 2012</em></dd>
- <dt>Enable Tempest by default</dt>
- <dd>Tempest is now downloaded and configured by default in DevStack. This is to encourage more developers to use it as part of their workflow. Tempest configuration is now handled in <code>lib/tempest</code>; <code>tools/configure_tempest.sh</code> has been removed. <em>Commit <a href="https://review.openstack.org/17808">17808</a> merged 12 Dec 2012</em></dd>
-
- <dt>Add PostgreSQL support</dt>
- <dd>Adds an abstraction layer to database configuration and adds support for PostgreSQL under that. MySQL is still the default. To use it, add <code>use_database postgresql</code> to <code>localrc</code>. <em>Commit <a href="https://review.openstack.org/15224">15224</a> merged 05 Nov 2012</em></dd>
- <dt>Add PKI token configuration support</dt>
- <dd>Adds the configuration variable KEYSTONE_TOKEN_FORMAT to select the <code>PKI</code> or <code>UUID</code> token format. The default is <code>PKI</code>. <em>Commit <a href="https://review.openstack.org/14895">14895</a> merged 29 Oct 2012</em></dd>
- <dt>Add Ubuntu Raring Ringtail support</dt>
- <dd>Adds raring to the list of supported Ubuntu releases. <em>Commit <a href="https://review.openstack.org/14692">14692</a> merged 24 Oct 2012</em></dd>
- <dt>Add support for Quantum Ryu plugin</dt>
- <dd>The Ryu plugin lets Quantum link Open vSwitch and the Ryu OpenFlow controller. <em>Commit <a href="https://review.openstack.org/10117">10117</a> merged 20 Oct 2012</em></dd>
- <dt>Configure and launch HEAT API</dt>
- <dd>Creates a new endpoint using service type 'orchestration'. <em>Commit <a href="https://review.openstack.org/14195">14195</a> merged 10 Oct 2012</em></dd>
- <dt>Fix upload image handling</dt>
- <dd>Detects qcow, raw, vdi and vmdk image formats and sets Glance's disk format accordingly. <em>Commit <a href="https://review.openstack.org/13044">13044</a> merged 24 Sep 2012</em></dd>
-
- <dt>Add non-verbose output mode</dt>
- <dd>Set <code>VERBOSE=False</code> in <code>localrc</code> and see only periodic status messages on the screen. The detailed output continues to be written to the log file if <code>LOGFILE</code> is set. <em>Commit <a href="https://review.openstack.org/12996/">12996</a> merged 17 Sep 2012</em></dd>
-
- <dt>Move data directories out of source repos</dt>
- <dd>Data for Glance (images and cache) and Nova (instances, all state info) have historically been in the source repo. They have been moved to <code>$DEST/data/&lt;project&gt;</code> by default. <em>Commit <a href="https://review.openstack.org/12989/">12989</a> merged 14 Sep 2012</em></dd>
-
- <dt>Change default Keystone backend to SQL</dt>
- <dd>Keystone now uses the SQL backend by default enabling the use of the CRUD API; now <code>keystone service-create ...</code> works. Set <code>KEYSTONE_CATALOG_BACKEND=template</code> to maintain the previous behaviour. <em>Commit <a href="https://review.openstack.org/12746/">12746</a> merged 12 Sep 2012</em></dd>
-
- <dt>Set <code>FLAT_INTERFACE</code> for local-only use</dt>
- <dd>Allow <code>FLAT_INTERFACE</code> to be defined as <code>""</code> to prevent the <code>HOST_IP</code> from being moved to <code>br100</code>. <em>Commit <a href="https://review.openstack.org/12671/">12671</a> merged 09 Sep 2012</em></dd>
-
- <dt>Add support for Quantum L3 agent</dt>
- <dd>Only available with OpenVSwitch plugin. <em>Commit <a href="https://review.openstack.org/11380/">11380</a> merged 08 Sep 2012</em></dd>
-
- <dt>Configure Glance caching</dt>
- <dd>Configure Glance caching and cache management. <em>Commit <a href="https://review.openstack.org/12207/">12207</a> merged 07 Sep 2012</em></dd>
-
- <dt>Add ZeroMQ RPC backend</dt>
- <dd>Support ZeroMQ in addition to RabbitMQ and Qpid, though only one may be used at a time. <em>Commit <a href="https://review.openstack.org/9278/">9278</a> merged 01 Sep 2012</em></dd>
-
- <dt>Add Heat support</dt>
- <dd>Support Heat via the standard <code>ENABLED_SERVICES</code> configuration. <em>Commit <a href="https://review.openstack.org/11266/">11266</a> merged 28 Aug 2012</em></dd>
-
- <dt>Ceilometer is now supported.</dt>
- <dd>There is a description of how to get started with it at <a href="https://lists.launchpad.net/openstack/msg15940.html">https://lists.launchpad.net/openstack/msg15940.html</a>. <em>Commit <a href="https://review.openstack.org/10363">10363</a> merged 17 Aug 2012</em></dd>
-
- <dt>Colored logs in cinder</dt>
- <dd>Chmouel has brought Vishy's colored log file patch to Cinder. <em>Commit <a href="https://review.openstack.org/10769">10769</a> merged 16 Aug 2012</em></dd>
-
- <dt>Keystone auth middleware from Swift</dt>
- <dd>Swift now uses <code>keystoneauth</code> by default. <em>Commit <a href="https://review.openstack.org/10876">10876</a> merged 16 Aug 2012</em></dd>
-
- <dt>Add support for <code>NO_PROXY</code></dt>
- <dd>Added support for the standard no_proxy environment variable. <em>Commit <a href="https://review.openstack.org/10264">10264</a> merged 10 Aug 2012</em></dd>
-
- <dt>Use default route to find <code>HOST_IP</code></dt>
- <dd>This changed the logic for hunting down the value of <code>HOST_IP</code> to look on the interface that has the default route. <em>Commit <a href="https://review.openstack.org/9291">9291</a> merged 10 Aug 2012</em></dd>
-
- <dt>Enable testing of OpenVZ guests</dt>
- <dd>Allows using OpenVZ virtualization layer. <em>Commit <a href="https://review.openstack.org/10317">10317</a> merged 07 Aug 2012</em></dd>
-
- <dt>Cinder is the default volume service</dt>
- <dd>DevStack is now configured to use Cinder as its volume service by default. <em>Commit <a href="https://review.openstack.org/9662">9662</a> merged 27 Jul 2012</em></dd>
-
- <dt>Increase size of default volume backing file</dt>
- <dd>The volume backing file is now 5Gb by default. It can still be set using <code>VOLUME_BACKING_FILE_SIZE</code> in <code>localrc</code>. <em>Commit <a href="https://review.openstack.org/9837">9837</a> merged 19 Jul 2012</em></dd>
-
- <dt>Configure pip cache location</dt>
- <dd>Set <code>PIP_DOWNLOAD_CACHE</code> to the location of a root-writable directory to cache downloaded packages. <em>Commit <a href="https://review.openstack.org/9766">9766</a> merged 16 Jul 2012</em></dd>
-
- <dt>Add functions to manipulate <code>ENABLED_SERVICES</code></dt>
- <dd>Now <code>ENABLED_SERVICES</code> can be edited in <code>localrc</code> by using <code>enable_service()</code> and <code>disable_service()</code>. <em>Commit <a href="https://review.openstack.org/9407">9407</a> merged 13 Jul 2012</em></dd>
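A minimal <code>localrc</code> sketch using these helpers (the service names are illustrative):

```ini
; localrc fragment (sketch): edit ENABLED_SERVICES via the helpers
; rather than rewriting the whole variable
disable_service n-net
enable_service q-svc q-agt
```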
-
- <dt>Change default Nova virt driver configuration</dt>
- <dd>Change the Nova configuration to use <code>compute_driver</code> rather than <code>connection_type</code>. <em>Commit <a href="https://review.openstack.org/9635">9635</a> merged 13 Jul 2012</em></dd>
-
- <dt>Add support for Keystone PKI</dt>
- <dd>Initializes Keystone's PKI configuration to support delegation and scaling. <em>Commit <a href="https://review.openstack.org/9240">9240</a> merged 13 Jul 2012</em></dd>
-
- <dt>Disable Swift S3 support by default</dt>
- <dd>Swift's S3 API support can be enabled by adding <code>swift3</code> to <code>ENABLED_SERVICES</code>. <em>Commit <a href="https://review.openstack.org/9346">9346</a> merged 12 Jul 2012</em></dd>
-
- <dt>Set libvirt CPU mode</dt>
- <dd>Force <code>libvirt_cpu_mode=none</code> to work around some nested virtualization issues. <em>Commit <a href="https://review.openstack.org/9718">9718</a> merged 12 Jul 2012</em></dd>
-
- <dt>Support Nova rootwrap</dt>
- <dd>Add support for Nova rootwrap configuration. <em>Commits <a href="https://review.openstack.org/8842/">8842</a> merged 25 Jun 2012 and <a href="https://review.openstack.org/8750/">8750</a> merged 20 Jun 2012</em></dd>
-
- <dt>Add Cinder support</dt>
- <dd>Cinder can now be enabled by adding <code>c-api,c-sch,c-vol</code> to <code>ENABLED_SERVICES</code>. This also changed a couple of defaults: <code>VOLUME_GROUP</code> is now <code>stack-volumes</code> and <code>VOLUME_BACKING_FILE</code> is now <code>${DEST}/data/${VOLUME_GROUP}-backing-file</code>. <em>Commit <a href="https://review.openstack.org/7042">7042</a> merged 20 Jun 2012</em></dd>
-
- <dt>Set default image for exercises</dt>
- <dd>Set <code>DEFAULT_IMAGE_NAME</code> in <code>stackrc</code> to the cirros images downloaded. This avoids the ambiguous search for an 'ami'. <em>Commit <a href="https://review.openstack.org/7910">7910</a> merged 14 Jun 2012</em></dd>
-
- <dt>Use default Swift config files</dt>
- <dd>Use the configuration files shipped with Swift source rather than carrying files in the DevStack sources. <em>Commit <a href="https://review.openstack.org/8223">8223</a> merged 14 Jun 2012</em></dd>
-
- <dt>Install Swift client</dt>
- <dd>Install the new Swift client when Swift is enabled. <em>Commit <a href="https://review.openstack.org/7663">7663</a> merged 11 Jun 2012</em></dd>
-
- <dt>Use pip to install Python dependencies</dt>
- <dd>Use the dependencies in the <code>*.egg-info/requires.txt</code> and <code>*.egg-info/dependency_links.txt</code> to prevent <code>easy_install</code> from resolving the dependencies. <em>Commit <a href="https://review.openstack.org/8289">8289</a> merged 07 Jun 2012</em></dd>
-
- <dt>Remove support for DevStack pip dependencies</dt>
- <dd>Remove DevStack Python dependency files <code>files/pips/*</code> and handle Python dependencies through <code>tools/pip-requires</code>. <em>Commit <a href="https://review.openstack.org/8263">8263</a> merged 07 Jun 2012</em></dd>
-
- <dt>Update XenServer support</dt>
- <dd>Updates to work with <code>xcp-xapi</code> package on Ubuntu 12.04 and fix a number of other minor install problems. <em>Commits <a href="https://review.openstack.org/7673/">7673</a> and <a href="https://review.openstack.org/7449/">7449</a> merged 01 Jun 2012</em></dd>
-
- <dt>Enable Quantum multi-node</dt>
- <dd>Enable running Quantum agents on multiple nodes. <em>Commit <a href="https://review.openstack.org/7001">7001</a> merged 26 May 2012</em></dd>
-
- <dt>Reinitialize Swift data store</dt>
- <dd>Create a new XFS filesystem on the Swift data store on every <code>stack.sh</code> run. <em>Commit <a href="https://review.openstack.org/7554">7554</a> merged 22 May 2012</em></dd>
-
- <dt>Add support for Qpid</dt>
- <dd>Configure Qpid RPC by replacing <code>rabbit</code> with <code>qpid</code> in <code>ENABLED_SERVICES</code>. <em>Commit <a href="https://review.openstack.org/6501">6501</a> merged 17 May 2012</em></dd>
-
- <dt>Glance uses Swift if enabled</dt>
- <dd>If Swift is enabled Glance will store images there by default. <em>Commit <a href="https://review.openstack.org/7277">7277</a> merged 15 May 2012</em></dd>
-
- <dt>Add Quantum linuxbridge support</dt>
- <dd>Support using linuxbridge in Quantum. <em>Commit <a href="https://review.openstack.org/7300">7300</a> merged 10 May 2012</em></dd>
-
- <dt>Change Nova volume name template</dt>
- <dd>Change Nova's <code>volume_name_template</code> to <code>${VOLUME_NAME_PREFIX}%s</code>. <em>Commit <a href="https://review.openstack.org/7004">7004</a> merged 02 May 2012</em></dd>
-
- <dt>Change MySQL engine default</dt>
- <dd>Use InnoDB engine in MySQL by default. <em>Commit <a href="https://review.openstack.org/6185">6185</a> merged 30 Apr 2012</em></dd>
-
- <dt>Add Glance client</dt>
- <dd>Add the new <code>python-glanceclient</code> to override the client in the Glance repo. <em>Commit <a href="https://review.openstack.org/6533">6533</a> merged 26 Apr 2012</em></dd>
- </dl>
+ <!-- Begin git log %GIT_LOG% -->
+ <!-- End git log -->
+ </ul>
</div>
</section>
diff --git a/docs/source/configuration.html b/docs/source/configuration.html
index c26aee4..fbcead7 100644
--- a/docs/source/configuration.html
+++ b/docs/source/configuration.html
@@ -58,7 +58,7 @@
<h3>local.conf</h3>
<p>The new configuration file is <code>local.conf</code> and resides in the root DevStack directory like the old <code>localrc</code> file. It is a modified INI format file that introduces a meta-section header to carry additional information regarding the configuration files to be changed.</p>
- <p>The new header is similar to a normal INI section header but with two '[[ ]]' chars and two internal fields separated by a pipe ('|'):</p>
+ <p>The new header is similar to a normal INI section header but with double brackets (<code>[[ ... ]]</code>) and two internal fields separated by a pipe (<code>|</code>):</p>
<pre>[[ <phase> | <config-file-name> ]]
</pre>
@@ -67,6 +67,8 @@
<p>The defined phases are:</p>
<ul>
<li><strong>local</strong> - extracts <code>localrc</code> from <code>local.conf</code> before <code>stackrc</code> is sourced</li>
+ <li><strong>pre-install</strong> - runs after the system packages are installed but before any of the source repositories are installed</li>
+ <li><strong>install</strong> - runs immediately after the repo installations are complete</li>
<li><strong>post-config</strong> - runs after the layer 2 services are configured and before they are started</li>
<li><strong>extra</strong> - runs after services are started and before any files in <code>extra.d</code> are executed
</ul>
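As a sketch, a <code>local.conf</code> stanza targeting the post-config phase might look like the following (assuming <code>$NOVA_CONF</code> expands to the Nova configuration file path that <code>stack.sh</code> sets):

```ini
; local.conf fragment (sketch): applied after the layer 2 services are
; configured but before they are started
[[post-config|$NOVA_CONF]]
[DEFAULT]
api_rate_limit = False
```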
@@ -96,7 +98,7 @@
<pre>[[post-config|/$Q_PLUGIN_CONF_FILE]]
</pre>
- <p>The existing ``EXTRAS_OPTS`` and similar variables are now deprecated. If used a warning will be printed at the end of the <code>stack.sh</code> run.</p>
+ <p>Also note that the <code>localrc</code> section is sourced as a shell script fragment and <strong>MUST</strong> conform to the shell requirements, specifically no whitespace around <code>=</code> (equals).</p>
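For illustration, a minimal <code>localrc</code> meta-section might look like this (the variable names shown are common DevStack settings used as examples); because the section is sourced as shell, assignments have no whitespace around <code>=</code>:

```ini
; local.conf fragment (sketch): the localrc phase is sourced as shell
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
```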
<a id="minimal"></a>
<h3>Minimal Configuration</h3>
@@ -205,14 +207,6 @@
<h3>Examples</h3>
<ul>
- <li>Convert EXTRA_OPTS from (<code>localrc</code>):
-<pre>EXTRA_OPTS=api_rate_limit=False
-</pre>
- to (<code>local.conf</code>):
-<pre>[[post-config|$NOVA_CONF]]
-[DEFAULT]
-api_rate_limit = False
-</pre></li>
<li>Eliminate a Cinder pass-through (<code>CINDER_PERIODIC_INTERVAL</code>):
<pre>[[post-config|$CINDER_CONF]]
[DEFAULT]
diff --git a/docs/source/contributing.html b/docs/source/contributing.html
index 8dbd179..f3d4b5a 100644
--- a/docs/source/contributing.html
+++ b/docs/source/contributing.html
@@ -59,20 +59,21 @@
<br /><strong>HACKING.rst</strong>
<p>Like most OpenStack projects, DevStack includes a <code>HACKING.rst</code> file that describes the layout, style and conventions of the project. Because <code>HACKING.rst</code> is in the main DevStack repo it is considered authoritative. Much of the content on this page is taken from there.</p>
- <br /><strong>bash8 Formatting</strong>
- <p>Around the time of the OpenStack Havana release we added a tool to do style checking in DevStack similar to what pep8/flake8 do for Python projects. It is still _very_ simplistic, focusing mostly on stray whitespace to help prevent -1 on reviews that are otherwise acceptable. Oddly enough it is called <code>bash8</code>. It will be expanded to enforce some of the documentation rules in comments that are used in formatting the script pages for devstack.org and possibly even simple code formatting. Run it on the entire project with <code>./run_tests.sh</code>.</p>
+ <br /><strong>bashate Formatting</strong>
+ <p>Around the time of the OpenStack Havana release we added a tool to do style checking in DevStack similar to what pep8/flake8 do for Python projects. It is still _very_ simplistic, focusing mostly on stray whitespace to help prevent -1 on reviews that are otherwise acceptable. Oddly enough it is called <code>bashate</code>. It will be expanded to enforce some of the documentation rules in comments that are used in formatting the script pages for devstack.org and possibly even simple code formatting. Run it on the entire project with <code>./run_tests.sh</code>.</p>
<h3>Code</h3>
<br /><strong>Repo Layout</strong>
<p>The DevStack repo generally keeps all of the primary scripts at the root level.</p>
- <p><code>exercises</code> - contains the test scripts used to validate and demonstrate some OpenStack functions. These scripts know how to exit early or skip services that are not enabled.</p>
- <p><code>extras.d</code> - contains the dispatch scripts called by the hooks in <code>stack.sh</code>, <code>unstack.sh</code> and <code>clean.sh</code>. See <a href="plugins.html">the plugins docs</a> for more information.</p>
- <p><code>files</code> - contains a variety of otherwise lost files used in configuring and operating DevStack. This includes templates for configuration files and the system dependency information. This is also where image files are downloaded and expanded if necessary.</p>
- <p><code>lib</code> - contains the sub-scripts specific to each project. This is where the work of managing a project's services is located. Each top-level project (Keystone, Nova, etc) has a file here. Additionally there are some for system services and project plugins.</p>
- <p><code>samples</code> - contains a sample of the local files not included in the DevStack repo.</p>
+ <p><code>docs</code> - Contains the source for this website. It is built using <code>tools/build_docs.sh</code>.</p>
+ <p><code>exercises</code> - Contains the test scripts used to validate and demonstrate some OpenStack functions. These scripts know how to exit early or skip services that are not enabled.</p>
+ <p><code>extras.d</code> - Contains the dispatch scripts called by the hooks in <code>stack.sh</code>, <code>unstack.sh</code> and <code>clean.sh</code>. See <a href="plugins.html">the plugins docs</a> for more information.</p>
+ <p><code>files</code> - Contains a variety of otherwise lost files used in configuring and operating DevStack. This includes templates for configuration files and the system dependency information. This is also where image files are downloaded and expanded if necessary.</p>
+ <p><code>lib</code> - Contains the sub-scripts specific to each project. This is where the work of managing a project's services is located. Each top-level project (Keystone, Nova, etc) has a file here. Additionally there are some for system services and project plugins.</p>
+ <p><code>samples</code> - Contains a sample of the local files not included in the DevStack repo.</p>
<p><code>tests</code> - the DevStack test suite is rather sparse, mostly consisting of test of specific fragile functions in the <code>functions</code> file.</p>
- <p><code>tools</code> - contains a collection of stand-alone scripts, some of which have aged a bit (does anyone still do pamdisk installs?). While these may reference the top-level DevStack configuration they can generally be run alone. There are also some sub-directories to support specific environments such as XenServer and Docker.</p>
+ <p><code>tools</code> - Contains a collection of stand-alone scripts, some of which have aged a bit (does anyone still do ramdisk installs?). While these may reference the top-level DevStack configuration they can generally be run alone. There are also some sub-directories to support specific environments such as XenServer.</p>
diff --git a/docs/source/faq.html b/docs/source/faq.html
index 4867eb4..bfac1dc 100644
--- a/docs/source/faq.html
+++ b/docs/source/faq.html
@@ -18,7 +18,7 @@
body { padding-top: 60px; }
dd { padding: 10px; }
</style>
-
+
<!-- Le javascripts -->
<script src="assets/js/jquery-1.7.1.min.js" type="text/javascript" charset="utf-8"></script>
<script src="assets/js/bootstrap.js" type="text/javascript" charset="utf-8"></script>
@@ -42,7 +42,7 @@
</div>
<div class="container" id="home">
-
+
<section id="faq" class="span12">
<div class='row pull-left'>
@@ -68,16 +68,16 @@
<dt>Q: Why a shell script, why not chef/puppet/...</dt>
<dd>A: The script is meant to be read by humans (as well as run by computers); it is the primary documentation after all. Using a recipe system requires everyone to agree on and understand chef or puppet.</dd>
-
+
<dt>Q: Why not use Crowbar?</dt>
<dd>A: DevStack is optimized for documentation & developers. As some of us use <a href="https://github.com/dellcloudedge/crowbar">Crowbar</a> for production deployments, we hope developers documenting how they setup systems for new features supports projects like Crowbar.</dd>
-
+
<dt>Q: I'd like to help!</dt>
- <dd>A: That isn't a question, but please do! The source for DevStack is <a href="http://github.com/openstack-dev/devstack">github</a> and bug reports go to <a href="http://bugs.launchpad.net/devstack/">LaunchPad</a>. Contributions follow the usual process as described in the <a href="http://wiki.openstack.org/HowToContribute">OpenStack wiki</a> even though DevStack is not an official OpenStack project. This site is housed in the CloudBuilder's <a href="http://github.com/cloudbuilders/devstack">github</a> in the gh-pages branch.</dd>
-
+ <dd>A: That isn't a question, but please do! The source for DevStack is <a href="http://github.com/openstack-dev/devstack">github</a> and bug reports go to <a href="http://bugs.launchpad.net/devstack/">LaunchPad</a>. Contributions follow the usual process as described in the <a href="http://wiki.openstack.org/HowToContribute">OpenStack wiki</a>. DevStack is not a core project, but as a gating project it is an official OpenStack project. This site is housed in the CloudBuilder's <a href="http://github.com/cloudbuilders/devstack">github</a> in the gh-pages branch.</dd>
+
<dt>Q: Why not use packages?</dt>
<dd>A: Unlike packages, DevStack leaves your cloud ready to develop - checkouts of the code and services running in screen. However, many people are doing the hard work of packaging and recipes for production deployments. We hope this script serves as a way to communicate configuration changes between developers and packagers.</dd>
-
+
<dt>Q: Why isn't $MY_FAVORITE_DISTRO supported?</dt>
<dd>A: DevStack is meant for developers and those who want to see how OpenStack really works. DevStack is known to run on the distro/release combinations listed in <code>README.md</code>. DevStack is only supported on releases other than those documented in <code>README.md</code> on a best-effort basis.</dd>
@@ -96,7 +96,7 @@
<dl class='pull-left'>
<dt>Q: Can DevStack handle a multi-node installation?</dt>
<dd>A: Indirectly, yes. You run DevStack on each node with the appropriate configuration in <code>local.conf</code>. The primary considerations are turning off the services not required on the secondary nodes, making sure the passwords match and setting the various API URLs to the right place.</dd>
-
+
<dt>Q: How can I document the environment that DevStack is using?</dt>
<dd>A: DevStack includes a script (<code>tools/info.sh</code>) that gathers the versions of the relevant installed apt packages, pip packages and git repos. This is a good way to verify what Python modules are installed.</dd>
diff --git a/docs/source/guides/multinode-lab.html b/docs/source/guides/multinode-lab.html
index 28a6585..2e52379 100644
--- a/docs/source/guides/multinode-lab.html
+++ b/docs/source/guides/multinode-lab.html
@@ -54,7 +54,7 @@
</div>
<h3>Minimal Install</h3>
- <p>You need to have a fresh install of Linux on all of your nodes. You can download the <a href="https://help.ubuntu.com/community/Installation/MinimalCD">Minimal CD</a> for Ubuntu 12.04 (only 27MB) since DevStack will download & install all the additional dependencies. The netinstall ISO is available for <a href="http://mirrors.kernel.org/fedora/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-netinst.iso">Fedora</a> and <a href="http://mirrors.kernel.org/centos/6.5/isos/x86_64/CentOS-6.5-x86_64-netinstall.iso">CentOS/RHEL</a>.</p>
+ <p>You need to have a system with a fresh install of Linux. You can download the <a href="https://help.ubuntu.com/community/Installation/MinimalCD">Minimal CD</a> for Ubuntu releases since DevStack will download & install all the additional dependencies. The netinstall ISO is available for <a href="http://mirrors.kernel.org/fedora/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-netinst.iso">Fedora</a> and <a href="http://mirrors.kernel.org/centos/6.5/isos/x86_64/CentOS-6.5-x86_64-netinstall.iso">CentOS/RHEL</a>.</p>
<p>Install a couple of packages to bootstrap configuration:</p>
<pre>apt-get install -y git sudo || yum install -y git sudo</pre>
diff --git a/docs/source/guides/single-machine.html b/docs/source/guides/single-machine.html
index 2280793..ca9cafa 100644
--- a/docs/source/guides/single-machine.html
+++ b/docs/source/guides/single-machine.html
@@ -53,7 +53,7 @@
</div>
<h3>Minimal Install</h3>
- <p>You need to have a system with a fresh install of Linux. You can download the <a href="https://help.ubuntu.com/community/Installation/MinimalCD">Minimal CD</a> for Ubuntu 12.04 (only 27MB) since DevStack will download & install all the additional dependencies. The netinstall ISO is available for <a href="http://mirrors.kernel.org/fedora/releases/18/Fedora/x86_64/iso/Fedora-18-x86_64-netinst.iso">Fedora</a> and <a href="http://mirrors.kernel.org/centos/6.4/isos/x86_64/CentOS-6.4-x86_64-netinstall.iso">CentOS/RHEL</a>. You may be tempted to use a desktop distro on a laptop, it will probably work but you may need to tell Network Manager to keep its fingers off the interface(s) that OpenStack uses for bridging.</p>
+ <p>You need to have a system with a fresh install of Linux. You can download the <a href="https://help.ubuntu.com/community/Installation/MinimalCD">Minimal CD</a> for Ubuntu releases since DevStack will download & install all the additional dependencies. The netinstall ISO is available for <a href="http://mirrors.kernel.org/fedora/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-netinst.iso">Fedora</a> and <a href="http://mirrors.kernel.org/centos/6.5/isos/x86_64/CentOS-6.5-x86_64-netinstall.iso">CentOS/RHEL</a>. You may be tempted to use a desktop distro on a laptop, it will probably work but you may need to tell Network Manager to keep its fingers off the interface(s) that OpenStack uses for bridging.</p>
<h3>Network Configuration</h3>
<p>Determine the network configuration on the interface used to integrate your
diff --git a/docs/source/index.html b/docs/source/index.html
index 71c8c98..1a31df1 100644
--- a/docs/source/index.html
+++ b/docs/source/index.html
@@ -76,7 +76,7 @@
<ol>
<li value="0">
<h3>Select a Linux Distribution</h3>
- <p>Only Ubuntu 12.04 (Precise), Fedora 20 and CentOS/RHEL 6.5 are documented here. OpenStack also runs and is packaged on other flavors of Linux such as OpenSUSE and Debian.</p>
+ <p>Only Ubuntu 14.04 (Trusty), Fedora 20 and CentOS/RHEL 6.5 are documented here. OpenStack also runs and is packaged on other flavors of Linux such as OpenSUSE and Debian.</p>
</li>
<li>
<h3>Install Selected OS</h3>
@@ -89,7 +89,7 @@
</li>
<li>
<h3>Configure</h3>
- <p>While optional, we recommend a <a href="configuration.html">minimal configuration</a> be set up as you may not want our default values for everything.</p>
+ <p>We recommend at least a <a href="configuration.html">minimal configuration</a> be set up.</p>
</li>
<li>
<h3>Start the install</h3>
@@ -231,6 +231,10 @@
<td><a href="functions.html" class="btn btn-small btn-primary table-action">Read »</a></td>
</tr>
<tr>
+ <td>functions-common</td>
+ <td><a href="functions-common.html" class="btn btn-small btn-primary table-action">Read »</a></td>
+ </tr>
+ <tr>
<td>lib/apache</td>
<td><a href="lib/apache.html" class="btn btn-small btn-primary table-action">Read »</a></td>
</tr>
@@ -303,12 +307,12 @@
<td><a href="lib/rpc_backend.html" class="btn btn-small btn-primary table-action">Read »</a></td>
</tr>
<tr>
- <td>lib/savanna</td>
- <td><a href="lib/savanna.html" class="btn btn-small btn-primary table-action">Read »</a></td>
+ <td>lib/sahara</td>
+ <td><a href="lib/sahara.html" class="btn btn-small btn-primary table-action">Read »</a></td>
</tr>
<tr>
- <td>lib/savanna-dashboard</td>
- <td><a href="lib/savanna-dashboard.html" class="btn btn-small btn-primary table-action">Read »</a></td>
+ <td>lib/savanna</td>
+ <td><a href="lib/savanna.html" class="btn btn-small btn-primary table-action">Read »</a></td>
</tr>
<tr>
<td>lib/stackforge</td>
@@ -343,49 +347,34 @@
<td><a href="run_tests.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
</tr>
<tr>
+ <td>extras.d/50-ironic.sh</td>
+ <td><a href="extras.d/50-ironic.html" class="btn btn-small btn-primary table-action">Read »</a></td>
+ </tr>
+ <tr>
<td>extras.d/70-marconi.sh</td>
<td><a href="extras.d/70-marconi.html" class="btn btn-small btn-primary table-action">Read »</a></td>
</tr>
<tr>
+ <td>extras.d/70-sahara.sh</td>
+ <td><a href="extras.d/70-sahara.html" class="btn btn-small btn-primary table-action">Read »</a></td>
+ </tr>
+ <tr>
<td>extras.d/70-savanna.sh</td>
<td><a href="extras.d/70-savanna.html" class="btn btn-small btn-primary table-action">Read »</a></td>
</tr>
<tr>
+ <td>extras.d/70-trove.sh</td>
+ <td><a href="extras.d/70-trove.html" class="btn btn-small btn-primary table-action">Read »</a></td>
+ </tr>
+ <tr>
+ <td>extras.d/80-opendaylight.sh</td>
+ <td><a href="extras.d/80-opendaylight.html" class="btn btn-small btn-primary table-action">Read »</a></td>
+ </tr>
+ <tr>
<td>extras.d/80-tempest.sh</td>
<td><a href="extras.d/80-tempest.html" class="btn btn-small btn-primary table-action">Read »</a></td>
</tr>
- <tr>
- <td>tools/info.sh</td>
- <td><a href="tools/info.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
- </tr>
- <tr>
- <td>tools/build_docs.sh</td>
- <td><a href="tools/build_docs.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
- </tr>
- <tr>
- <td>tools/create_userrc.sh</td>
- <td><a href="tools/create_userrc.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
- </tr>
- <tr>
- <td>tools/fixup_stuff.sh</td>
- <td><a href="tools/fixup_stuff.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
- </tr>
- <tr>
- <td>tools/install_prereqs.sh</td>
- <td><a href="tools/install_prereqs.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
- </tr>
- <tr>
- <td>tools/install_pip.sh</td>
- <td><a href="tools/install_pip.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
- </tr>
- <tr>
- <td>tools/upload_image.sh</td>
- <td><a href="tools/upload_image.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
- </tr>
</tbody>
- <tfoot>
- <td colspan="3">40 bash scripts</td>
- </tfoot>
</table>
</div>
@@ -420,9 +409,46 @@
<td><a href="eucarc.html" class="btn btn-small btn-primary table-action">Read »</a></td>
</tr>
</tbody>
- <tfoot>
- <td colspan="3">5 configuration files</td>
- </tfoot>
+ </table>
+
+ <h2>Tools <small>Support scripts</small></h2>
+ <table class='table table-striped table-bordered'>
+ <thead>
+ <tr>
+ <th>Filename</th>
+ <th>Link</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <td>tools/info.sh</td>
+ <td><a href="tools/info.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
+ </tr>
+ <tr>
+ <td>tools/build_docs.sh</td>
+ <td><a href="tools/build_docs.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
+ </tr>
+ <tr>
+ <td>tools/create_userrc.sh</td>
+ <td><a href="tools/create_userrc.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
+ </tr>
+ <tr>
+ <td>tools/fixup_stuff.sh</td>
+ <td><a href="tools/fixup_stuff.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
+ </tr>
+ <tr>
+ <td>tools/install_prereqs.sh</td>
+ <td><a href="tools/install_prereqs.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
+ </tr>
+ <tr>
+ <td>tools/install_pip.sh</td>
+ <td><a href="tools/install_pip.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
+ </tr>
+ <tr>
+ <td>tools/upload_image.sh</td>
+ <td><a href="tools/upload_image.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
+ </tr>
+ </tbody>
</table>
<h2>Samples <small>Generated documentation of DevStack sample files.</small></h2>
@@ -443,9 +469,6 @@
<td><a href="samples/localrc.html" class="btn btn-small btn-success table-action">Read »</a></td>
</tr>
</tbody>
- <tfoot>
- <td colspan="3">2 sample files</td>
- </tfoot>
</table>
<div class='row span5 pull-right'>
@@ -494,10 +517,19 @@
<td>exercises/horizon.sh</td>
<td><a href="exercises/horizon.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
</tr>
+                  <tr>
+                    <td>exercises/marconi.sh</td>
+                    <td><a href="exercises/marconi.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
+                  </tr>
<tr>
<td>exercises/neutron-adv-test.sh</td>
<td><a href="exercises/neutron-adv-test.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
</tr>
+                  <tr>
+                    <td>exercises/sahara.sh</td>
+                    <td><a href="exercises/sahara.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
+                  </tr>
+                  <tr>
+                    <td>exercises/savanna.sh</td>
+                    <td><a href="exercises/savanna.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
+                  </tr>
<tr>
<td>exercises/sec_groups.sh</td>
<td><a href="exercises/sec_groups.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
@@ -506,14 +538,14 @@
<td>exercises/swift.sh</td>
<td><a href="exercises/swift.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
</tr>
+                  <tr>
+                    <td>exercises/trove.sh</td>
+                    <td><a href="exercises/trove.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
+                  </tr>
<tr>
<td>exercises/volumes.sh</td>
<td><a href="exercises/volumes.sh.html" class="btn btn-small btn-primary table-action">Read »</a></td>
</tr>
</tbody>
- <tfoot>
- <td colspan="3">13 exercise scripts</td>
- </tfoot>
</table>
</div>
diff --git a/docs/source/overview.html b/docs/source/overview.html
index c0b6ea2..baee400 100644
--- a/docs/source/overview.html
+++ b/docs/source/overview.html
@@ -47,8 +47,8 @@
<div class='row pull-left'>
<h2>Overview <small>DevStack from a cloud-height view</small></h2>
- <p>DevStack is not and has never been intended to be a general OpenStack installer. It has evolved to support a large number of configuration options and alternative platforms and support services. However, that evolution has grown well beyond what was originally intended and the majority of configuration combinations are rarely, if ever, tested. DevStack was never meant to be everything to everyone and can not continue in that direction.</p>
- <p>Below is a list of what is specifically is supported (read that as "tested and assumed to work") going forward.</p>
+      <p>DevStack has evolved to support a large number of configuration options and alternative platforms and support services. That evolution has grown well beyond what was originally intended and the majority of configuration combinations are rarely, if ever, tested. DevStack is not a general OpenStack installer and was never meant to be everything to everyone.</p>
+      <p>Below is a list of what specifically is supported (read that as "tested") going forward.</p>
<h2>Supported Components</h2>
@@ -93,7 +93,7 @@
</ul>
<h3>Services</h3>
- <p>The default services configured by DevStack are Identity (Keystone), Object Storage (Swift), Image Storage (Glance), Block Storage (Cinder), Compute (Nova), Network (Nova), Dashboard (Horizon)</p>
+ <p>The default services configured by DevStack are Identity (Keystone), Object Storage (Swift), Image Storage (Glance), Block Storage (Cinder), Compute (Nova), Network (Nova), Dashboard (Horizon), Orchestration (Heat)</p>
<p>Additional services not included directly in DevStack can be tied in to <code>stack.sh</code> using the <a href="plugins.html">plugin mechanism</a> to call scripts that perform the configuration and startup of the service.</p>
<h3>Node Configurations</h3>
@@ -103,7 +103,7 @@
</ul>
<h3>Exercises</h3>
- <p>The DevStack exercise scripts have been replaced as integration and gating test with Tempest. They will continue to be maintained as they are valuable as demonstrations of using OpenStack from the command line and for quick operational testing.</p>
+      <p>The DevStack exercise scripts are no longer used for integration and gate testing; that job has transitioned to Tempest. They are still maintained as demonstrations of using OpenStack from the command line and for quick operational testing.</p>
</div>
diff --git a/docs/source/plugins.html b/docs/source/plugins.html
index 85cf8e4..3327128 100644
--- a/docs/source/plugins.html
+++ b/docs/source/plugins.html
@@ -67,7 +67,12 @@
source $TOP_DIR/lib/template
fi
- if [[ "$1" == "stack" && "$2" == "install" ]]; then
+ if [[ "$1" == "stack" && "$2" == "pre-install" ]]; then
+ # Set up system services
+ echo_summary "Configuring system services Template"
+ install_package cowsay
+
+ elif [[ "$1" == "stack" && "$2" == "install" ]]; then
# Perform installation of service source
echo_summary "Installing Template"
install_template
@@ -103,6 +108,7 @@
<li><strong>source</strong> - Called by each script that utilizes <code>extras.d</code> hooks; this replaces directly sourcing the <code>lib/*</code> script.</li>
<li><strong>stack</strong> - Called by <code>stack.sh</code> three times for different phases of its run:
<ul>
+ <li><strong>pre-install</strong> - Called after system (OS) setup is complete and before project source is installed.</li>
<li><strong>install</strong> - Called after the layer 1 and 2 projects source and their dependencies have been installed.</li>
<li><strong>post-config</strong> - Called after the layer 1 and 2 services have been configured. All configuration files for enabled services should exist at this point.</li>
<li><strong>extra</strong> - Called near the end after layer 1 and 2 services have been started. This is the existing hook and has not otherwise changed.</li>
diff --git a/exercises/trove.sh b/exercises/trove.sh
index d48d5fe..053f872 100755
--- a/exercises/trove.sh
+++ b/exercises/trove.sh
@@ -35,8 +35,12 @@
is_service_enabled trove || exit 55
-# can we get a list versions
-curl http://$SERVICE_HOST:8779/ 2>/dev/null | grep -q 'versions' || die $LINENO "Trove API not functioning!"
+# Verify the Trove API is functioning by fetching a datastore id
+DSTORE_ID=$(trove datastore-list | tail -n +4 | head -3 | get_field 1)
+die_if_not_set $LINENO DSTORE_ID "Trove API not functioning!"
+
+DV_ID=$(trove datastore-version-list $DSTORE_ID | tail -n +4 | get_field 1)
+die_if_not_set $LINENO DV_ID "Trove API not functioning!"
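
The guard pattern used here — capture a value, then fail loudly if it is empty — can be sketched with a simplified stand-in for DevStack's `die_if_not_set` (the real helper in `functions-common` kills the script; this stand-in returns non-zero so it is easy to exercise):

```shell
# Simplified stand-in for DevStack's die_if_not_set (the real helper
# exits the script; this one returns non-zero instead).
die_if_not_set() {
    local lineno=$1 varname=$2 msg=$3
    if [ -z "${!varname}" ]; then          # indirect expansion: the value of $varname
        echo "[ERROR] line $lineno: $msg" >&2
        return 1
    fi
}

# Hypothetical id standing in for parsed `trove datastore-list` output.
DSTORE_ID="abc123"
die_if_not_set $LINENO DSTORE_ID "Trove API not functioning!"
```

Checking the variable by name (rather than passing its value) lets the error message report which variable was unset.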
set +o xtrace
echo "*********************************************************************"
diff --git a/extras.d/60-ceph.sh b/extras.d/60-ceph.sh
new file mode 100644
index 0000000..5fb34ea
--- /dev/null
+++ b/extras.d/60-ceph.sh
@@ -0,0 +1,44 @@
+# ceph.sh - DevStack extras script to install Ceph
+
+if is_service_enabled ceph; then
+ if [[ "$1" == "source" ]]; then
+ # Initial source
+ source $TOP_DIR/lib/ceph
+ elif [[ "$1" == "stack" && "$2" == "pre-install" ]]; then
+ echo_summary "Installing Ceph"
+ install_ceph
+ echo_summary "Configuring Ceph"
+ configure_ceph
+ # NOTE (leseb): Do everything here because we need to have Ceph started before the main
+        # OpenStack components. The Ceph OSD must start here, otherwise we can't upload any images.
+ echo_summary "Initializing Ceph"
+ init_ceph
+ start_ceph
+ elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
+ if is_service_enabled glance; then
+ echo_summary "Configuring Glance for Ceph"
+ configure_ceph_glance
+ fi
+ if is_service_enabled nova; then
+ echo_summary "Configuring Nova for Ceph"
+ configure_ceph_nova
+ fi
+ if is_service_enabled cinder; then
+ echo_summary "Configuring Cinder for Ceph"
+ configure_ceph_cinder
+ # NOTE (leseb): the part below is a requirement from Cinder in order to attach volumes
+ # so we should run the following within the if statement.
+ echo_summary "Configuring libvirt secret"
+ import_libvirt_secret_ceph
+ fi
+ fi
+
+ if [[ "$1" == "unstack" ]]; then
+ stop_ceph
+ cleanup_ceph
+ fi
+
+ if [[ "$1" == "clean" ]]; then
+ cleanup_ceph
+ fi
+fi
diff --git a/extras.d/README.md b/extras.d/README.md
index 1dd17da..7c2e4fe 100644
--- a/extras.d/README.md
+++ b/extras.d/README.md
@@ -22,9 +22,24 @@
stack: called by stack.sh. There are four possible values for
the second arg to distinguish the phase stack.sh is in:
- arg 2: install | post-config | extra | post-extra
+ arg 2: pre-install | install | post-config | extra
unstack: called by unstack.sh
clean: called by clean.sh. Remember, clean.sh also calls unstack.sh
so that work need not be repeated.
+
+The `stack` phase sub-phases are called from `stack.sh` in the following places:
+
+ pre-install - After all system prerequisites have been installed but before any
+ DevStack-specific services are installed (including database and rpc).
+
+ install - After all OpenStack services have been installed and configured
+ but before any OpenStack services have been started. Changes to OpenStack
+ service configurations should be done here.
+
+    post-config - After OpenStack services have been initialized but before
+    they have been started. (This is probably mis-named; think of it as post-init.)
+
+ extra - After everything is started.
+
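The sub-phase dispatch described above can be sketched as a minimal hook script for a hypothetical service `foo`. The dispatch is wrapped in a function here so it is easy to exercise; a real extras.d script (like `extras.d/60-ceph.sh` in this change) runs this logic at top level against `"$1" "$2"`, guarded by `is_service_enabled`:

```shell
# Sketch of the extras.d hook dispatch (hypothetical service "foo";
# the echo lines stand in for install_foo/configure_foo/start_foo).
foo_hook() {
    if [[ "$1" == "source" ]]; then
        echo "source lib/foo"              # replaces sourcing lib/foo directly
    elif [[ "$1" == "stack" && "$2" == "pre-install" ]]; then
        echo "install system prereqs"      # before any DevStack-specific services
    elif [[ "$1" == "stack" && "$2" == "install" ]]; then
        echo "install foo"
    elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
        echo "configure foo"               # config files exist, nothing started yet
    elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
        echo "start foo"                   # everything else is already up
    elif [[ "$1" == "unstack" ]]; then
        echo "stop foo"
    elif [[ "$1" == "clean" ]]; then
        echo "clean foo"                   # clean.sh calls unstack.sh first
    fi
}

foo_hook stack install    # → "install foo"
```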
diff --git a/files/apache-keystone.template b/files/apache-keystone.template
index 919452a..805e7b8 100644
--- a/files/apache-keystone.template
+++ b/files/apache-keystone.template
@@ -20,3 +20,7 @@
LogLevel debug
CustomLog /var/log/%APACHE_NAME%/access.log combined
</VirtualHost>
+
+# Workaround for missing path on RHEL6, see
+# https://bugzilla.redhat.com/show_bug.cgi?id=1121019
+WSGISocketPrefix /var/run/%APACHE_NAME%
diff --git a/files/apts/ceph b/files/apts/ceph
new file mode 100644
index 0000000..69863ab
--- /dev/null
+++ b/files/apts/ceph
@@ -0,0 +1,2 @@
+ceph # NOPRIME
+xfsprogs
diff --git a/files/apts/general b/files/apts/general
index 90529e5..739fc47 100644
--- a/files/apts/general
+++ b/files/apts/general
@@ -20,6 +20,9 @@
tar
python-cmd2 # dist:precise
python-dev
+python-mock # testonly
python2.7
bc
libyaml-dev
+libffi-dev
+libssl-dev # for pyOpenSSL
diff --git a/files/apts/glance b/files/apts/glance
index b5d8c77..15e09aa 100644
--- a/files/apts/glance
+++ b/files/apts/glance
@@ -1,4 +1,3 @@
-libffi-dev
libmysqlclient-dev # testonly
libpq-dev # testonly
libssl-dev # testonly
diff --git a/files/apts/heat b/files/apts/heat
new file mode 100644
index 0000000..1ecbc78
--- /dev/null
+++ b/files/apts/heat
@@ -0,0 +1 @@
+gettext # dist:trusty
diff --git a/files/apts/nova b/files/apts/nova
index 38c99c7..e779849 100644
--- a/files/apts/nova
+++ b/files/apts/nova
@@ -1,5 +1,6 @@
dnsmasq-base
dnsmasq-utils # for dhcp_release
+conntrack
kpartx
parted
iputils-arping
diff --git a/files/apts/swift b/files/apts/swift
index 080ecdb..fd51699 100644
--- a/files/apts/swift
+++ b/files/apts/swift
@@ -1,5 +1,4 @@
curl
-libffi-dev
memcached
python-configobj
python-coverage
diff --git a/files/rpms-suse/ceph b/files/rpms-suse/ceph
new file mode 100644
index 0000000..8d46500
--- /dev/null
+++ b/files/rpms-suse/ceph
@@ -0,0 +1,3 @@
+ceph # NOPRIME
+xfsprogs
+lsb
diff --git a/files/rpms-suse/nova b/files/rpms-suse/nova
index 3e95724..7a1160e 100644
--- a/files/rpms-suse/nova
+++ b/files/rpms-suse/nova
@@ -1,6 +1,7 @@
curl
dnsmasq
dnsmasq-utils # dist:opensuse-12.3,opensuse-13.1
+conntrack-tools
ebtables
gawk
genisoimage # required for config_drive
diff --git a/files/rpms/ceph b/files/rpms/ceph
new file mode 100644
index 0000000..5483735
--- /dev/null
+++ b/files/rpms/ceph
@@ -0,0 +1,3 @@
+ceph # NOPRIME
+xfsprogs
+redhat-lsb-core
diff --git a/files/rpms/general b/files/rpms/general
index a0074dd..74997a8 100644
--- a/files/rpms/general
+++ b/files/rpms/general
@@ -7,6 +7,7 @@
openssh-server
openssl
openssl-devel # to rebuild pyOpenSSL if needed
+libffi-devel
libxml2-devel
libxslt-devel
psmisc
diff --git a/files/rpms/glance b/files/rpms/glance
index fc07fa7..5a7f073 100644
--- a/files/rpms/glance
+++ b/files/rpms/glance
@@ -1,4 +1,3 @@
-libffi-devel
libxml2-devel # testonly
libxslt-devel # testonly
mysql-devel # testonly
diff --git a/files/rpms/neutron b/files/rpms/neutron
index 9fafecb..15ed973 100644
--- a/files/rpms/neutron
+++ b/files/rpms/neutron
@@ -1,4 +1,5 @@
MySQL-python
+dnsmasq # for q-dhcp
dnsmasq-utils # for dhcp_release
ebtables
iptables
diff --git a/files/rpms/nova b/files/rpms/nova
index e05d0d7..6097991 100644
--- a/files/rpms/nova
+++ b/files/rpms/nova
@@ -1,6 +1,8 @@
MySQL-python
curl
+dnsmasq # for nova-network
dnsmasq-utils # for dhcp_release
+conntrack-tools
ebtables
gawk
genisoimage # required for config_drive
diff --git a/files/rpms/swift b/files/rpms/swift
index 938d2c8..9ec4aab 100644
--- a/files/rpms/swift
+++ b/files/rpms/swift
@@ -1,5 +1,4 @@
curl
-libffi-devel
memcached
python-configobj
python-coverage
diff --git a/functions b/functions
index ca8ef80..1fa6346 100644
--- a/functions
+++ b/functions
@@ -55,26 +55,28 @@
local image_url=$1
local token=$2
+ local image image_fname image_name
+
# Create a directory for the downloaded image tarballs.
mkdir -p $FILES/images
- IMAGE_FNAME=`basename "$image_url"`
+ image_fname=`basename "$image_url"`
if [[ $image_url != file* ]]; then
# Downloads the image (uec ami+akistyle), then extracts it.
- if [[ ! -f $FILES/$IMAGE_FNAME || "$(stat -c "%s" $FILES/$IMAGE_FNAME)" = "0" ]]; then
- wget -c $image_url -O $FILES/$IMAGE_FNAME
+ if [[ ! -f $FILES/$image_fname || "$(stat -c "%s" $FILES/$image_fname)" = "0" ]]; then
+ wget -c $image_url -O $FILES/$image_fname
if [[ $? -ne 0 ]]; then
echo "Not found: $image_url"
return
fi
fi
- IMAGE="$FILES/${IMAGE_FNAME}"
+ image="$FILES/${image_fname}"
else
# File based URL (RFC 1738): file://host/path
# Remote files are not considered here.
# *nix: file:///home/user/path/file
# windows: file:///C:/Documents%20and%20Settings/user/path/file
- IMAGE=$(echo $image_url | sed "s/^file:\/\///g")
- if [[ ! -f $IMAGE || "$(stat -c "%s" $IMAGE)" == "0" ]]; then
+ image=$(echo $image_url | sed "s/^file:\/\///g")
+ if [[ ! -f $image || "$(stat -c "%s" $image)" == "0" ]]; then
echo "Not found: $image_url"
return
fi
@@ -82,14 +84,14 @@
# OpenVZ-format images are provided as .tar.gz, but not decompressed prior to loading
if [[ "$image_url" =~ 'openvz' ]]; then
- IMAGE_NAME="${IMAGE_FNAME%.tar.gz}"
- glance --os-auth-token $token --os-image-url http://$GLANCE_HOSTPORT image-create --name "$IMAGE_NAME" --is-public=True --container-format ami --disk-format ami < "${IMAGE}"
+ image_name="${image_fname%.tar.gz}"
+ glance --os-auth-token $token --os-image-url http://$GLANCE_HOSTPORT image-create --name "$image_name" --is-public=True --container-format ami --disk-format ami < "${image}"
return
fi
# vmdk format images
if [[ "$image_url" =~ '.vmdk' ]]; then
- IMAGE_NAME="${IMAGE_FNAME%.vmdk}"
+ image_name="${image_fname%.vmdk}"
# Before we can upload vmdk type images to glance, we need to know it's
# disk type, storage adapter, and networking adapter. These values are
@@ -102,17 +104,17 @@
# If the filename does not follow the above format then the vsphere
# driver will supply default values.
- vmdk_adapter_type=""
- vmdk_disktype=""
- vmdk_net_adapter=""
+ local vmdk_disktype=""
+ local vmdk_net_adapter=""
+ local path_len
# vmdk adapter type
- vmdk_adapter_type="$(head -25 $IMAGE | { grep -a -F -m 1 'ddb.adapterType =' $IMAGE || true; })"
+ local vmdk_adapter_type="$(head -25 $image | { grep -a -F -m 1 'ddb.adapterType =' $image || true; })"
vmdk_adapter_type="${vmdk_adapter_type#*\"}"
vmdk_adapter_type="${vmdk_adapter_type%?}"
# vmdk disk type
- vmdk_create_type="$(head -25 $IMAGE | { grep -a -F -m 1 'createType=' $IMAGE || true; })"
+ local vmdk_create_type="$(head -25 $image | { grep -a -F -m 1 'createType=' $image || true; })"
vmdk_create_type="${vmdk_create_type#*\"}"
vmdk_create_type="${vmdk_create_type%\"*}"
@@ -120,17 +122,16 @@
`"should use a descriptor-data pair."
if [[ "$vmdk_create_type" = "monolithicSparse" ]]; then
vmdk_disktype="sparse"
- elif [[ "$vmdk_create_type" = "monolithicFlat" || \
- "$vmdk_create_type" = "vmfs" ]]; then
+ elif [[ "$vmdk_create_type" = "monolithicFlat" || "$vmdk_create_type" = "vmfs" ]]; then
# Attempt to retrieve the *-flat.vmdk
- flat_fname="$(head -25 $IMAGE | { grep -G 'RW\|RDONLY [0-9]+ FLAT\|VMFS' $IMAGE || true; })"
+ local flat_fname="$(head -25 $image | { grep -G 'RW\|RDONLY [0-9]+ FLAT\|VMFS' $image || true; })"
flat_fname="${flat_fname#*\"}"
flat_fname="${flat_fname%?}"
if [[ -z "$flat_fname" ]]; then
- flat_fname="$IMAGE_NAME-flat.vmdk"
+ flat_fname="$image_name-flat.vmdk"
fi
- path_len=`expr ${#image_url} - ${#IMAGE_FNAME}`
- flat_url="${image_url:0:$path_len}$flat_fname"
+ path_len=`expr ${#image_url} - ${#image_fname}`
+ local flat_url="${image_url:0:$path_len}$flat_fname"
warn $LINENO "$descriptor_data_pair_msg"`
`" Attempt to retrieve the *-flat.vmdk: $flat_url"
if [[ $flat_url != file* ]]; then
@@ -138,29 +139,29 @@
"$(stat -c "%s" $FILES/$flat_fname)" = "0" ]]; then
wget -c $flat_url -O $FILES/$flat_fname
fi
- IMAGE="$FILES/${flat_fname}"
+ image="$FILES/${flat_fname}"
else
- IMAGE=$(echo $flat_url | sed "s/^file:\/\///g")
- if [[ ! -f $IMAGE || "$(stat -c "%s" $IMAGE)" == "0" ]]; then
+ image=$(echo $flat_url | sed "s/^file:\/\///g")
+ if [[ ! -f $image || "$(stat -c "%s" $image)" == "0" ]]; then
echo "Flat disk not found: $flat_url"
return 1
fi
fi
- IMAGE_NAME="${flat_fname}"
+ image_name="${flat_fname}"
vmdk_disktype="preallocated"
elif [[ "$vmdk_create_type" = "streamOptimized" ]]; then
vmdk_disktype="streamOptimized"
elif [[ -z "$vmdk_create_type" ]]; then
# *-flat.vmdk provided: attempt to retrieve the descriptor (*.vmdk)
# to retrieve appropriate metadata
- if [[ ${IMAGE_NAME: -5} != "-flat" ]]; then
+ if [[ ${image_name: -5} != "-flat" ]]; then
warn $LINENO "Expected filename suffix: '-flat'."`
- `" Filename provided: ${IMAGE_NAME}"
+ `" Filename provided: ${image_name}"
else
- descriptor_fname="${IMAGE_NAME:0:${#IMAGE_NAME} - 5}.vmdk"
- path_len=`expr ${#image_url} - ${#IMAGE_FNAME}`
- flat_path="${image_url:0:$path_len}"
- descriptor_url=$flat_path$descriptor_fname
+ descriptor_fname="${image_name:0:${#image_name} - 5}.vmdk"
+ path_len=`expr ${#image_url} - ${#image_fname}`
+ local flat_path="${image_url:0:$path_len}"
+ local descriptor_url=$flat_path$descriptor_fname
warn $LINENO "$descriptor_data_pair_msg"`
`" Attempt to retrieve the descriptor *.vmdk: $descriptor_url"
if [[ $flat_path != file* ]]; then
@@ -189,35 +190,35 @@
# NOTE: For backwards compatibility reasons, colons may be used in place
# of semi-colons for property delimiters but they are not permitted
# characters in NTFS filesystems.
- property_string=`echo "$IMAGE_NAME" | { grep -oP '(?<=-)(?!.*-).*[:;].*[:;].*$' || true; }`
+ property_string=`echo "$image_name" | { grep -oP '(?<=-)(?!.*-).*[:;].*[:;].*$' || true; }`
IFS=':;' read -a props <<< "$property_string"
vmdk_disktype="${props[0]:-$vmdk_disktype}"
vmdk_adapter_type="${props[1]:-$vmdk_adapter_type}"
vmdk_net_adapter="${props[2]:-$vmdk_net_adapter}"
- glance --os-auth-token $token --os-image-url http://$GLANCE_HOSTPORT image-create --name "$IMAGE_NAME" --is-public=True --container-format bare --disk-format vmdk --property vmware_disktype="$vmdk_disktype" --property vmware_adaptertype="$vmdk_adapter_type" --property hw_vif_model="$vmdk_net_adapter" < "${IMAGE}"
+ glance --os-auth-token $token --os-image-url http://$GLANCE_HOSTPORT image-create --name "$image_name" --is-public=True --container-format bare --disk-format vmdk --property vmware_disktype="$vmdk_disktype" --property vmware_adaptertype="$vmdk_adapter_type" --property hw_vif_model="$vmdk_net_adapter" < "${image}"
return
fi
# XenServer-vhd-ovf-format images are provided as .vhd.tgz
# and should not be decompressed prior to loading
if [[ "$image_url" =~ '.vhd.tgz' ]]; then
- IMAGE_NAME="${IMAGE_FNAME%.vhd.tgz}"
- FORCE_VM_MODE=""
- if [[ "$IMAGE_NAME" =~ 'cirros' ]]; then
+ image_name="${image_fname%.vhd.tgz}"
+ local force_vm_mode=""
+ if [[ "$image_name" =~ 'cirros' ]]; then
# Cirros VHD image currently only boots in PV mode.
# Nova defaults to PV for all VHD images, but
# the glance setting is needed for booting
# directly from volume.
- FORCE_VM_MODE="--property vm_mode=xen"
+ force_vm_mode="--property vm_mode=xen"
fi
glance \
--os-auth-token $token \
--os-image-url http://$GLANCE_HOSTPORT \
image-create \
- --name "$IMAGE_NAME" --is-public=True \
+ --name "$image_name" --is-public=True \
--container-format=ovf --disk-format=vhd \
- $FORCE_VM_MODE < "${IMAGE}"
+ $force_vm_mode < "${image}"
return
fi
@@ -225,93 +226,94 @@
# and should not be decompressed prior to loading.
# Setting metadata, so PV mode is used.
if [[ "$image_url" =~ '.xen-raw.tgz' ]]; then
- IMAGE_NAME="${IMAGE_FNAME%.xen-raw.tgz}"
+ image_name="${image_fname%.xen-raw.tgz}"
glance \
--os-auth-token $token \
--os-image-url http://$GLANCE_HOSTPORT \
image-create \
- --name "$IMAGE_NAME" --is-public=True \
+ --name "$image_name" --is-public=True \
--container-format=tgz --disk-format=raw \
- --property vm_mode=xen < "${IMAGE}"
+ --property vm_mode=xen < "${image}"
return
fi
- KERNEL=""
- RAMDISK=""
- DISK_FORMAT=""
- CONTAINER_FORMAT=""
- UNPACK=""
- case "$IMAGE_FNAME" in
+ local kernel=""
+ local ramdisk=""
+ local disk_format=""
+ local container_format=""
+ local unpack=""
+ local img_property=""
+ case "$image_fname" in
*.tar.gz|*.tgz)
# Extract ami and aki files
- [ "${IMAGE_FNAME%.tar.gz}" != "$IMAGE_FNAME" ] &&
- IMAGE_NAME="${IMAGE_FNAME%.tar.gz}" ||
- IMAGE_NAME="${IMAGE_FNAME%.tgz}"
- xdir="$FILES/images/$IMAGE_NAME"
+ [ "${image_fname%.tar.gz}" != "$image_fname" ] &&
+ image_name="${image_fname%.tar.gz}" ||
+ image_name="${image_fname%.tgz}"
+ local xdir="$FILES/images/$image_name"
rm -Rf "$xdir";
mkdir "$xdir"
- tar -zxf $IMAGE -C "$xdir"
- KERNEL=$(for f in "$xdir/"*-vmlinuz* "$xdir/"aki-*/image; do
+ tar -zxf $image -C "$xdir"
+ kernel=$(for f in "$xdir/"*-vmlinuz* "$xdir/"aki-*/image; do
[ -f "$f" ] && echo "$f" && break; done; true)
- RAMDISK=$(for f in "$xdir/"*-initrd* "$xdir/"ari-*/image; do
+ ramdisk=$(for f in "$xdir/"*-initrd* "$xdir/"ari-*/image; do
[ -f "$f" ] && echo "$f" && break; done; true)
- IMAGE=$(for f in "$xdir/"*.img "$xdir/"ami-*/image; do
+ image=$(for f in "$xdir/"*.img "$xdir/"ami-*/image; do
[ -f "$f" ] && echo "$f" && break; done; true)
- if [[ -z "$IMAGE_NAME" ]]; then
- IMAGE_NAME=$(basename "$IMAGE" ".img")
+ if [[ -z "$image_name" ]]; then
+ image_name=$(basename "$image" ".img")
fi
;;
*.img)
- IMAGE_NAME=$(basename "$IMAGE" ".img")
- format=$(qemu-img info ${IMAGE} | awk '/^file format/ { print $3; exit }')
+ image_name=$(basename "$image" ".img")
+ local format=$(qemu-img info ${image} | awk '/^file format/ { print $3; exit }')
if [[ ",qcow2,raw,vdi,vmdk,vpc," =~ ",$format," ]]; then
- DISK_FORMAT=$format
+ disk_format=$format
else
- DISK_FORMAT=raw
+ disk_format=raw
fi
- CONTAINER_FORMAT=bare
+ container_format=bare
;;
*.img.gz)
- IMAGE_NAME=$(basename "$IMAGE" ".img.gz")
- DISK_FORMAT=raw
- CONTAINER_FORMAT=bare
- UNPACK=zcat
+ image_name=$(basename "$image" ".img.gz")
+ disk_format=raw
+ container_format=bare
+ unpack=zcat
;;
*.qcow2)
- IMAGE_NAME=$(basename "$IMAGE" ".qcow2")
- DISK_FORMAT=qcow2
- CONTAINER_FORMAT=bare
+ image_name=$(basename "$image" ".qcow2")
+ disk_format=qcow2
+ container_format=bare
;;
*.iso)
- IMAGE_NAME=$(basename "$IMAGE" ".iso")
- DISK_FORMAT=iso
- CONTAINER_FORMAT=bare
+ image_name=$(basename "$image" ".iso")
+ disk_format=iso
+ container_format=bare
;;
- *) echo "Do not know what to do with $IMAGE_FNAME"; false;;
+ *) echo "Do not know what to do with $image_fname"; false;;
esac
if is_arch "ppc64"; then
- IMG_PROPERTY="--property hw_cdrom_bus=scsi"
+ img_property="--property hw_cdrom_bus=scsi"
fi
- if [ "$CONTAINER_FORMAT" = "bare" ]; then
- if [ "$UNPACK" = "zcat" ]; then
- glance --os-auth-token $token --os-image-url http://$GLANCE_HOSTPORT image-create --name "$IMAGE_NAME" $IMG_PROPERTY --is-public True --container-format=$CONTAINER_FORMAT --disk-format $DISK_FORMAT < <(zcat --force "${IMAGE}")
+ if [ "$container_format" = "bare" ]; then
+ if [ "$unpack" = "zcat" ]; then
+ glance --os-auth-token $token --os-image-url http://$GLANCE_HOSTPORT image-create --name "$image_name" $img_property --is-public True --container-format=$container_format --disk-format $disk_format < <(zcat --force "${image}")
else
- glance --os-auth-token $token --os-image-url http://$GLANCE_HOSTPORT image-create --name "$IMAGE_NAME" $IMG_PROPERTY --is-public True --container-format=$CONTAINER_FORMAT --disk-format $DISK_FORMAT < "${IMAGE}"
+ glance --os-auth-token $token --os-image-url http://$GLANCE_HOSTPORT image-create --name "$image_name" $img_property --is-public True --container-format=$container_format --disk-format $disk_format < "${image}"
fi
else
# Use glance client to add the kernel the root filesystem.
# We parse the results of the first upload to get the glance ID of the
# kernel for use when uploading the root filesystem.
- KERNEL_ID=""; RAMDISK_ID="";
- if [ -n "$KERNEL" ]; then
- KERNEL_ID=$(glance --os-auth-token $token --os-image-url http://$GLANCE_HOSTPORT image-create --name "$IMAGE_NAME-kernel" $IMG_PROPERTY --is-public True --container-format aki --disk-format aki < "$KERNEL" | grep ' id ' | get_field 2)
+ local kernel_id="" ramdisk_id="";
+ if [ -n "$kernel" ]; then
+ kernel_id=$(glance --os-auth-token $token --os-image-url http://$GLANCE_HOSTPORT image-create --name "$image_name-kernel" $img_property --is-public True --container-format aki --disk-format aki < "$kernel" | grep ' id ' | get_field 2)
fi
- if [ -n "$RAMDISK" ]; then
- RAMDISK_ID=$(glance --os-auth-token $token --os-image-url http://$GLANCE_HOSTPORT image-create --name "$IMAGE_NAME-ramdisk" $IMG_PROPERTY --is-public True --container-format ari --disk-format ari < "$RAMDISK" | grep ' id ' | get_field 2)
+ if [ -n "$ramdisk" ]; then
+ ramdisk_id=$(glance --os-auth-token $token --os-image-url http://$GLANCE_HOSTPORT image-create --name "$image_name-ramdisk" $img_property --is-public True --container-format ari --disk-format ari < "$ramdisk" | grep ' id ' | get_field 2)
fi
- glance --os-auth-token $token --os-image-url http://$GLANCE_HOSTPORT image-create --name "${IMAGE_NAME%.img}" $IMG_PROPERTY --is-public True --container-format ami --disk-format ami ${KERNEL_ID:+--property kernel_id=$KERNEL_ID} ${RAMDISK_ID:+--property ramdisk_id=$RAMDISK_ID} < "${IMAGE}"
+ glance --os-auth-token $token --os-image-url http://$GLANCE_HOSTPORT image-create --name "${image_name%.img}" $img_property --is-public True --container-format ami --disk-format ami ${kernel_id:+--property kernel_id=$kernel_id} ${ramdisk_id:+--property ramdisk_id=$ramdisk_id} < "${image}"
fi
}
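The `${kernel_id:+...}` expansions in the final image-create call above are worth calling out: they emit the `--property` flag only when the corresponding ID is non-empty, so an image without a ramdisk contributes no stray arguments. A minimal sketch of the idiom, with made-up stand-in values:

```shell
# Illustrative stand-ins for the kernel_id/ramdisk_id locals above.
kernel_id="abc123"
ramdisk_id=""

# ${var:+word} expands to word only when var is set and non-empty, so
# the empty ramdisk_id contributes no --property flag at all.
args="${kernel_id:+--property kernel_id=$kernel_id}${ramdisk_id:+ --property ramdisk_id=$ramdisk_id}"
echo "$args"
```

This is why the upload works identically for AMI-style images with and without a separate ramdisk.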
@@ -546,6 +548,40 @@
}
fi
+
+# create_disk - Create backing disk
+function create_disk {
+ local node_number
+ local disk_image=${1}
+ local storage_data_dir=${2}
+ local loopback_disk_size=${3}
+
+ # Create a loopback disk and format it to XFS.
+ if [[ -e ${disk_image} ]]; then
+ if egrep -q ${storage_data_dir} /proc/mounts; then
+ sudo umount ${storage_data_dir}/drives/sdb1
+ sudo rm -f ${disk_image}
+ fi
+ fi
+
+ sudo mkdir -p ${storage_data_dir}/drives/images
+
+ sudo truncate -s ${loopback_disk_size} ${disk_image}
+
+ # Make a fresh XFS filesystem. Use bigger inodes so xattr can fit in
+ # a single inode. Keeping the default inode size (256) will result in multiple
+ # inodes being used to store xattr. Retrieving the xattr will be slower
+ # since we have to read multiple inodes. This statement is true for both
+ # Swift and Ceph.
+ sudo mkfs.xfs -f -i size=1024 ${disk_image}
+
+ # Mount the disk with mount options to make it as efficient as possible
+ if ! egrep -q ${storage_data_dir} /proc/mounts; then
+ sudo mount -t xfs -o loop,noatime,nodiratime,nobarrier,logbufs=8 \
+ ${disk_image} ${storage_data_dir}
+ fi
+}
+
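The backing-file step in `create_disk` relies on `truncate` producing a sparse file: the apparent size matches the requested disk size while almost no real blocks are consumed until data is written. A sketch of just that step, minus the sudo/mkfs/mount parts (assumes GNU coreutils `truncate` and `stat`):

```shell
# Sparse backing file, as in create_disk but without root privileges.
disk_image=$(mktemp /tmp/backing.XXXXXX)
loopback_disk_size=1M

# Extend the empty temp file to the requested apparent size.
truncate -s ${loopback_disk_size} ${disk_image}

# Apparent size in bytes; this is what mkfs.xfs would format.
apparent=$(stat -c %s ${disk_image})
echo "apparent size: ${apparent} bytes"
rm -f ${disk_image}
```

The real function then formats this file as XFS and loop-mounts it under the storage data directory.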
# Restore xtrace
$XTRACE
diff --git a/functions-common b/functions-common
index 613a86c..5284056 100644
--- a/functions-common
+++ b/functions-common
@@ -49,6 +49,7 @@
local section=$2
local option=$3
shift 3
+
local values="$(iniget_multiline $file $section $option) $@"
iniset_multiline $file $section $option $values
$xtrace
@@ -62,6 +63,7 @@
local file=$1
local section=$2
local option=$3
+
sed -i -e "/^\[$section\]/,/^\[.*\]/ s|^\($option[ \t]*=.*$\)|#\1|" "$file"
$xtrace
}
@@ -75,6 +77,7 @@
local section=$2
local option=$3
local line
+
line=$(sed -ne "/^\[$section\]/,/^\[.*\]/ { /^$option[ \t]*=/ p; }" "$file")
echo ${line#*=}
$xtrace
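The `iniget` sed shown above can be exercised against a throwaway ini file: the address range `/^\[$section\]/,/^\[.*\]/` limits matching to one section, the inner pattern prints the option line, and `${line#*=}` splits the value off at the first `=`. A self-contained run (file contents are illustrative):

```shell
# Throwaway ini file to run the iniget sed against.
file=$(mktemp)
cat > "$file" <<'EOF'
[DEFAULT]
verbose = True
[database]
connection = sqlite:///test.db
EOF

section=database
option=connection
# Same sed as iniget: restrict to the section, print the option line.
line=$(sed -ne "/^\[$section\]/,/^\[.*\]/ { /^$option[ \t]*=/ p; }" "$file")
# Strip everything through the first '='; unquoted echo trims spaces.
value=$(echo ${line#*=})
echo "$value"
rm -f "$file"
```

Note the unquoted `echo ${line#*=}` is what absorbs the space after `=`, matching the function body above.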
@@ -89,6 +92,7 @@
local section=$2
local option=$3
local values
+
values=$(sed -ne "/^\[$section\]/,/^\[.*\]/ { s/^$option[ \t]*=[ \t]*//gp; }" "$file")
echo ${values}
$xtrace
@@ -103,6 +107,7 @@
local section=$2
local option=$3
local line
+
line=$(sed -ne "/^\[$section\]/,/^\[.*\]/ { /^$option[ \t]*=/ p; }" "$file")
$xtrace
[ -n "$line" ]
@@ -145,6 +150,7 @@
local file=$1
local section=$2
local option=$3
+
shift 3
local values
for v in $@; do
@@ -237,28 +243,28 @@
# die_if_not_set $LINENO env-var "message"
function die_if_not_set {
local exitcode=$?
- FXTRACE=$(set +o | grep xtrace)
+ local xtrace=$(set +o | grep xtrace)
set +o xtrace
local line=$1; shift
local evar=$1; shift
if ! is_set $evar || [ $exitcode != 0 ]; then
die $line "$*"
fi
- $FXTRACE
+ $xtrace
}
# Prints line number and "message" in error format
# err $LINENO "message"
function err {
local exitcode=$?
- errXTRACE=$(set +o | grep xtrace)
+ local xtrace=$(set +o | grep xtrace)
set +o xtrace
local msg="[ERROR] ${BASH_SOURCE[2]}:$1 $2"
echo $msg 1>&2;
if [[ -n ${SCREEN_LOGDIR} ]]; then
echo $msg >> "${SCREEN_LOGDIR}/error.log"
fi
- $errXTRACE
+ $xtrace
return $exitcode
}
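The recurring change in these hunks is one idiom: snapshot the caller's xtrace setting into a `local`, silence tracing for the function body, then replay the snapshot (itself a `set -o xtrace` or `set +o xtrace` command) to restore whatever state the caller had. A minimal demonstration with a hypothetical function name:

```shell
# quiet_step is a made-up example of the save/restore xtrace idiom.
function quiet_step {
    local xtrace=$(set +o | grep xtrace)
    set +o xtrace
    # ...work that should not spam the trace log...
    $xtrace
}

set -o xtrace
quiet_step
# Capture the setting after the call; it should still be enabled.
state_after=$(set +o | grep xtrace)
set +o xtrace
echo "$state_after"
```

Replacing the old per-function globals (`FXTRACE`, `errXTRACE`, `errinsXTRACE`) with a single `local xtrace` also removes the risk of nested functions clobbering each other's saved state.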
@@ -268,14 +274,14 @@
# err_if_not_set $LINENO env-var "message"
function err_if_not_set {
local exitcode=$?
- errinsXTRACE=$(set +o | grep xtrace)
+ local xtrace=$(set +o | grep xtrace)
set +o xtrace
local line=$1; shift
local evar=$1; shift
if ! is_set $evar || [ $exitcode != 0 ]; then
err $line "$*"
fi
- $errinsXTRACE
+ $xtrace
return $exitcode
}
@@ -304,14 +310,14 @@
# warn $LINENO "message"
function warn {
local exitcode=$?
- errXTRACE=$(set +o | grep xtrace)
+ local xtrace=$(set +o | grep xtrace)
set +o xtrace
local msg="[WARNING] ${BASH_SOURCE[2]}:$1 $2"
echo $msg 1>&2;
if [[ -n ${SCREEN_LOGDIR} ]]; then
echo $msg >> "${SCREEN_LOGDIR}/error.log"
fi
- $errXTRACE
+ $xtrace
return $exitcode
}
@@ -322,13 +328,16 @@
# Determine OS Vendor, Release and Update
# Tested with OS/X, Ubuntu, RedHat, CentOS, Fedora
# Returns results in global variables:
-# os_VENDOR - vendor name
-# os_RELEASE - release
-# os_UPDATE - update
-# os_PACKAGE - package type
-# os_CODENAME - vendor's codename for release
+# ``os_VENDOR`` - vendor name: ``Ubuntu``, ``Fedora``, etc
+# ``os_RELEASE`` - major release: ``14.04`` (Ubuntu), ``20`` (Fedora)
+# ``os_UPDATE`` - update: ex. the ``5`` in ``RHEL6.5``
+# ``os_PACKAGE`` - package type: ``deb`` or ``rpm``
+# ``os_CODENAME`` - vendor's codename for release: ``snow leopard``, ``trusty``
+declare os_VENDOR os_RELEASE os_UPDATE os_PACKAGE os_CODENAME
+
# GetOSVersion
function GetOSVersion {
+
# Figure out which vendor we are
if [[ -x "`which sw_vers 2>/dev/null`" ]]; then
# OS/X
@@ -418,6 +427,8 @@
# Translate the OS version values into common nomenclature
# Sets global ``DISTRO`` from the ``os_*`` values
+declare DISTRO
+
function GetDistro {
GetOSVersion
if [[ "$os_VENDOR" =~ (Ubuntu) || "$os_VENDOR" =~ (Debian) ]]; then
@@ -435,7 +446,9 @@
else
DISTRO="sle${os_RELEASE}sp${os_UPDATE}"
fi
- elif [[ "$os_VENDOR" =~ (Red Hat) || "$os_VENDOR" =~ (CentOS) ]]; then
+ elif [[ "$os_VENDOR" =~ (Red Hat) || \
+ "$os_VENDOR" =~ (CentOS) || \
+ "$os_VENDOR" =~ (OracleServer) ]]; then
# Drop the . release as we assume it's compatible
DISTRO="rhel${os_RELEASE::1}"
elif [[ "$os_VENDOR" =~ (XenServer) ]]; then
@@ -450,9 +463,7 @@
# Utility function for checking machine architecture
# is_arch arch-type
function is_arch {
- ARCH_TYPE=$1
-
- [[ "$(uname -m)" == "$ARCH_TYPE" ]]
+ [[ "$(uname -m)" == "$1" ]]
}
# Determine if current distribution is a Fedora-based distribution
@@ -463,7 +474,8 @@
GetOSVersion
fi
- [ "$os_VENDOR" = "Fedora" ] || [ "$os_VENDOR" = "Red Hat" ] || [ "$os_VENDOR" = "CentOS" ]
+ [ "$os_VENDOR" = "Fedora" ] || [ "$os_VENDOR" = "Red Hat" ] || \
+ [ "$os_VENDOR" = "CentOS" ] || [ "$os_VENDOR" = "OracleServer" ]
}
@@ -497,6 +509,7 @@
# ``get_release_name_from_branch branch-name``
function get_release_name_from_branch {
local branch=$1
+
if [[ $branch =~ "stable/" ]]; then
echo ${branch#*/}
else
@@ -507,72 +520,73 @@
# git clone only if directory doesn't exist already. Since ``DEST`` might not
# be owned by the installation user, we create the directory and change the
# ownership to the proper user.
-# Set global RECLONE=yes to simulate a clone when dest-dir exists
-# Set global ERROR_ON_CLONE=True to abort execution with an error if the git repo
+# Set global ``RECLONE=yes`` to simulate a clone when dest-dir exists
+# Set global ``ERROR_ON_CLONE=True`` to abort execution with an error if the git repo
# does not exist (default is False, meaning the repo will be cloned).
-# Uses global ``OFFLINE``
+# Uses globals ``ERROR_ON_CLONE``, ``OFFLINE``, ``RECLONE``
# git_clone remote dest-dir branch
function git_clone {
- GIT_REMOTE=$1
- GIT_DEST=$2
- GIT_REF=$3
+ local git_remote=$1
+ local git_dest=$2
+ local git_ref=$3
+ local orig_dir=$(pwd)
+
RECLONE=$(trueorfalse False $RECLONE)
- local orig_dir=`pwd`
if [[ "$OFFLINE" = "True" ]]; then
echo "Running in offline mode, clones already exist"
# print out the results so we know what change was used in the logs
- cd $GIT_DEST
+ cd $git_dest
git show --oneline | head -1
cd $orig_dir
return
fi
- if echo $GIT_REF | egrep -q "^refs"; then
+ if echo $git_ref | egrep -q "^refs"; then
# If our branch name is a gerrit style refs/changes/...
- if [[ ! -d $GIT_DEST ]]; then
+ if [[ ! -d $git_dest ]]; then
[[ "$ERROR_ON_CLONE" = "True" ]] && \
die $LINENO "Cloning not allowed in this configuration"
- git_timed clone $GIT_REMOTE $GIT_DEST
+ git_timed clone $git_remote $git_dest
fi
- cd $GIT_DEST
- git_timed fetch $GIT_REMOTE $GIT_REF && git checkout FETCH_HEAD
+ cd $git_dest
+ git_timed fetch $git_remote $git_ref && git checkout FETCH_HEAD
else
# do a full clone only if the directory doesn't exist
- if [[ ! -d $GIT_DEST ]]; then
+ if [[ ! -d $git_dest ]]; then
[[ "$ERROR_ON_CLONE" = "True" ]] && \
die $LINENO "Cloning not allowed in this configuration"
- git_timed clone $GIT_REMOTE $GIT_DEST
- cd $GIT_DEST
+ git_timed clone $git_remote $git_dest
+ cd $git_dest
# This checkout syntax works for both branches and tags
- git checkout $GIT_REF
+ git checkout $git_ref
elif [[ "$RECLONE" = "True" ]]; then
# if it does exist then simulate what clone does if asked to RECLONE
- cd $GIT_DEST
+ cd $git_dest
# set the url to pull from and fetch
- git remote set-url origin $GIT_REMOTE
+ git remote set-url origin $git_remote
git_timed fetch origin
# remove the existing ignored files (like pyc) as they cause breakage
# (due to the py files having older timestamps than our pyc, so python
# thinks the pyc files are correct using them)
- find $GIT_DEST -name '*.pyc' -delete
+ find $git_dest -name '*.pyc' -delete
- # handle GIT_REF accordingly to type (tag, branch)
- if [[ -n "`git show-ref refs/tags/$GIT_REF`" ]]; then
- git_update_tag $GIT_REF
- elif [[ -n "`git show-ref refs/heads/$GIT_REF`" ]]; then
- git_update_branch $GIT_REF
- elif [[ -n "`git show-ref refs/remotes/origin/$GIT_REF`" ]]; then
- git_update_remote_branch $GIT_REF
+        # handle git_ref according to its type (tag, branch)
+ if [[ -n "`git show-ref refs/tags/$git_ref`" ]]; then
+ git_update_tag $git_ref
+ elif [[ -n "`git show-ref refs/heads/$git_ref`" ]]; then
+ git_update_branch $git_ref
+ elif [[ -n "`git show-ref refs/remotes/origin/$git_ref`" ]]; then
+ git_update_remote_branch $git_ref
else
- die $LINENO "$GIT_REF is neither branch nor tag"
+ die $LINENO "$git_ref is neither branch nor tag"
fi
fi
fi
# print out the results so we know what change was used in the logs
- cd $GIT_DEST
+ cd $git_dest
git show --oneline | head -1
cd $orig_dir
}
@@ -611,35 +625,32 @@
# git update using reference as a branch.
# git_update_branch ref
function git_update_branch {
+ local git_branch=$1
- GIT_BRANCH=$1
-
- git checkout -f origin/$GIT_BRANCH
+ git checkout -f origin/$git_branch
# a local branch might not exist
- git branch -D $GIT_BRANCH || true
- git checkout -b $GIT_BRANCH
+ git branch -D $git_branch || true
+ git checkout -b $git_branch
}
# git update using reference as a branch.
# git_update_remote_branch ref
function git_update_remote_branch {
+ local git_branch=$1
- GIT_BRANCH=$1
-
- git checkout -b $GIT_BRANCH -t origin/$GIT_BRANCH
+ git checkout -b $git_branch -t origin/$git_branch
}
# git update using reference as a tag. Be careful editing source at that repo
# as working copy will be in a detached mode
# git_update_tag ref
function git_update_tag {
+ local git_tag=$1
- GIT_TAG=$1
-
- git tag -d $GIT_TAG
+ git tag -d $git_tag
# fetching given tag only
- git_timed fetch origin tag $GIT_TAG
- git checkout -f $GIT_TAG
+ git_timed fetch origin tag $git_tag
+ git checkout -f $git_tag
}
@@ -659,16 +670,17 @@
# Search for an IP unless an explicit is set by ``HOST_IP`` environment variable
if [ -z "$host_ip" -o "$host_ip" == "dhcp" ]; then
host_ip=""
- host_ips=`LC_ALL=C ip -f inet addr show ${host_ip_iface} | awk '/inet/ {split($2,parts,"/"); print parts[1]}'`
- for IP in $host_ips; do
+ local host_ips=$(LC_ALL=C ip -f inet addr show ${host_ip_iface} | awk '/inet/ {split($2,parts,"/"); print parts[1]}')
+ local ip
+ for ip in $host_ips; do
# Attempt to filter out IP addresses that are part of the fixed and
# floating range. Note that this method only works if the ``netaddr``
# python library is installed. If it is not installed, an error
# will be printed and the first IP from the interface will be used.
# If that is not correct set ``HOST_IP`` in ``localrc`` to the correct
# address.
- if ! (address_in_net $IP $fixed_range || address_in_net $IP $floating_range); then
- host_ip=$IP
+ if ! (address_in_net $ip $fixed_range || address_in_net $ip $floating_range); then
+ host_ip=$ip
break;
fi
done
@@ -681,6 +693,7 @@
# Reverse syntax is supported: -1 is the last field, -2 is second to last, etc.
# get_field field-number
function get_field {
+ local data field
while read data; do
if [ "$1" -lt 0 ]; then
field="(\$(NF$1))"
@@ -719,6 +732,115 @@
mv ${tmpfile} ${policy_file}
}
+# Gets or creates user
+# Usage: get_or_create_user <username> <password> <project> [<email>]
+function get_or_create_user {
+ if [[ ! -z "$4" ]]; then
+ local email="--email=$4"
+ else
+ local email=""
+ fi
+    local user_id=$(
+        # Gets user id
+ openstack user show $1 -f value -c id 2>/dev/null ||
+ # Creates new user
+ openstack user create \
+ $1 \
+ --password "$2" \
+ --project $3 \
+ $email \
+ -f value -c id
+ )
+ echo $user_id
+}
+
+# Gets or creates project
+# Usage: get_or_create_project <name>
+function get_or_create_project {
+    local project_id=$(
+        # Gets project id
+ openstack project show $1 -f value -c id 2>/dev/null ||
+ # Creates new project if not exists
+ openstack project create $1 -f value -c id
+ )
+ echo $project_id
+}
+
+# Gets or creates role
+# Usage: get_or_create_role <name>
+function get_or_create_role {
+ local role_id=$(
+ # Gets role id
+ openstack role show $1 -f value -c id 2>/dev/null ||
+ # Creates role if not exists
+ openstack role create $1 -f value -c id
+ )
+ echo $role_id
+}
+
+# Gets or adds user role
+# Usage: get_or_add_user_role <role> <user> <project>
+function get_or_add_user_role {
+ # Gets user role id
+ local user_role_id=$(openstack user role list \
+ $2 \
+ --project $3 \
+ --column "ID" \
+ --column "Name" \
+ | grep " $1 " | get_field 1)
+ if [[ -z "$user_role_id" ]]; then
+ # Adds role to user
+ user_role_id=$(openstack role add \
+ $1 \
+ --user $2 \
+ --project $3 \
+ | grep " id " | get_field 2)
+ fi
+ echo $user_role_id
+}
+
+# Gets or creates service
+# Usage: get_or_create_service <name> <type> <description>
+function get_or_create_service {
+    local service_id=$(
+        # Gets service id
+ openstack service show $1 -f value -c id 2>/dev/null ||
+ # Creates new service if not exists
+ openstack service create \
+ $1 \
+ --type=$2 \
+ --description="$3" \
+ -f value -c id
+ )
+ echo $service_id
+}
+
+# Gets or creates endpoint
+# Usage: get_or_create_endpoint <service> <region> <publicurl> <adminurl> <internalurl>
+function get_or_create_endpoint {
+ # Gets endpoint id
+ local endpoint_id=$(openstack endpoint list \
+ --column "ID" \
+ --column "Region" \
+ --column "Service Name" \
+ | grep " $2 " \
+ | grep " $1 " | get_field 1)
+ if [[ -z "$endpoint_id" ]]; then
+ # Creates new endpoint
+ endpoint_id=$(openstack endpoint create \
+ $1 \
+ --region $2 \
+ --publicurl $3 \
+ --adminurl $4 \
+ --internalurl $5 \
+ | grep " id " | get_field 2)
+ fi
+ echo $endpoint_id
+}
+
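Every `get_or_create_*` helper above follows one idempotent shape: try to read the resource, and only if the read fails, create it, so re-running `stack.sh` does not duplicate users, projects, or services. The same pattern with a marker file standing in for the openstack calls (names and the returned id are illustrative):

```shell
# Marker file plays the role of the keystone resource.
marker="/tmp/get_or_create_demo.$$"

function get_or_create_marker {
    # read || create: a second invocation takes the read branch
    cat "$marker" 2>/dev/null ||
        { echo "resource-id-42" | tee "$marker"; }
}

first=$(get_or_create_marker)     # creates the marker
second=$(get_or_create_marker)    # reads the same id back
rm -f "$marker"
echo "$first $second"
```

Both invocations yield the same id, which is exactly the property the service-setup code in lib/ceilometer relies on below.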
# Package Functions
# =================
@@ -880,9 +1002,10 @@
}
# Distro-agnostic package installer
+# Uses globals ``NO_UPDATE_REPOS``, ``REPOS_UPDATED``, ``RETRY_UPDATE``
# update_package_repo
function update_package_repo {
- if [[ "NO_UPDATE_REPOS" = "True" ]]; then
+ if [[ "$NO_UPDATE_REPOS" = "True" ]]; then
return 0
fi
@@ -980,6 +1103,7 @@
}
# zypper wrapper to set arguments correctly
+# Uses globals ``OFFLINE``, ``*_proxy``
# zypper_install package [package ...]
function zypper_install {
[[ "$OFFLINE" = "True" ]] && return
@@ -996,7 +1120,8 @@
# _run_process() is designed to be backgrounded by run_process() to simulate a
# fork. It includes the dirty work of closing extra filehandles and preparing log
# files to produce the same logs as screen_it(). The log filename is derived
-# from the service name and global-and-now-misnamed SCREEN_LOGDIR
+# from the service name and global-and-now-misnamed ``SCREEN_LOGDIR``
+# Uses globals ``CURRENT_LOG_TIME``, ``SCREEN_LOGDIR``
# _run_process service "command-line"
function _run_process {
local service=$1
@@ -1022,6 +1147,7 @@
# Helper to remove the ``*.failure`` files under ``$SERVICE_DIR/$SCREEN_NAME``.
# This is used by ``service_check`` once all of the ``screen_it`` calls have finished
+# Uses globals ``SCREEN_NAME``, ``SERVICE_DIR``
# init_service_check
function init_service_check {
SCREEN_NAME=${SCREEN_NAME:-stack}
@@ -1039,15 +1165,15 @@
function is_running {
local name=$1
ps auxw | grep -v grep | grep ${name} > /dev/null
- RC=$?
+ local exitcode=$?
    # sometimes I really hate bash reverse binary logic
- return $RC
+ return $exitcode
}
# run_process() launches a child process that closes all file descriptors and
# then exec's the passed in command. This is meant to duplicate the semantics
# of screen_it() without screen. PIDs are written to
-# $SERVICE_DIR/$SCREEN_NAME/$service.pid
+# ``$SERVICE_DIR/$SCREEN_NAME/$service.pid``
# run_process service "command-line"
function run_process {
local service=$1
@@ -1059,6 +1185,8 @@
}
# Helper to launch a service in a named screen
+# Uses globals ``CURRENT_LOG_TIME``, ``SCREEN_NAME``, ``SCREEN_LOGDIR``,
+# ``SERVICE_DIR``, ``USE_SCREEN``
# screen_it service "command-line"
function screen_it {
SCREEN_NAME=${SCREEN_NAME:-stack}
@@ -1101,6 +1229,7 @@
}
# Screen rc file builder
+# Uses globals ``SCREEN_NAME``, ``SCREENRC``
# screen_rc service "command-line"
function screen_rc {
SCREEN_NAME=${SCREEN_NAME:-stack}
@@ -1131,6 +1260,7 @@
# If a PID is available use it, kill the whole process group via TERM
# If screen is being used kill the screen window; this will catch processes
# that did not leave a PID behind
+# Uses globals ``SCREEN_NAME``, ``SERVICE_DIR``, ``USE_SCREEN``
# screen_stop service
function screen_stop {
SCREEN_NAME=${SCREEN_NAME:-stack}
@@ -1151,6 +1281,7 @@
}
# Helper to get the status of each running service
+# Uses globals ``SCREEN_NAME``, ``SERVICE_DIR``
# service_check
function service_check {
local service
@@ -1218,20 +1349,24 @@
if [[ -z "$os_PACKAGE" ]]; then
GetOSVersion
fi
- if [[ $TRACK_DEPENDS = True ]]; then
+ if [[ $TRACK_DEPENDS = True && ! "$@" =~ virtualenv ]]; then
+        # TRACK_DEPENDS=True installation creates a circular dependency when
+        # we attempt to install virtualenv into a virtualenv, so we must
+        # make that installation global.
source $DEST/.venv/bin/activate
- CMD_PIP=$DEST/.venv/bin/pip
- SUDO_PIP="env"
+ local cmd_pip=$DEST/.venv/bin/pip
+ local sudo_pip="env"
else
- SUDO_PIP="sudo"
- CMD_PIP=$(get_pip_command)
+ local cmd_pip=$(get_pip_command)
+ local sudo_pip="sudo"
fi
# Mirror option not needed anymore because pypi has CDN available,
# but it's useful in certain circumstances
PIP_USE_MIRRORS=${PIP_USE_MIRRORS:-False}
+ local pip_mirror_opt=""
if [[ "$PIP_USE_MIRRORS" != "False" ]]; then
- PIP_MIRROR_OPT="--use-mirrors"
+ pip_mirror_opt="--use-mirrors"
fi
# pip < 1.4 has a bug where it will use an already existing build
@@ -1244,13 +1379,13 @@
local pip_build_tmp=$(mktemp --tmpdir -d pip-build.XXXXX)
$xtrace
- $SUDO_PIP PIP_DOWNLOAD_CACHE=${PIP_DOWNLOAD_CACHE:-/var/cache/pip} \
+ $sudo_pip PIP_DOWNLOAD_CACHE=${PIP_DOWNLOAD_CACHE:-/var/cache/pip} \
http_proxy=$http_proxy \
https_proxy=$https_proxy \
no_proxy=$no_proxy \
- $CMD_PIP install --build=${pip_build_tmp} \
- $PIP_MIRROR_OPT $@ \
- && $SUDO_PIP rm -rf ${pip_build_tmp}
+ $cmd_pip install --build=${pip_build_tmp} \
+ $pip_mirror_opt $@ \
+ && $sudo_pip rm -rf ${pip_build_tmp}
}
# this should be used if you want to install globally, all libraries should
@@ -1285,7 +1420,7 @@
if [[ $update_requirements != "changed" ]]; then
(cd $REQUIREMENTS_DIR; \
- $SUDO_CMD python update.py $project_dir)
+ python update.py $project_dir)
fi
setup_package $project_dir $flags
@@ -1392,6 +1527,7 @@
# enable_service service [service ...]
function enable_service {
local tmpsvcs="${ENABLED_SERVICES}"
+ local service
for service in $@; do
if ! is_service_enabled $service; then
tmpsvcs+=",$service"
@@ -1428,7 +1564,8 @@
local xtrace=$(set +o | grep xtrace)
set +o xtrace
local enabled=1
- services=$@
+ local services=$@
+ local service
for service in ${services}; do
[[ ,${ENABLED_SERVICES}, =~ ,${service}, ]] && enabled=0
@@ -1464,8 +1601,9 @@
function use_exclusive_service {
local options=${!1}
local selection=$3
- out=$2
+ local out=$2
[ -z $selection ] || [[ ! "$options" =~ "$selection" ]] && return 1
+ local opt
for opt in $options;do
[[ "$opt" = "$selection" ]] && enable_service $opt || disable_service $opt
done
@@ -1489,7 +1627,7 @@
let last="${#args[*]} - 1"
- dir_to_check=${args[$last]}
+ local dir_to_check=${args[$last]}
if [ ! -d "$dir_to_check" ]; then
dir_to_check=`dirname "$dir_to_check"`
fi
diff --git a/lib/apache b/lib/apache
index f7255be..f4f82a1 100644
--- a/lib/apache
+++ b/lib/apache
@@ -8,7 +8,6 @@
#
# lib/apache exports the following functions:
#
-# - is_apache_enabled_service
# - install_apache_wsgi
# - config_apache_wsgi
# - apache_site_config_for
@@ -42,23 +41,6 @@
# Functions
# ---------
-
-# is_apache_enabled_service() checks if the service(s) specified as arguments are
-# apache enabled by the user in ``APACHE_ENABLED_SERVICES`` as web front end.
-#
-# Multiple services specified as arguments are ``OR``'ed together; the test
-# is a short-circuit boolean, i.e it returns on the first match.
-#
-# Uses global ``APACHE_ENABLED_SERVICES``
-# APACHE_ENABLED_SERVICES service [service ...]
-function is_apache_enabled_service {
- services=$@
- for service in ${services}; do
- [[ ,${APACHE_ENABLED_SERVICES}, =~ ,${service}, ]] && return 0
- done
- return 1
-}
-
# install_apache_wsgi() - Install Apache server and wsgi module
function install_apache_wsgi {
# Apache installation, because we mark it NOPRIME
@@ -168,7 +150,12 @@
# restart_apache_server
function restart_apache_server {
- restart_service $APACHE_NAME
+    # Apache can be slow to stop. An explicit stop, sleep, start helps
+    # mitigate issues where apache claims a port it was listening on is
+    # still in use and fails to start.
+ stop_service $APACHE_NAME
+ sleep 3
+ start_service $APACHE_NAME
}
# Restore xtrace
diff --git a/lib/ceilometer b/lib/ceilometer
index 8ce9fb4..54d95c5 100644
--- a/lib/ceilometer
+++ b/lib/ceilometer
@@ -84,35 +84,22 @@
# Ceilometer
if [[ "$ENABLED_SERVICES" =~ "ceilometer-api" ]]; then
- CEILOMETER_USER=$(openstack user create \
- ceilometer \
- --password "$SERVICE_PASSWORD" \
- --project $SERVICE_TENANT \
- --email ceilometer@example.com \
- | grep " id " | get_field 2)
- openstack role add \
- $ADMIN_ROLE \
- --project $SERVICE_TENANT \
- --user $CEILOMETER_USER
+ CEILOMETER_USER=$(get_or_create_user "ceilometer" \
+ "$SERVICE_PASSWORD" $SERVICE_TENANT)
+ get_or_add_user_role $ADMIN_ROLE $CEILOMETER_USER $SERVICE_TENANT
+
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
- CEILOMETER_SERVICE=$(openstack service create \
- ceilometer \
- --type=metering \
- --description="OpenStack Telemetry Service" \
- | grep " id " | get_field 2)
- openstack endpoint create \
- $CEILOMETER_SERVICE \
- --region RegionOne \
- --publicurl "$CEILOMETER_SERVICE_PROTOCOL://$CEILOMETER_SERVICE_HOST:$CEILOMETER_SERVICE_PORT/" \
- --adminurl "$CEILOMETER_SERVICE_PROTOCOL://$CEILOMETER_SERVICE_HOST:$CEILOMETER_SERVICE_PORT/" \
- --internalurl "$CEILOMETER_SERVICE_PROTOCOL://$CEILOMETER_SERVICE_HOST:$CEILOMETER_SERVICE_PORT/"
+ CEILOMETER_SERVICE=$(get_or_create_service "ceilometer" \
+ "metering" "OpenStack Telemetry Service")
+ get_or_create_endpoint $CEILOMETER_SERVICE \
+ "$REGION_NAME" \
+ "$CEILOMETER_SERVICE_PROTOCOL://$CEILOMETER_SERVICE_HOST:$CEILOMETER_SERVICE_PORT/" \
+ "$CEILOMETER_SERVICE_PROTOCOL://$CEILOMETER_SERVICE_HOST:$CEILOMETER_SERVICE_PORT/" \
+ "$CEILOMETER_SERVICE_PROTOCOL://$CEILOMETER_SERVICE_HOST:$CEILOMETER_SERVICE_PORT/"
fi
if is_service_enabled swift; then
# Ceilometer needs ResellerAdmin role to access swift account stats.
- openstack role add \
- --project $SERVICE_TENANT_NAME \
- --user ceilometer \
- ResellerAdmin
+ get_or_add_user_role "ResellerAdmin" "ceilometer" $SERVICE_TENANT_NAME
fi
fi
}
@@ -126,16 +113,8 @@
fi
}
-# configure_ceilometerclient() - Set config files, create data dirs, etc
-function configure_ceilometerclient {
- setup_develop $CEILOMETERCLIENT_DIR
- sudo install -D -m 0644 -o $STACK_USER {$CEILOMETERCLIENT_DIR/tools/,/etc/bash_completion.d/}ceilometer.bash_completion
-}
-
# configure_ceilometer() - Set config files, create data dirs, etc
function configure_ceilometer {
- setup_develop $CEILOMETER_DIR
-
[ ! -d $CEILOMETER_CONF_DIR ] && sudo mkdir -m 755 -p $CEILOMETER_CONF_DIR
sudo chown $STACK_USER $CEILOMETER_CONF_DIR
@@ -166,7 +145,6 @@
iniset $CEILOMETER_CONF service_credentials os_username ceilometer
iniset $CEILOMETER_CONF service_credentials os_password $SERVICE_PASSWORD
iniset $CEILOMETER_CONF service_credentials os_tenant_name $SERVICE_TENANT_NAME
- iniset $CEILOMETER_CONF service_credentials os_auth_url $OS_AUTH_URL
iniset $CEILOMETER_CONF keystone_authtoken identity_uri $KEYSTONE_AUTH_URI
iniset $CEILOMETER_CONF keystone_authtoken admin_user ceilometer
@@ -232,11 +210,15 @@
# install_ceilometer() - Collect source and prepare
function install_ceilometer {
git_clone $CEILOMETER_REPO $CEILOMETER_DIR $CEILOMETER_BRANCH
+    setup_develop $CEILOMETER_DIR
}
# install_ceilometerclient() - Collect source and prepare
function install_ceilometerclient {
git_clone $CEILOMETERCLIENT_REPO $CEILOMETERCLIENT_DIR $CEILOMETERCLIENT_BRANCH
+ setup_develop $CEILOMETERCLIENT_DIR
+ sudo install -D -m 0644 -o $STACK_USER {$CEILOMETERCLIENT_DIR/tools/,/etc/bash_completion.d/}ceilometer.bash_completion
}
# start_ceilometer() - Start running processes, including screen
diff --git a/lib/ceph b/lib/ceph
new file mode 100644
index 0000000..32a4760
--- /dev/null
+++ b/lib/ceph
@@ -0,0 +1,286 @@
+# lib/ceph
+# Functions to control the configuration and operation of the **Ceph** storage service
+
+# Dependencies:
+#
+# - ``functions`` file
+# - ``CEPH_DATA_DIR`` or ``DATA_DIR`` must be defined
+
+# ``stack.sh`` calls the entry points in this order (via ``extras.d/60-ceph.sh``):
+#
+# - install_ceph
+# - configure_ceph
+# - init_ceph
+# - start_ceph
+# - stop_ceph
+# - cleanup_ceph
+
+# Save trace setting
+XTRACE=$(set +o | grep xtrace)
+set +o xtrace
+
+
+# Defaults
+# --------
+
+# Set ``CEPH_DATA_DIR`` to the location of Ceph drives and objects.
+# Default is the common DevStack data directory.
+CEPH_DATA_DIR=${CEPH_DATA_DIR:-/var/lib/ceph}
+CEPH_DISK_IMAGE=${CEPH_DATA_DIR}/drives/images/ceph.img
+
+# Set ``CEPH_CONF_DIR`` to the location of the configuration files.
+# Default is ``/etc/ceph``.
+CEPH_CONF_DIR=${CEPH_CONF_DIR:-/etc/ceph}
+
+# DevStack will create a loop-back disk formatted as XFS to store the
+# Ceph data. Set ``CEPH_LOOPBACK_DISK_SIZE`` to the desired disk size
+# (e.g. ``2G``). Default is 2 gigabytes.
+CEPH_LOOPBACK_DISK_SIZE_DEFAULT=2G
+CEPH_LOOPBACK_DISK_SIZE=${CEPH_LOOPBACK_DISK_SIZE:-$CEPH_LOOPBACK_DISK_SIZE_DEFAULT}
+
+# Common
+CEPH_FSID=$(uuidgen)
+CEPH_CONF_FILE=${CEPH_CONF_DIR}/ceph.conf
+
+# Glance
+GLANCE_CEPH_USER=${GLANCE_CEPH_USER:-glance}
+GLANCE_CEPH_POOL=${GLANCE_CEPH_POOL:-images}
+GLANCE_CEPH_POOL_PG=${GLANCE_CEPH_POOL_PG:-8}
+GLANCE_CEPH_POOL_PGP=${GLANCE_CEPH_POOL_PGP:-8}
+
+# Nova
+NOVA_CEPH_POOL=${NOVA_CEPH_POOL:-vms}
+NOVA_CEPH_POOL_PG=${NOVA_CEPH_POOL_PG:-8}
+NOVA_CEPH_POOL_PGP=${NOVA_CEPH_POOL_PGP:-8}
+
+# Cinder
+CINDER_CEPH_POOL=${CINDER_CEPH_POOL:-volumes}
+CINDER_CEPH_POOL_PG=${CINDER_CEPH_POOL_PG:-8}
+CINDER_CEPH_POOL_PGP=${CINDER_CEPH_POOL_PGP:-8}
+CINDER_CEPH_USER=${CINDER_CEPH_USER:-cinder}
+CINDER_CEPH_UUID=${CINDER_CEPH_UUID:-$(uuidgen)}
+
+# Set ``CEPH_REPLICAS`` to configure how many replicas are to be
+# configured for your Ceph cluster. The default is a single replica,
+# which is far less CPU- and memory-intensive. If you plan to test
+# Ceph replication, increase this value.
+CEPH_REPLICAS=${CEPH_REPLICAS:-1}
+CEPH_REPLICAS_SEQ=$(seq ${CEPH_REPLICAS})
+
+# Functions
+# ---------
+
+# import_libvirt_secret_ceph() - Imports Cinder user key into libvirt
+# so it can connect to the Ceph cluster while attaching a Cinder block device
+function import_libvirt_secret_ceph {
+ cat > secret.xml <<EOF
+<secret ephemeral='no' private='no'>
+ <uuid>${CINDER_CEPH_UUID}</uuid>
+ <usage type='ceph'>
+ <name>client.${CINDER_CEPH_USER} secret</name>
+ </usage>
+</secret>
+EOF
+ sudo virsh secret-define --file secret.xml
+ sudo virsh secret-set-value --secret ${CINDER_CEPH_UUID} --base64 $(sudo ceph -c ${CEPH_CONF_FILE} auth get-key client.${CINDER_CEPH_USER})
+ sudo rm -f secret.xml
+}
+
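The heredoc in `import_libvirt_secret_ceph` interpolates shell variables because the `EOF` marker is unquoted, so `${CINDER_CEPH_UUID}` and `${CINDER_CEPH_USER}` expand before the XML is written. A dry run of the same templating into a temp file instead of `virsh` (the UUID here is a made-up example):

```shell
# Example values; the real ones come from lib/ceph defaults.
CINDER_CEPH_UUID="11111111-2222-3333-4444-555555555555"
CINDER_CEPH_USER="cinder"
secret_file=$(mktemp)
cat > ${secret_file} <<EOF
<secret ephemeral='no' private='no'>
  <uuid>${CINDER_CEPH_UUID}</uuid>
  <usage type='ceph'>
    <name>client.${CINDER_CEPH_USER} secret</name>
  </usage>
</secret>
EOF
# The written file contains the expanded UUID, not the literal ${...}.
uuid_line=$(grep '<uuid>' ${secret_file})
rm -f ${secret_file}
echo "$uuid_line"
```

Quoting the marker (`<<'EOF'`) would instead write the literal `${CINDER_CEPH_UUID}` text, which is why the function leaves it unquoted.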
+# cleanup_ceph() - Remove residual data files, anything left over from previous
+# runs that a clean run would need to clean up
+function cleanup_ceph {
+ sudo pkill -f ceph-mon
+ sudo pkill -f ceph-osd
+ sudo rm -rf ${CEPH_DATA_DIR}/*/*
+ sudo rm -rf ${CEPH_CONF_DIR}/*
+ if egrep -q ${CEPH_DATA_DIR} /proc/mounts; then
+ sudo umount ${CEPH_DATA_DIR}
+ fi
+ if [[ -e ${CEPH_DISK_IMAGE} ]]; then
+ sudo rm -f ${CEPH_DISK_IMAGE}
+ fi
+ uninstall_package ceph ceph-common python-ceph libcephfs1 > /dev/null 2>&1
+ VIRSH_UUID=$(sudo virsh secret-list | awk '/^ ?[0-9a-z]/ { print $1 }')
+ sudo virsh secret-undefine ${VIRSH_UUID} >/dev/null 2>&1
+}
+
+# configure_ceph() - Set config files, create data dirs, etc
+function configure_ceph {
+ local count=0
+
+ # create a backing file disk
+ create_disk ${CEPH_DISK_IMAGE} ${CEPH_DATA_DIR} ${CEPH_LOOPBACK_DISK_SIZE}
+
+ # populate ceph directory
+ sudo mkdir -p ${CEPH_DATA_DIR}/{bootstrap-mds,bootstrap-osd,mds,mon,osd,tmp}
+
+ # create ceph monitor initial key and directory
+ sudo ceph-authtool /var/lib/ceph/tmp/keyring.mon.$(hostname) --create-keyring --name=mon. --add-key=$(ceph-authtool --gen-print-key) --cap mon 'allow *'
+ sudo mkdir /var/lib/ceph/mon/ceph-$(hostname)
+
+ # create a default ceph configuration file
+ sudo tee -a ${CEPH_CONF_FILE} > /dev/null <<EOF
+[global]
+fsid = ${CEPH_FSID}
+mon_initial_members = $(hostname)
+mon_host = ${SERVICE_HOST}
+auth_cluster_required = cephx
+auth_service_required = cephx
+auth_client_required = cephx
+filestore_xattr_use_omap = true
+osd crush chooseleaf type = 0
+osd journal size = 100
+EOF
+
+ # bootstrap the ceph monitor
+ sudo ceph-mon -c ${CEPH_CONF_FILE} --mkfs -i $(hostname) --keyring /var/lib/ceph/tmp/keyring.mon.$(hostname)
+ if is_ubuntu; then
+ sudo touch /var/lib/ceph/mon/ceph-$(hostname)/upstart
+ sudo initctl emit ceph-mon id=$(hostname)
+ else
+ sudo touch /var/lib/ceph/mon/ceph-$(hostname)/sysvinit
+ sudo service ceph start mon.$(hostname)
+ fi
+
+    # wait for the admin key to become available; without it the steps below will fail
+ until [ -f ${CEPH_CONF_DIR}/ceph.client.admin.keyring ]; do
+ echo_summary "Waiting for the Ceph admin key to be ready..."
+
+ count=$(($count + 1))
+ if [ $count -eq 3 ]; then
+ die $LINENO "Maximum of 3 retries reached"
+ fi
+ sleep 5
+ done
+
+ # change pool replica size according to the CEPH_REPLICAS set by the user
+ sudo ceph -c ${CEPH_CONF_FILE} osd pool set data size ${CEPH_REPLICAS}
+ sudo ceph -c ${CEPH_CONF_FILE} osd pool set rbd size ${CEPH_REPLICAS}
+ sudo ceph -c ${CEPH_CONF_FILE} osd pool set metadata size ${CEPH_REPLICAS}
+
+    # create a simple CRUSH rule that distributes objects across OSDs rather
+    # than hosts, then apply it to the default pools
+ if [[ $CEPH_REPLICAS -ne 1 ]]; then
+ sudo ceph -c ${CEPH_CONF_FILE} osd crush rule create-simple devstack default osd
+ RULE_ID=$(sudo ceph -c ${CEPH_CONF_FILE} osd crush rule dump devstack | awk '/rule_id/ {print $3}' | cut -d ',' -f1)
+ sudo ceph -c ${CEPH_CONF_FILE} osd pool set rbd crush_ruleset ${RULE_ID}
+ sudo ceph -c ${CEPH_CONF_FILE} osd pool set data crush_ruleset ${RULE_ID}
+ sudo ceph -c ${CEPH_CONF_FILE} osd pool set metadata crush_ruleset ${RULE_ID}
+ fi
+
+ # create the OSD(s)
+ for rep in ${CEPH_REPLICAS_SEQ}; do
+ OSD_ID=$(sudo ceph -c ${CEPH_CONF_FILE} osd create)
+ sudo mkdir -p ${CEPH_DATA_DIR}/osd/ceph-${OSD_ID}
+ sudo ceph-osd -c ${CEPH_CONF_FILE} -i ${OSD_ID} --mkfs
+ sudo ceph -c ${CEPH_CONF_FILE} auth get-or-create osd.${OSD_ID} mon 'allow profile osd ' osd 'allow *' | sudo tee ${CEPH_DATA_DIR}/osd/ceph-${OSD_ID}/keyring
+
+ # ceph's init script parses ${CEPH_DATA_DIR}/osd/ceph-${OSD_ID}/ looking for a file
+ # named 'upstart' or 'sysvinit'; these touch files let us control the OSD daemons
+ # from the init script.
+ if is_ubuntu; then
+ sudo touch ${CEPH_DATA_DIR}/osd/ceph-${OSD_ID}/upstart
+ else
+ sudo touch ${CEPH_DATA_DIR}/osd/ceph-${OSD_ID}/sysvinit
+ fi
+ done
+}
+
+# configure_ceph_glance() - Glance config needs to come after Glance is set up
+function configure_ceph_glance {
+ # configure Glance service options, ceph pool, ceph user and ceph key
+ sudo ceph -c ${CEPH_CONF_FILE} osd pool create ${GLANCE_CEPH_POOL} ${GLANCE_CEPH_POOL_PG} ${GLANCE_CEPH_POOL_PGP}
+ sudo ceph -c ${CEPH_CONF_FILE} osd pool set ${GLANCE_CEPH_POOL} size ${CEPH_REPLICAS}
+ if [[ $CEPH_REPLICAS -ne 1 ]]; then
+ sudo ceph -c ${CEPH_CONF_FILE} osd pool set ${GLANCE_CEPH_POOL} crush_ruleset ${RULE_ID}
+ fi
+ sudo ceph -c ${CEPH_CONF_FILE} auth get-or-create client.${GLANCE_CEPH_USER} mon "allow r" osd "allow class-read object_prefix rbd_children, allow rwx pool=${GLANCE_CEPH_POOL}" | sudo tee ${CEPH_CONF_DIR}/ceph.client.${GLANCE_CEPH_USER}.keyring
+ sudo chown ${STACK_USER}:$(id -g -n $(whoami)) ${CEPH_CONF_DIR}/ceph.client.${GLANCE_CEPH_USER}.keyring
+ iniset $GLANCE_API_CONF DEFAULT default_store rbd
+ iniset $GLANCE_API_CONF DEFAULT rbd_store_ceph_conf $CEPH_CONF_FILE
+ iniset $GLANCE_API_CONF DEFAULT rbd_store_user $GLANCE_CEPH_USER
+ iniset $GLANCE_API_CONF DEFAULT rbd_store_pool $GLANCE_CEPH_POOL
+ iniset $GLANCE_API_CONF DEFAULT show_image_direct_url True
+}
+
+# configure_ceph_nova() - Nova config needs to come after Nova is set up
+function configure_ceph_nova {
+ # configure Nova service options, ceph pool, ceph user and ceph key
+ sudo ceph -c ${CEPH_CONF_FILE} osd pool create ${NOVA_CEPH_POOL} ${NOVA_CEPH_POOL_PG} ${NOVA_CEPH_POOL_PGP}
+ sudo ceph -c ${CEPH_CONF_FILE} osd pool set ${NOVA_CEPH_POOL} size ${CEPH_REPLICAS}
+ if [[ $CEPH_REPLICAS -ne 1 ]]; then
+ sudo ceph -c ${CEPH_CONF_FILE} osd pool set ${NOVA_CEPH_POOL} crush_ruleset ${RULE_ID}
+ fi
+ iniset $NOVA_CONF libvirt rbd_user ${CINDER_CEPH_USER}
+ iniset $NOVA_CONF libvirt rbd_secret_uuid ${CINDER_CEPH_UUID}
+ iniset $NOVA_CONF libvirt inject_key false
+ iniset $NOVA_CONF libvirt inject_partition -2
+ iniset $NOVA_CONF libvirt disk_cachemodes "network=writeback"
+ iniset $NOVA_CONF libvirt images_type rbd
+ iniset $NOVA_CONF libvirt images_rbd_pool ${NOVA_CEPH_POOL}
+ iniset $NOVA_CONF libvirt images_rbd_ceph_conf ${CEPH_CONF_FILE}
+}
+
+# configure_ceph_cinder() - Cinder config needs to come after Cinder is set up
+function configure_ceph_cinder {
+ # Configure Cinder service options, ceph pool, ceph user and ceph key
+ sudo ceph -c ${CEPH_CONF_FILE} osd pool create ${CINDER_CEPH_POOL} ${CINDER_CEPH_POOL_PG} ${CINDER_CEPH_POOL_PGP}
+ sudo ceph -c ${CEPH_CONF_FILE} osd pool set ${CINDER_CEPH_POOL} size ${CEPH_REPLICAS}
+ if [[ $CEPH_REPLICAS -ne 1 ]]; then
+ sudo ceph -c ${CEPH_CONF_FILE} osd pool set ${CINDER_CEPH_POOL} crush_ruleset ${RULE_ID}
+ fi
+ sudo ceph -c ${CEPH_CONF_FILE} auth get-or-create client.${CINDER_CEPH_USER} mon "allow r" osd "allow class-read object_prefix rbd_children, allow rwx pool=${CINDER_CEPH_POOL}, allow rwx pool=${NOVA_CEPH_POOL},allow rx pool=${GLANCE_CEPH_POOL}" | sudo tee ${CEPH_CONF_DIR}/ceph.client.${CINDER_CEPH_USER}.keyring
+ sudo chown ${STACK_USER}:$(id -g -n $(whoami)) ${CEPH_CONF_DIR}/ceph.client.${CINDER_CEPH_USER}.keyring
+}
+
+# init_ceph() - Initialize databases, etc.
+function init_ceph {
+ # clean up from previous (possibly aborted) runs
+ # make sure to kill all ceph processes first
+ sudo pkill -f ceph-mon || true
+ sudo pkill -f ceph-osd || true
+}
+
+# install_ceph() - Collect source and prepare
+function install_ceph {
+ # NOTE(dtroyer): At some point it'll be easier to test for unsupported distros,
+ # leveraging the list in stack.sh
+ if [[ ${os_CODENAME} =~ trusty ]] || [[ ${os_CODENAME} =~ Schrödinger’sCat ]] || [[ ${os_CODENAME} =~ Heisenbug ]]; then
+ NO_UPDATE_REPOS=False
+ install_package ceph
+ else
+ exit_distro_not_supported "Ceph, as your distro doesn't provide (at least) the Firefly release. Please use Ubuntu Trusty or Fedora 19/20"
+ fi
+}
+
+# start_ceph() - Start running processes, including screen
+function start_ceph {
+ if is_ubuntu; then
+ sudo initctl emit ceph-mon id=$(hostname)
+ for id in $(sudo ceph -c ${CEPH_CONF_FILE} osd ls); do
+ sudo start ceph-osd id=${id}
+ done
+ else
+ sudo service ceph start
+ fi
+}
+
+# stop_ceph() - Stop running processes (non-screen)
+function stop_ceph {
+ if is_ubuntu; then
+ sudo service ceph-mon-all stop > /dev/null 2>&1
+ sudo service ceph-osd-all stop > /dev/null 2>&1
+ else
+ sudo service ceph stop > /dev/null 2>&1
+ fi
+}
+
+
+# Restore xtrace
+$XTRACE
+
+## Local variables:
+## mode: shell-script
+## End:
diff --git a/lib/cinder b/lib/cinder
index 6f2d7c6..38ce4d6 100644
--- a/lib/cinder
+++ b/lib/cinder
@@ -28,6 +28,7 @@
# set up default driver
CINDER_DRIVER=${CINDER_DRIVER:-default}
CINDER_PLUGINS=$TOP_DIR/lib/cinder_plugins
+CINDER_BACKENDS=$TOP_DIR/lib/cinder_backends
# grab plugin config if specified via cinder_driver
if [[ -r $CINDER_PLUGINS/$CINDER_DRIVER ]]; then
@@ -57,9 +58,24 @@
CINDER_BIN_DIR=$(get_python_exec_prefix)
fi
+
+# Maintain this here for backward-compatibility with the old configuration
+# DEPRECATED: Use CINDER_ENABLED_BACKENDS instead
# Support for multi lvm backend configuration (default is no support)
CINDER_MULTI_LVM_BACKEND=$(trueorfalse False $CINDER_MULTI_LVM_BACKEND)
+# Default backends
+# The backend format is type:name where type is one of the supported backend
+# types (lvm, nfs, etc) and name is the identifier used in the Cinder
+# configuration and for the volume type name. Multiple backends are
+# comma-separated.
+if [[ $CINDER_MULTI_LVM_BACKEND == "False" ]]; then
+ CINDER_ENABLED_BACKENDS=${CINDER_ENABLED_BACKENDS:-lvm:lvmdriver-1}
+else
+ CINDER_ENABLED_BACKENDS=${CINDER_ENABLED_BACKENDS:-lvm:lvmdriver-1,lvm:lvmdriver-2}
+fi
+
+
# Should cinder perform secure deletion of volumes?
# Defaults to true, can be set to False to avoid this bug when testing:
# https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1023755
@@ -73,22 +89,22 @@
# https://bugs.launchpad.net/cinder/+bug/1180976
CINDER_PERIODIC_INTERVAL=${CINDER_PERIODIC_INTERVAL:-60}
-# Name of the lvm volume groups to use/create for iscsi volumes
-VOLUME_GROUP=${VOLUME_GROUP:-stack-volumes}
-VOLUME_BACKING_FILE=${VOLUME_BACKING_FILE:-$DATA_DIR/${VOLUME_GROUP}-backing-file}
-VOLUME_BACKING_DEVICE=${VOLUME_BACKING_DEVICE:-}
-
-# VOLUME_GROUP2 is used only if CINDER_MULTI_LVM_BACKEND = True
-VOLUME_GROUP2=${VOLUME_GROUP2:-stack-volumes2}
-VOLUME_BACKING_FILE2=${VOLUME_BACKING_FILE2:-$DATA_DIR/${VOLUME_GROUP2}-backing-file}
-VOLUME_BACKING_DEVICE2=${VOLUME_BACKING_DEVICE2:-}
-
-VOLUME_NAME_PREFIX=${VOLUME_NAME_PREFIX:-volume-}
-
# Tell Tempest this project is present
TEMPEST_SERVICES+=,cinder
+# Source the enabled backends
+if is_service_enabled c-vol && [[ -n "$CINDER_ENABLED_BACKENDS" ]]; then
+ for be in ${CINDER_ENABLED_BACKENDS//,/ }; do
+ BE_TYPE=${be%%:*}
+ BE_NAME=${be##*:}
+ if [[ -r $CINDER_BACKENDS/${BE_TYPE} ]]; then
+ source $CINDER_BACKENDS/${BE_TYPE}
+ fi
+ done
+fi
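The `type:name` splitting in the loop above relies only on bash parameter expansion; a standalone sketch (the backend list is an example value):

```shell
# Split each "type:name" pair the same way the sourcing loop above does:
# ${be%%:*} keeps everything before the first ':', ${be##*:} everything
# after the last ':', and ${list//,/ } turns commas into spaces.
CINDER_ENABLED_BACKENDS="lvm:lvmdriver-1,ceph:ceph"
for be in ${CINDER_ENABLED_BACKENDS//,/ }; do
    be_type=${be%%:*}
    be_name=${be##*:}
    echo "would source cinder_backends/${be_type} for backend ${be_name}"
done
```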
+
+
# Functions
# ---------
@@ -99,41 +115,6 @@
return 1
}
-# _clean_lvm_lv removes all cinder LVM volumes
-#
-# Usage: _clean_lvm_lv $VOLUME_GROUP $VOLUME_NAME_PREFIX
-function _clean_lvm_lv {
- local vg=$1
- local lv_prefix=$2
-
- # Clean out existing volumes
- for lv in `sudo lvs --noheadings -o lv_name $vg`; do
- # lv_prefix prefixes the LVs we want
- if [[ "${lv#$lv_prefix}" != "$lv" ]]; then
- sudo lvremove -f $vg/$lv
- fi
- done
-}
-
-# _clean_lvm_backing_file() removes the backing file of the
-# volume group used by cinder
-#
-# Usage: _clean_lvm_backing_file() $VOLUME_GROUP
-function _clean_lvm_backing_file {
- local vg=$1
-
- # if there is no logical volume left, it's safe to attempt a cleanup
- # of the backing file
- if [ -z "`sudo lvs --noheadings -o lv_name $vg`" ]; then
- # if the backing physical device is a loop device, it was probably setup by devstack
- if [[ -n "$VG_DEV" ]] && [[ -e "$VG_DEV" ]]; then
- VG_DEV=$(sudo losetup -j $DATA_DIR/${vg}-backing-file | awk -F':' '/backing-file/ { print $1}')
- sudo losetup -d $VG_DEV
- rm -f $DATA_DIR/${vg}-backing-file
- fi
- fi
-}
-
# cleanup_cinder() - Remove residual data files, anything left over from previous
# runs that a clean run would need to clean up
function cleanup_cinder {
@@ -160,23 +141,20 @@
done
fi
- if is_service_enabled cinder; then
- sudo rm -rf $CINDER_STATE_PATH/volumes/*
- fi
-
if is_ubuntu; then
stop_service tgt
else
stop_service tgtd
fi
- # Campsite rule: leave behind a volume group at least as clean as we found it
- _clean_lvm_lv $VOLUME_GROUP $VOLUME_NAME_PREFIX
- _clean_lvm_backing_file $VOLUME_GROUP
-
- if [ "$CINDER_MULTI_LVM_BACKEND" = "True" ]; then
- _clean_lvm_lv $VOLUME_GROUP2 $VOLUME_NAME_PREFIX
- _clean_lvm_backing_file $VOLUME_GROUP2
+ if is_service_enabled c-vol && [[ -n "$CINDER_ENABLED_BACKENDS" ]]; then
+ for be in ${CINDER_ENABLED_BACKENDS//,/ }; do
+ BE_TYPE=${be%%:*}
+ BE_NAME=${be##*:}
+ if type cleanup_cinder_backend_${BE_TYPE} >/dev/null 2>&1; then
+ cleanup_cinder_backend_${BE_TYPE} ${BE_NAME}
+ fi
+ done
fi
}
@@ -243,23 +221,7 @@
iniset $CINDER_CONF DEFAULT auth_strategy keystone
iniset $CINDER_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
iniset $CINDER_CONF DEFAULT verbose True
- if [ "$CINDER_MULTI_LVM_BACKEND" = "True" ]; then
- iniset $CINDER_CONF DEFAULT enabled_backends lvmdriver-1,lvmdriver-2
- iniset $CINDER_CONF lvmdriver-1 volume_group $VOLUME_GROUP
- iniset $CINDER_CONF lvmdriver-1 volume_driver cinder.volume.drivers.lvm.LVMISCSIDriver
- iniset $CINDER_CONF lvmdriver-1 volume_backend_name LVM_iSCSI
- iniset $CINDER_CONF lvmdriver-2 volume_group $VOLUME_GROUP2
- iniset $CINDER_CONF lvmdriver-2 volume_driver cinder.volume.drivers.lvm.LVMISCSIDriver
- iniset $CINDER_CONF lvmdriver-2 volume_backend_name LVM_iSCSI_2
- # NOTE(mriedem): Work around Cinder "wishlist" bug 1255593
- if [[ "$CINDER_SECURE_DELETE" == "False" ]]; then
- iniset $CINDER_CONF lvmdriver-1 volume_clear none
- iniset $CINDER_CONF lvmdriver-2 volume_clear none
- fi
- else
- iniset $CINDER_CONF DEFAULT volume_group $VOLUME_GROUP
- iniset $CINDER_CONF DEFAULT volume_name_template ${VOLUME_NAME_PREFIX}%s
- fi
+
iniset $CINDER_CONF DEFAULT my_ip "$CINDER_SERVICE_HOST"
iniset $CINDER_CONF DEFAULT iscsi_helper tgtadm
iniset $CINDER_CONF DEFAULT sql_connection `database_connection_url cinder`
@@ -274,6 +236,26 @@
# supported.
iniset $CINDER_CONF DEFAULT enable_v1_api true
+ if is_service_enabled c-vol && [[ -n "$CINDER_ENABLED_BACKENDS" ]]; then
+ enabled_backends=""
+ default_name=""
+ for be in ${CINDER_ENABLED_BACKENDS//,/ }; do
+ BE_TYPE=${be%%:*}
+ BE_NAME=${be##*:}
+ if type configure_cinder_backend_${BE_TYPE} >/dev/null 2>&1; then
+ configure_cinder_backend_${BE_TYPE} ${BE_NAME}
+ fi
+ if [[ -z "$default_name" ]]; then
+ default_name=$BE_NAME
+ fi
+ enabled_backends+=$BE_NAME,
+ done
+ iniset $CINDER_CONF DEFAULT enabled_backends ${enabled_backends%,*}
+ if [[ -n "$default_name" ]]; then
+ iniset $CINDER_CONF DEFAULT default_volume_type ${default_name}
+ fi
+ fi
+
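The configure step above combines two idioms: optional per-type hooks discovered with `type`, and a comma-joined backend list trimmed with `${enabled_backends%,*}`. A minimal sketch (the stub hook body is illustrative):

```shell
# Call a per-backend hook only if a function with that name is defined,
# and accumulate the backend names into a comma-separated list.
function configure_cinder_backend_lvm {
    echo "configured $1"
}

enabled_backends=""
for be in lvm:lvmdriver-1 lvm:lvmdriver-2; do
    be_type=${be%%:*}
    be_name=${be##*:}
    if type configure_cinder_backend_${be_type} >/dev/null 2>&1; then
        configure_cinder_backend_${be_type} ${be_name}
    fi
    enabled_backends+=$be_name,
done
# ${enabled_backends%,*} strips the trailing comma left by the loop
echo "enabled_backends=${enabled_backends%,*}"
```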
if is_service_enabled swift; then
iniset $CINDER_CONF DEFAULT backup_swift_url "http://$SERVICE_HOST:8080/v1/AUTH_"
fi
@@ -339,39 +321,26 @@
# Cinder
if [[ "$ENABLED_SERVICES" =~ "c-api" ]]; then
- CINDER_USER=$(openstack user create \
- cinder \
- --password "$SERVICE_PASSWORD" \
- --project $SERVICE_TENANT \
- --email cinder@example.com \
- | grep " id " | get_field 2)
- openstack role add \
- $ADMIN_ROLE \
- --project $SERVICE_TENANT \
- --user $CINDER_USER
+
+ CINDER_USER=$(get_or_create_user "cinder" \
+ "$SERVICE_PASSWORD" $SERVICE_TENANT)
+ get_or_add_user_role $ADMIN_ROLE $CINDER_USER $SERVICE_TENANT
+
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
- CINDER_SERVICE=$(openstack service create \
- cinder \
- --type=volume \
- --description="Cinder Volume Service" \
- | grep " id " | get_field 2)
- openstack endpoint create \
- $CINDER_SERVICE \
- --region RegionOne \
- --publicurl "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v1/\$(tenant_id)s" \
- --adminurl "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v1/\$(tenant_id)s" \
- --internalurl "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v1/\$(tenant_id)s"
- CINDER_V2_SERVICE=$(openstack service create \
- cinderv2 \
- --type=volumev2 \
- --description="Cinder Volume Service V2" \
- | grep " id " | get_field 2)
- openstack endpoint create \
- $CINDER_V2_SERVICE \
- --region RegionOne \
- --publicurl "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v2/\$(tenant_id)s" \
- --adminurl "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v2/\$(tenant_id)s" \
- --internalurl "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v2/\$(tenant_id)s"
+
+ CINDER_SERVICE=$(get_or_create_service "cinder" \
+ "volume" "Cinder Volume Service")
+ get_or_create_endpoint $CINDER_SERVICE "$REGION_NAME" \
+ "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v1/\$(tenant_id)s" \
+ "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v1/\$(tenant_id)s" \
+ "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v1/\$(tenant_id)s"
+
+ CINDER_V2_SERVICE=$(get_or_create_service "cinderv2" \
+ "volumev2" "Cinder Volume Service V2")
+ get_or_create_endpoint $CINDER_V2_SERVICE "$REGION_NAME" \
+ "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v2/\$(tenant_id)s" \
+ "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v2/\$(tenant_id)s" \
+ "$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v2/\$(tenant_id)s"
fi
fi
}
@@ -384,53 +353,6 @@
rm -f $CINDER_AUTH_CACHE_DIR/*
}
-function create_cinder_volume_group {
- # According to the ``CINDER_MULTI_LVM_BACKEND`` value, configure one or two default volumes
- # group called ``stack-volumes`` (and ``stack-volumes2``) for the volume
- # service if it (they) does (do) not yet exist. If you don't wish to use a
- # file backed volume group, create your own volume group called ``stack-volumes``
- # and ``stack-volumes2`` before invoking ``stack.sh``.
- #
- # The two backing files are ``VOLUME_BACKING_FILE_SIZE`` in size, and they are stored in
- # the ``DATA_DIR``.
-
- if ! sudo vgs $VOLUME_GROUP; then
- if [ -z "$VOLUME_BACKING_DEVICE" ]; then
- # Only create if the file doesn't already exists
- [[ -f $VOLUME_BACKING_FILE ]] || truncate -s $VOLUME_BACKING_FILE_SIZE $VOLUME_BACKING_FILE
- DEV=`sudo losetup -f --show $VOLUME_BACKING_FILE`
-
- # Only create if the loopback device doesn't contain $VOLUME_GROUP
- if ! sudo vgs $VOLUME_GROUP; then
- sudo vgcreate $VOLUME_GROUP $DEV
- fi
- else
- sudo vgcreate $VOLUME_GROUP $VOLUME_BACKING_DEVICE
- fi
- fi
- if [ "$CINDER_MULTI_LVM_BACKEND" = "True" ]; then
- #set up the second volume if CINDER_MULTI_LVM_BACKEND is enabled
-
- if ! sudo vgs $VOLUME_GROUP2; then
- if [ -z "$VOLUME_BACKING_DEVICE2" ]; then
- # Only create if the file doesn't already exists
- [[ -f $VOLUME_BACKING_FILE2 ]] || truncate -s $VOLUME_BACKING_FILE_SIZE $VOLUME_BACKING_FILE2
-
- DEV=`sudo losetup -f --show $VOLUME_BACKING_FILE2`
-
- # Only create if the loopback device doesn't contain $VOLUME_GROUP
- if ! sudo vgs $VOLUME_GROUP2; then
- sudo vgcreate $VOLUME_GROUP2 $DEV
- fi
- else
- sudo vgcreate $VOLUME_GROUP2 $VOLUME_BACKING_DEVICE2
- fi
- fi
- fi
-
- mkdir -p $CINDER_STATE_PATH/volumes
-}
-
# init_cinder() - Initialize database and volume group
function init_cinder {
# Force nova volumes off
@@ -444,26 +366,17 @@
$CINDER_BIN_DIR/cinder-manage db sync
fi
- if is_service_enabled c-vol; then
-
- create_cinder_volume_group
-
- if sudo vgs $VOLUME_GROUP; then
- if is_fedora || is_suse; then
- # service is not started by default
- start_service tgtd
+ if is_service_enabled c-vol && [[ -n "$CINDER_ENABLED_BACKENDS" ]]; then
+ for be in ${CINDER_ENABLED_BACKENDS//,/ }; do
+ BE_TYPE=${be%%:*}
+ BE_NAME=${be##*:}
+ if type init_cinder_backend_${BE_TYPE} >/dev/null 2>&1; then
+ init_cinder_backend_${BE_TYPE} ${BE_NAME}
fi
-
- # Remove iscsi targets
- sudo tgtadm --op show --mode target | grep $VOLUME_NAME_PREFIX | grep Target | cut -f3 -d ' ' | sudo xargs -n1 tgt-admin --delete || true
- # Start with a clean volume group
- _clean_lvm_lv $VOLUME_GROUP $VOLUME_NAME_PREFIX
- if [ "$CINDER_MULTI_LVM_BACKEND" = "True" ]; then
- _clean_lvm_lv $VOLUME_GROUP2 $VOLUME_NAME_PREFIX
- fi
- fi
+ done
fi
+ mkdir -p $CINDER_STATE_PATH/volumes
create_cinder_cache_dir
}
@@ -515,6 +428,11 @@
fi
screen_it c-api "cd $CINDER_DIR && $CINDER_BIN_DIR/cinder-api --config-file $CINDER_CONF"
+ echo "Waiting for Cinder API to start..."
+ if ! wait_for_service $SERVICE_TIMEOUT $CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT; then
+ die $LINENO "c-api did not start"
+ fi
+
screen_it c-sch "cd $CINDER_DIR && $CINDER_BIN_DIR/cinder-scheduler --config-file $CINDER_CONF"
screen_it c-bak "cd $CINDER_DIR && $CINDER_BIN_DIR/cinder-backup --config-file $CINDER_CONF"
screen_it c-vol "cd $CINDER_DIR && $CINDER_BIN_DIR/cinder-volume --config-file $CINDER_CONF"
@@ -545,6 +463,30 @@
fi
}
+# create_volume_types() - Create Cinder's configured volume types
+function create_volume_types {
+ # Create volume types
+ if is_service_enabled c-api && [[ -n "$CINDER_ENABLED_BACKENDS" ]]; then
+ for be in ${CINDER_ENABLED_BACKENDS//,/ }; do
+ BE_TYPE=${be%%:*}
+ BE_NAME=${be##*:}
+ if type configure_cinder_backend_${BE_TYPE} >/dev/null 2>&1; then
+ # openstack volume type create --property volume_backend_name="${BE_TYPE}" ${BE_NAME}
+ cinder type-create ${BE_NAME} && \
+ cinder type-key ${BE_NAME} set volume_backend_name="${BE_NAME}"
+ fi
+ done
+ fi
+}
+
+# Compatibility for Grenade
+
+function create_cinder_volume_group {
+ # During a transition period Grenade needs to have this function defined
+ # It is effectively a no-op in the Grenade 'target' use case
+ :
+}
+
# Restore xtrace
$XTRACE
diff --git a/lib/cinder_backends/ceph b/lib/cinder_backends/ceph
new file mode 100644
index 0000000..e9d2a02
--- /dev/null
+++ b/lib/cinder_backends/ceph
@@ -0,0 +1,79 @@
+# lib/cinder_backends/ceph
+# Configure the ceph backend
+
+# Enable with:
+#
+# CINDER_ENABLED_BACKENDS+=,ceph:ceph
+#
+# Optional parameters:
+# CINDER_BAK_CEPH_POOL=<pool-name>
+# CINDER_BAK_CEPH_USER=<user>
+# CINDER_BAK_CEPH_POOL_PG=<pg-num>
+# CINDER_BAK_CEPH_POOL_PGP=<pgp-num>
+
+# Dependencies:
+#
+# - ``functions`` file
+# - ``cinder`` configurations
+
+# configure_cinder_backend_ceph - called from configure_cinder()
+
+
+# Save trace setting
+MY_XTRACE=$(set +o | grep xtrace)
+set +o xtrace
+
+
+# Defaults
+# --------
+
+CINDER_BAK_CEPH_POOL=${CINDER_BAK_CEPH_POOL:-backups}
+CINDER_BAK_CEPH_POOL_PG=${CINDER_BAK_CEPH_POOL_PG:-8}
+CINDER_BAK_CEPH_POOL_PGP=${CINDER_BAK_CEPH_POOL_PGP:-8}
+CINDER_BAK_CEPH_USER=${CINDER_BAK_CEPH_USER:-cinder-bak}
+
+
+# Entry Points
+# ------------
+
+# configure_cinder_backend_ceph - Set config files, create data dirs, etc
+# configure_cinder_backend_ceph $name
+function configure_cinder_backend_ceph {
+ local be_name=$1
+
+ iniset $CINDER_CONF $be_name volume_backend_name $be_name
+ iniset $CINDER_CONF $be_name volume_driver "cinder.volume.drivers.rbd.RBDDriver"
+ iniset $CINDER_CONF $be_name rbd_ceph_conf "$CEPH_CONF_FILE"
+ iniset $CINDER_CONF $be_name rbd_pool "$CINDER_CEPH_POOL"
+ iniset $CINDER_CONF $be_name rbd_user "$CINDER_CEPH_USER"
+ iniset $CINDER_CONF $be_name rbd_secret_uuid "$CINDER_CEPH_UUID"
+ iniset $CINDER_CONF $be_name rbd_flatten_volume_from_snapshot False
+ iniset $CINDER_CONF $be_name rbd_max_clone_depth 5
+ iniset $CINDER_CONF DEFAULT glance_api_version 2
+
+ if is_service_enabled c-bak; then
+ # Configure Cinder backup service options, ceph pool, ceph user and ceph key
+ sudo ceph -c ${CEPH_CONF_FILE} osd pool create ${CINDER_BAK_CEPH_POOL} ${CINDER_BAK_CEPH_POOL_PG} ${CINDER_BAK_CEPH_POOL_PGP}
+ sudo ceph -c ${CEPH_CONF_FILE} osd pool set ${CINDER_BAK_CEPH_POOL} size ${CEPH_REPLICAS}
+ if [[ $CEPH_REPLICAS -ne 1 ]]; then
+ sudo ceph -c ${CEPH_CONF_FILE} osd pool set ${CINDER_BAK_CEPH_POOL} crush_ruleset ${RULE_ID}
+ fi
+ sudo ceph -c ${CEPH_CONF_FILE} auth get-or-create client.${CINDER_BAK_CEPH_USER} mon "allow r" osd "allow class-read object_prefix rbd_children, allow rwx pool=${CINDER_BAK_CEPH_POOL}" | sudo tee ${CEPH_CONF_DIR}/ceph.client.${CINDER_BAK_CEPH_USER}.keyring
+ sudo chown $(whoami):$(whoami) ${CEPH_CONF_DIR}/ceph.client.${CINDER_BAK_CEPH_USER}.keyring
+
+ iniset $CINDER_CONF DEFAULT backup_driver "cinder.backup.drivers.ceph"
+ iniset $CINDER_CONF DEFAULT backup_ceph_conf "$CEPH_CONF_FILE"
+ iniset $CINDER_CONF DEFAULT backup_ceph_pool "$CINDER_BAK_CEPH_POOL"
+ iniset $CINDER_CONF DEFAULT backup_ceph_user "$CINDER_BAK_CEPH_USER"
+ iniset $CINDER_CONF DEFAULT backup_ceph_stripe_unit 0
+ iniset $CINDER_CONF DEFAULT backup_ceph_stripe_count 0
+ iniset $CINDER_CONF DEFAULT restore_discard_excess_bytes True
+ fi
+}
+
+# Restore xtrace
+$MY_XTRACE
+
+# Local variables:
+# mode: shell-script
+# End:
diff --git a/lib/cinder_backends/lvm b/lib/cinder_backends/lvm
new file mode 100644
index 0000000..324c323
--- /dev/null
+++ b/lib/cinder_backends/lvm
@@ -0,0 +1,179 @@
+# lib/cinder_backends/lvm
+# Configure the LVM backend
+
+# Enable with:
+#
+# CINDER_ENABLED_BACKENDS+=,lvm:lvmname
+
+# Dependencies:
+#
+# - ``functions`` file
+# - ``cinder`` configurations
+
+# CINDER_CONF
+# DATA_DIR
+
+# cleanup_cinder_backend_lvm - called from cleanup_cinder()
+# configure_cinder_backend_lvm - called from configure_cinder()
+# init_cinder_backend_lvm - called from init_cinder()
+
+
+# Save trace setting
+MY_XTRACE=$(set +o | grep xtrace)
+set +o xtrace
+
+
+# Defaults
+# --------
+
+# Name of the lvm volume groups to use/create for iscsi volumes
+# This monkey-motion is for compatibility with icehouse-generation Grenade
+# If ``VOLUME_GROUP`` is set, use it, otherwise we'll build a VG name based
+# on ``VOLUME_GROUP_NAME`` that includes the backend name
+# Grenade doesn't use ``VOLUME_GROUP2`` so it is left out
+VOLUME_GROUP_NAME=${VOLUME_GROUP:-${VOLUME_GROUP_NAME:-stack-volumes}}
+
+# TODO: resurrect backing device...need to know how to set values
+#VOLUME_BACKING_DEVICE=${VOLUME_BACKING_DEVICE:-}
+
+VOLUME_NAME_PREFIX=${VOLUME_NAME_PREFIX:-volume-}
+
+
+# Entry Points
+# ------------
+
+# Compatibility for getting a volume group name from either ``VOLUME_GROUP``
+# or from ``VOLUME_GROUP_NAME`` plus the backend name
+function get_volume_group_name {
+ local be_name=$1
+
+ # Again with the icehouse-generation compatibility
+ local volume_group_name=$VOLUME_GROUP_NAME
+ if [[ -z $VOLUME_GROUP ]]; then
+ volume_group_name+="-$be_name"
+ fi
+ echo $volume_group_name
+}
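The fallback behavior of `get_volume_group_name` can be sketched standalone (`demo_vg_name` is a hypothetical stand-in, not a DevStack function):

```shell
# When VOLUME_GROUP is unset, the backend name is appended to the base
# name; when it is set (icehouse-generation compatibility), it wins as-is.
function demo_vg_name {
    local be_name=$1
    local name=${VOLUME_GROUP:-stack-volumes}
    if [[ -z $VOLUME_GROUP ]]; then
        name+="-$be_name"
    fi
    echo $name
}
```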
+
+function cleanup_cinder_backend_lvm {
+ local be_name=$1
+
+ # Again with the icehouse-generation compatibility
+ local volume_group_name=$(get_volume_group_name $be_name)
+
+ # Campsite rule: leave behind a volume group at least as clean as we found it
+ _clean_lvm_lv ${volume_group_name} $VOLUME_NAME_PREFIX
+ _clean_lvm_backing_file ${volume_group_name} $DATA_DIR/${volume_group_name}-backing-file
+}
+
+# configure_cinder_backend_lvm - Set config files, create data dirs, etc
+# configure_cinder_backend_lvm $name
+function configure_cinder_backend_lvm {
+ local be_name=$1
+
+ # Again with the icehouse-generation compatibility
+ local volume_group_name=$(get_volume_group_name $be_name)
+
+ iniset $CINDER_CONF $be_name volume_backend_name $be_name
+ iniset $CINDER_CONF $be_name volume_driver "cinder.volume.drivers.lvm.LVMISCSIDriver"
+ iniset $CINDER_CONF $be_name volume_group $volume_group_name
+
+ if [[ "$CINDER_SECURE_DELETE" == "False" ]]; then
+ iniset $CINDER_CONF $be_name volume_clear none
+ fi
+}
+
+
+function init_cinder_backend_lvm {
+ local be_name=$1
+
+ # Again with the icehouse-generation compatibility
+ local volume_group_name=$(get_volume_group_name $be_name)
+
+ # Start with a clean volume group
+ _create_cinder_volume_group ${volume_group_name} $DATA_DIR/${volume_group_name}-backing-file
+
+ if is_fedora || is_suse; then
+ # service is not started by default
+ start_service tgtd
+ fi
+
+ # Remove iscsi targets
+ sudo tgtadm --op show --mode target | grep $VOLUME_NAME_PREFIX | grep Target | cut -f3 -d ' ' | sudo xargs -n1 tgt-admin --delete || true
+ _clean_lvm_lv ${volume_group_name} $VOLUME_NAME_PREFIX
+}
+
+
+# _clean_lvm_lv removes all cinder LVM volumes
+#
+# Usage: _clean_lvm_lv volume-group-name $VOLUME_NAME_PREFIX
+function _clean_lvm_lv {
+ local vg=$1
+ local lv_prefix=$2
+
+ # Clean out existing volumes
+ for lv in $(sudo lvs --noheadings -o lv_name $vg 2>/dev/null); do
+ # lv_prefix prefixes the LVs we want
+ if [[ "${lv#$lv_prefix}" != "$lv" ]]; then
+ sudo lvremove -f $vg/$lv
+ fi
+ done
+}
+
+# _clean_lvm_backing_file() removes the backing file of the
+# volume group used by cinder
+#
+# Usage: _clean_lvm_backing_file() volume-group-name backing-file-name
+function _clean_lvm_backing_file {
+ local vg=$1
+ local backing_file=$2
+
+ # if there is no logical volume left, it's safe to attempt a cleanup
+ # of the backing file
+ if [[ -z "$(sudo lvs --noheadings -o lv_name $vg 2>/dev/null)" ]]; then
+ # if the backing physical device is a loop device, it was probably setup by devstack
+ VG_DEV=$(sudo losetup -j $backing_file | awk -F':' '/backing-file/ { print $1}')
+ if [[ -n "$VG_DEV" ]] && [[ -e "$VG_DEV" ]]; then
+ sudo losetup -d $VG_DEV
+ rm -f $backing_file
+ fi
+ fi
+}
+
+# _create_cinder_volume_group volume-group-name backing-file-name
+function _create_cinder_volume_group {
+ # Create a volume group with the given name for the volume service if
+ # it does not yet exist, backed by a loopback file. If you don't wish
+ # to use a file-backed volume group, create your own volume group with
+ # that name before invoking ``stack.sh``.
+ #
+ # The backing file is ``VOLUME_BACKING_FILE_SIZE`` in size, and is
+ # stored in the ``DATA_DIR``.
+
+ local vg_name=$1
+ local backing_file=$2
+
+ if ! sudo vgs $vg_name; then
+ # TODO: fix device handling
+ if [ -z "$VOLUME_BACKING_DEVICE" ]; then
+ # Only create if the file doesn't already exist
+ [[ -f $backing_file ]] || truncate -s $VOLUME_BACKING_FILE_SIZE $backing_file
+ DEV=`sudo losetup -f --show $backing_file`
+
+ # Only create if the loopback device doesn't already contain $vg_name
+ if ! sudo vgs $vg_name; then
+ sudo vgcreate $vg_name $DEV
+ fi
+ else
+ sudo vgcreate $vg_name $VOLUME_BACKING_DEVICE
+ fi
+ fi
+}
+
+
+# Restore xtrace
+$MY_XTRACE
+
+# Local variables:
+# mode: shell-script
+# End:
diff --git a/lib/cinder_backends/nfs b/lib/cinder_backends/nfs
new file mode 100644
index 0000000..7648788
--- /dev/null
+++ b/lib/cinder_backends/nfs
@@ -0,0 +1,43 @@
+# lib/cinder_backends/nfs
+# Configure the nfs backend
+
+# Enable with:
+#
+# CINDER_ENABLED_BACKENDS+=,nfs:<volume-type-name>
+
+# Dependencies:
+#
+# - ``functions`` file
+# - ``cinder`` configurations
+
+# CINDER_CONF
+# CINDER_CONF_DIR
+# CINDER_NFS_SERVERPATH - contents of nfs shares config file
+
+# configure_cinder_backend_nfs - Configure Cinder for NFS backends
+
+# Save trace setting
+NFS_XTRACE=$(set +o | grep xtrace)
+set +o xtrace
+
+
+# Entry Points
+# ------------
+
+# configure_cinder_backend_nfs - Set config files, create data dirs, etc
+function configure_cinder_backend_nfs {
+ local be_name=$1
+ iniset $CINDER_CONF $be_name volume_backend_name $be_name
+ iniset $CINDER_CONF $be_name volume_driver "cinder.volume.drivers.nfs.NfsDriver"
+ iniset $CINDER_CONF $be_name nfs_shares_config "$CINDER_CONF_DIR/nfs-shares-$be_name.conf"
+
+ echo "$CINDER_NFS_SERVERPATH" | tee "$CINDER_CONF_DIR/nfs-shares-$be_name.conf"
+}
+
+
+# Restore xtrace
+$NFS_XTRACE
+
+# Local variables:
+# mode: shell-script
+# End:
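Putting the pieces together, a `local.conf` fragment that would enable this backend might look like the following (server path and backend name are example values):

```
# [[local|localrc]] section of local.conf -- illustrative values
CINDER_ENABLED_BACKENDS+=,nfs:nfsdriver-1
CINDER_NFS_SERVERPATH="nfs-server.example.com:/export/cinder"
```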
diff --git a/lib/databases/mysql b/lib/databases/mysql
index 0ccfce5..67bf85a 100644
--- a/lib/databases/mysql
+++ b/lib/databases/mysql
@@ -47,22 +47,22 @@
}
function configure_database_mysql {
- local slow_log
+ local my_conf mysql slow_log
echo_summary "Configuring and starting MySQL"
if is_ubuntu; then
- MY_CONF=/etc/mysql/my.cnf
- MYSQL=mysql
+ my_conf=/etc/mysql/my.cnf
+ mysql=mysql
elif is_fedora; then
if [[ $DISTRO =~ (rhel7) ]]; then
- MYSQL=mariadb
+ mysql=mariadb
else
- MYSQL=mysqld
+ mysql=mysqld
fi
- MY_CONF=/etc/my.cnf
+ my_conf=/etc/my.cnf
elif is_suse; then
- MY_CONF=/etc/my.cnf
- MYSQL=mysql
+ my_conf=/etc/my.cnf
+ mysql=mysql
else
exit_distro_not_supported "mysql configuration"
fi
@@ -70,7 +70,7 @@
# Start mysql-server
if is_fedora || is_suse; then
# service is not started by default
- start_service $MYSQL
+ start_service $mysql
fi
# Set the root password - only works the first time. For Ubuntu, we already
@@ -87,9 +87,9 @@
# Change ‘bind-address’ from localhost (127.0.0.1) to any (0.0.0.0) and
# set default db type to InnoDB
sudo bash -c "source $TOP_DIR/functions && \
- iniset $MY_CONF mysqld bind-address 0.0.0.0 && \
- iniset $MY_CONF mysqld sql_mode STRICT_ALL_TABLES && \
- iniset $MY_CONF mysqld default-storage-engine InnoDB"
+ iniset $my_conf mysqld bind-address 0.0.0.0 && \
+ iniset $my_conf mysqld sql_mode STRICT_ALL_TABLES && \
+ iniset $my_conf mysqld default-storage-engine InnoDB"
if [[ "$DATABASE_QUERY_LOGGING" == "True" ]]; then
@@ -102,19 +102,19 @@
sudo sed -e '/log.slow.queries/d' \
-e '/long.query.time/d' \
-e '/log.queries.not.using.indexes/d' \
- -i $MY_CONF
+ -i $my_conf
# Turn on slow query log, log all queries (any query taking longer than
# 0 seconds) and log all non-indexed queries
sudo bash -c "source $TOP_DIR/functions && \
- iniset $MY_CONF mysqld slow-query-log 1 && \
- iniset $MY_CONF mysqld slow-query-log-file $slow_log && \
- iniset $MY_CONF mysqld long-query-time 0 && \
- iniset $MY_CONF mysqld log-queries-not-using-indexes 1"
+ iniset $my_conf mysqld slow-query-log 1 && \
+ iniset $my_conf mysqld slow-query-log-file $slow_log && \
+ iniset $my_conf mysqld long-query-time 0 && \
+ iniset $my_conf mysqld log-queries-not-using-indexes 1"
fi
- restart_service $MYSQL
+ restart_service $mysql
}
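The `iniset` calls above amount to the following `my.cnf` fragment (illustrative; the slow-query-log-file path taken from `$slow_log` is omitted):

```
[mysqld]
bind-address = 0.0.0.0
sql_mode = STRICT_ALL_TABLES
default-storage-engine = InnoDB
slow-query-log = 1
long-query-time = 0
log-queries-not-using-indexes = 1
```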
function install_database_mysql {
diff --git a/lib/databases/postgresql b/lib/databases/postgresql
index b39984c..fb6d304 100644
--- a/lib/databases/postgresql
+++ b/lib/databases/postgresql
@@ -10,6 +10,9 @@
set +o xtrace
+MAX_DB_CONNECTIONS=${MAX_DB_CONNECTIONS:-200}
+
+
register_database postgresql
@@ -39,11 +42,12 @@
}
function configure_database_postgresql {
+ local pg_conf pg_dir pg_hba root_roles
echo_summary "Configuring and starting PostgreSQL"
if is_fedora; then
- PG_HBA=/var/lib/pgsql/data/pg_hba.conf
- PG_CONF=/var/lib/pgsql/data/postgresql.conf
- if ! sudo [ -e $PG_HBA ]; then
+ pg_hba=/var/lib/pgsql/data/pg_hba.conf
+ pg_conf=/var/lib/pgsql/data/postgresql.conf
+ if ! sudo [ -e $pg_hba ]; then
if ! [[ $DISTRO =~ (rhel6) ]]; then
sudo postgresql-setup initdb
else
@@ -51,23 +55,25 @@
fi
fi
elif is_ubuntu; then
- PG_DIR=`find /etc/postgresql -name pg_hba.conf|xargs dirname`
- PG_HBA=$PG_DIR/pg_hba.conf
- PG_CONF=$PG_DIR/postgresql.conf
+ pg_dir=`find /etc/postgresql -name pg_hba.conf|xargs dirname`
+ pg_hba=$pg_dir/pg_hba.conf
+ pg_conf=$pg_dir/postgresql.conf
elif is_suse; then
- PG_HBA=/var/lib/pgsql/data/pg_hba.conf
- PG_CONF=/var/lib/pgsql/data/postgresql.conf
+ pg_hba=/var/lib/pgsql/data/pg_hba.conf
+ pg_conf=/var/lib/pgsql/data/postgresql.conf
# initdb is called when postgresql is first started
- sudo [ -e $PG_HBA ] || start_service postgresql
+ sudo [ -e $pg_hba ] || start_service postgresql
else
exit_distro_not_supported "postgresql configuration"
fi
# Listen on all addresses
- sudo sed -i "/listen_addresses/s/.*/listen_addresses = '*'/" $PG_CONF
+ sudo sed -i "/listen_addresses/s/.*/listen_addresses = '*'/" $pg_conf
+ # Set max_connections
+ sudo sed -i "/max_connections/s/.*/max_connections = $MAX_DB_CONNECTIONS/" $pg_conf
# Do password auth from all IPv4 clients
- sudo sed -i "/^host/s/all\s\+127.0.0.1\/32\s\+ident/$DATABASE_USER\t0.0.0.0\/0\tpassword/" $PG_HBA
+ sudo sed -i "/^host/s/all\s\+127.0.0.1\/32\s\+ident/$DATABASE_USER\t0.0.0.0\/0\tpassword/" $pg_hba
# Do password auth for all IPv6 clients
- sudo sed -i "/^host/s/all\s\+::1\/128\s\+ident/$DATABASE_USER\t::0\/0\tpassword/" $PG_HBA
+ sudo sed -i "/^host/s/all\s\+::1\/128\s\+ident/$DATABASE_USER\t::0\/0\tpassword/" $pg_hba
restart_service postgresql
# Create the role if it's not here or else alter it.
@@ -81,14 +87,14 @@
function install_database_postgresql {
echo_summary "Installing postgresql"
- PGPASS=$HOME/.pgpass
- if [[ ! -e $PGPASS ]]; then
- cat <<EOF > $PGPASS
+ local pgpass=$HOME/.pgpass
+ if [[ ! -e $pgpass ]]; then
+ cat <<EOF > $pgpass
*:*:*:$DATABASE_USER:$DATABASE_PASSWORD
EOF
- chmod 0600 $PGPASS
+ chmod 0600 $pgpass
else
- sed -i "s/:root:\w\+/:root:$DATABASE_PASSWORD/" $PGPASS
+ sed -i "s/:root:\w\+/:root:$DATABASE_PASSWORD/" $pgpass
fi
if is_ubuntu; then
install_package postgresql
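The ``.pgpass`` handling above pairs a heredoc with ``chmod 0600`` so libpq will accept the file. A minimal sketch of the same pattern, using a temp file in place of ``$HOME/.pgpass`` and hypothetical credentials (assumes GNU ``stat``):

```shell
# write the password file with a heredoc, then lock permissions to 0600;
# a temp file and made-up credentials stand in for the real $HOME/.pgpass
pgpass=$(mktemp)
cat <<EOF > $pgpass
*:*:*:stackuser:stackpass
EOF
chmod 0600 $pgpass
stat -c %a $pgpass    # prints 600 (GNU stat)
```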
diff --git a/lib/glance b/lib/glance
index 4eb0ada..92577d9 100644
--- a/lib/glance
+++ b/lib/glance
@@ -164,36 +164,28 @@
function create_glance_accounts {
if is_service_enabled g-api; then
- openstack user create \
- --password "$SERVICE_PASSWORD" \
- --project $SERVICE_TENANT_NAME \
- glance
- openstack role add \
- --project $SERVICE_TENANT_NAME \
- --user glance \
- service
+
+ GLANCE_USER=$(get_or_create_user "glance" \
+ "$SERVICE_PASSWORD" $SERVICE_TENANT_NAME)
+ get_or_add_user_role service $GLANCE_USER $SERVICE_TENANT_NAME
+
# required for swift access
if is_service_enabled s-proxy; then
- openstack user create \
- --password "$SERVICE_PASSWORD" \
- --project $SERVICE_TENANT_NAME \
- glance-swift
- openstack role add \
- --project $SERVICE_TENANT_NAME \
- --user glance-swift \
- ResellerAdmin
+
+ GLANCE_SWIFT_USER=$(get_or_create_user "glance-swift" \
+ "$SERVICE_PASSWORD" $SERVICE_TENANT_NAME "glance-swift@example.com")
+ get_or_add_user_role "ResellerAdmin" $GLANCE_SWIFT_USER $SERVICE_TENANT_NAME
fi
+
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
- openstack service create \
- --type image \
- --description "Glance Image Service" \
- glance
- openstack endpoint create \
- --region RegionOne \
- --publicurl "http://$GLANCE_HOSTPORT" \
- --adminurl "http://$GLANCE_HOSTPORT" \
- --internalurl "http://$GLANCE_HOSTPORT" \
- glance
+
+ GLANCE_SERVICE=$(get_or_create_service "glance" \
+ "image" "Glance Image Service")
+ get_or_create_endpoint $GLANCE_SERVICE \
+ "$REGION_NAME" \
+ "http://$GLANCE_HOSTPORT" \
+ "http://$GLANCE_HOSTPORT" \
+ "http://$GLANCE_HOSTPORT"
fi
fi
}
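The ``get_or_create_*`` helpers this diff switches to make account setup idempotent: look the record up first and create it only when missing, so reruns of ``stack.sh`` do not fail on duplicates. A hedged sketch of that idiom, with a plain file standing in for the keystone backend (all names hypothetical):

```shell
# look up first, create only when missing, always return a stable "id"
backend=$(mktemp)
get_or_create() {
    local name=$1
    grep -qx "$name" "$backend" || echo "$name" >> "$backend"   # "create" if missing
    grep -nx "$name" "$backend" | cut -d: -f1                   # line number as the "id"
}
first=$(get_or_create glance)
second=$(get_or_create glance)   # rerun finds the record, creates nothing
echo "$first $second"            # prints: 1 1
```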
diff --git a/lib/heat b/lib/heat
index b8c0359..b6124c0 100644
--- a/lib/heat
+++ b/lib/heat
@@ -98,6 +98,8 @@
iniset $HEAT_CONF database connection `database_connection_url heat`
iniset $HEAT_CONF DEFAULT auth_encryption_key `hexdump -n 16 -v -e '/1 "%02x"' /dev/urandom`
+ iniset $HEAT_CONF DEFAULT region_name_for_services "$REGION_NAME"
+
# logging
iniset $HEAT_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
iniset $HEAT_CONF DEFAULT use_syslog $SYSLOG
@@ -188,6 +190,7 @@
# stop_heat() - Stop running processes
function stop_heat {
# Kill the screen windows
+ local serv
for serv in h-eng h-api h-api-cfn h-api-cw; do
screen_stop $serv
done
@@ -211,82 +214,75 @@
# create_heat_accounts() - Set up common required heat accounts
function create_heat_accounts {
# migrated from files/keystone_data.sh
- SERVICE_TENANT=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
- ADMIN_ROLE=$(openstack role list | awk "/ admin / { print \$2 }")
+ local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
+ local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
- HEAT_USER=$(openstack user create \
- heat \
- --password "$SERVICE_PASSWORD" \
- --project $SERVICE_TENANT \
- --email heat@example.com \
- | grep " id " | get_field 2)
- openstack role add \
- $ADMIN_ROLE \
- --project $SERVICE_TENANT \
- --user $HEAT_USER
+ local heat_user=$(get_or_create_user "heat" \
+ "$SERVICE_PASSWORD" $service_tenant)
+ get_or_add_user_role $admin_role $heat_user $service_tenant
+
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
- HEAT_SERVICE=$(openstack service create \
- heat \
- --type=orchestration \
- --description="Heat Orchestration Service" \
- | grep " id " | get_field 2)
- openstack endpoint create \
- $HEAT_SERVICE \
- --region RegionOne \
- --publicurl "$SERVICE_PROTOCOL://$HEAT_API_HOST:$HEAT_API_PORT/v1/\$(tenant_id)s" \
- --adminurl "$SERVICE_PROTOCOL://$HEAT_API_HOST:$HEAT_API_PORT/v1/\$(tenant_id)s" \
- --internalurl "$SERVICE_PROTOCOL://$HEAT_API_HOST:$HEAT_API_PORT/v1/\$(tenant_id)s"
- HEAT_CFN_SERVICE=$(openstack service create \
- heat \
- --type=cloudformation \
- --description="Heat CloudFormation Service" \
- | grep " id " | get_field 2)
- openstack endpoint create \
- $HEAT_CFN_SERVICE \
- --region RegionOne \
- --publicurl "$SERVICE_PROTOCOL://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1" \
- --adminurl "$SERVICE_PROTOCOL://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1" \
- --internalurl "$SERVICE_PROTOCOL://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1"
+
+ local heat_service=$(get_or_create_service "heat" \
+ "orchestration" "Heat Orchestration Service")
+ get_or_create_endpoint $heat_service \
+ "$REGION_NAME" \
+ "$SERVICE_PROTOCOL://$HEAT_API_HOST:$HEAT_API_PORT/v1/\$(tenant_id)s" \
+ "$SERVICE_PROTOCOL://$HEAT_API_HOST:$HEAT_API_PORT/v1/\$(tenant_id)s" \
+ "$SERVICE_PROTOCOL://$HEAT_API_HOST:$HEAT_API_PORT/v1/\$(tenant_id)s"
+
+ local heat_cfn_service=$(get_or_create_service "heat-cfn" \
+ "cloudformation" "Heat CloudFormation Service")
+ get_or_create_endpoint $heat_cfn_service \
+ "$REGION_NAME" \
+ "$SERVICE_PROTOCOL://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1" \
+ "$SERVICE_PROTOCOL://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1" \
+ "$SERVICE_PROTOCOL://$HEAT_API_CFN_HOST:$HEAT_API_CFN_PORT/v1"
fi
# heat_stack_user role is for users created by Heat
- openstack role create heat_stack_user
+ get_or_create_role "heat_stack_user"
if [[ $HEAT_DEFERRED_AUTH == trusts ]]; then
+
# heat_stack_owner role is given to users who create Heat stacks,
# it's the default role used by heat to delegate to the heat service
# user (for performing deferred operations via trusts), see heat.conf
- HEAT_OWNER_ROLE=$(openstack role create \
- heat_stack_owner \
- | grep " id " | get_field 2)
+ local heat_owner_role=$(get_or_create_role "heat_stack_owner")
# Give the role to the demo and admin users so they can create stacks
# in either of the projects created by devstack
- openstack role add $HEAT_OWNER_ROLE --project demo --user demo
- openstack role add $HEAT_OWNER_ROLE --project demo --user admin
- openstack role add $HEAT_OWNER_ROLE --project admin --user admin
+ get_or_add_user_role $heat_owner_role demo demo
+ get_or_add_user_role $heat_owner_role admin demo
+ get_or_add_user_role $heat_owner_role admin admin
iniset $HEAT_CONF DEFAULT deferred_auth_method trusts
fi
if [[ "$HEAT_STACK_DOMAIN" == "True" ]]; then
# Note we have to pass token/endpoint here because the current endpoint and
# version negotiation in OSC means just --os-identity-api-version=3 won't work
- KS_ENDPOINT_V3="$KEYSTONE_SERVICE_URI/v3"
- D_ID=$(openstack --os-token $OS_TOKEN --os-url=$KS_ENDPOINT_V3 \
- --os-identity-api-version=3 domain create heat \
- --description "Owns users and projects created by heat" \
- | grep ' id ' | get_field 2)
- iniset $HEAT_CONF DEFAULT stack_user_domain ${D_ID}
+ local ks_endpoint_v3="$KEYSTONE_SERVICE_URI/v3"
- openstack --os-token $OS_TOKEN --os-url=$KS_ENDPOINT_V3 \
- --os-identity-api-version=3 user create --password $SERVICE_PASSWORD \
- --domain $D_ID heat_domain_admin \
- --description "Manages users and projects created by heat"
- openstack --os-token $OS_TOKEN --os-url=$KS_ENDPOINT_V3 \
- --os-identity-api-version=3 role add \
- --user heat_domain_admin --domain ${D_ID} admin
- iniset $HEAT_CONF DEFAULT stack_domain_admin heat_domain_admin
- iniset $HEAT_CONF DEFAULT stack_domain_admin_password $SERVICE_PASSWORD
+ D_ID=$(openstack --os-token $OS_TOKEN --os-url=$ks_endpoint_v3 \
+ --os-identity-api-version=3 domain list | grep ' heat ' | get_field 1)
+
+ if [[ -z "$D_ID" ]]; then
+ D_ID=$(openstack --os-token $OS_TOKEN --os-url=$ks_endpoint_v3 \
+ --os-identity-api-version=3 domain create heat \
+ --description "Owns users and projects created by heat" \
+ | grep ' id ' | get_field 2)
+ iniset $HEAT_CONF DEFAULT stack_user_domain ${D_ID}
+
+ openstack --os-token $OS_TOKEN --os-url=$ks_endpoint_v3 \
+ --os-identity-api-version=3 user create --password $SERVICE_PASSWORD \
+ --domain $D_ID heat_domain_admin \
+ --description "Manages users and projects created by heat"
+ openstack --os-token $OS_TOKEN --os-url=$ks_endpoint_v3 \
+ --os-identity-api-version=3 role add \
+ --user heat_domain_admin --domain ${D_ID} admin
+ iniset $HEAT_CONF DEFAULT stack_domain_admin heat_domain_admin
+ iniset $HEAT_CONF DEFAULT stack_domain_admin_password $SERVICE_PASSWORD
+ fi
fi
}
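Several functions above still extract IDs by scraping CLI table output with ``awk``, e.g. ``openstack role list | awk "/ admin / { print \$2 }"``. A small illustration of that parsing idiom, with made-up sample rows standing in for the real listing:

```shell
# pull the ID column (second whitespace-separated field) from rows whose
# name column matches; the rows below are hypothetical sample data
role_table='| 123abc | admin  |
| 456def | Member |'
admin_role=$(echo "$role_table" | awk '/ admin / { print $2 }')
echo "$admin_role"    # prints: 123abc
```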
diff --git a/lib/infra b/lib/infra
index e2f7dad..e18c66e 100644
--- a/lib/infra
+++ b/lib/infra
@@ -10,7 +10,6 @@
# ``stack.sh`` calls the entry points in this order:
#
-# - unfubar_setuptools
# - install_infra
# Save trace setting
@@ -26,19 +25,6 @@
# Entry Points
# ------------
-# unfubar_setuptools() - Unbreak the giant mess that is the current state of setuptools
-function unfubar_setuptools {
- # this is a giant game of who's on first, but it does consistently work
- # there is hope that upstream python packaging fixes this in the future
- echo_summary "Unbreaking setuptools"
- pip_install -U setuptools
- pip_install -U pip
- uninstall_package python-setuptools
- pip_install -U setuptools
- pip_install -U pip
-}
-
-
# install_infra() - Collect source and prepare
function install_infra {
# bring down global requirements
diff --git a/lib/ironic b/lib/ironic
index dbeb3d3..8b5bdec 100644
--- a/lib/ironic
+++ b/lib/ironic
@@ -53,12 +53,7 @@
IRONIC_VM_SSH_ADDRESS=${IRONIC_VM_SSH_ADDRESS:-$HOST_IP}
IRONIC_VM_COUNT=${IRONIC_VM_COUNT:-1}
IRONIC_VM_SPECS_CPU=${IRONIC_VM_SPECS_CPU:-1}
-# NOTE(adam_g): Kernels 3.12 and newer user tmpfs by default for initramfs.
-# DIB produced ramdisks tend to be ~250MB but tmpfs will only allow
-# use of 50% of available memory before ENOSPC. Set minimum 1GB
-# for nodes to avoid (LP: #1311987) and ensure consistency across
-# older and newer kernels.
-IRONIC_VM_SPECS_RAM=${IRONIC_VM_SPECS_RAM:-1024}
+IRONIC_VM_SPECS_RAM=${IRONIC_VM_SPECS_RAM:-512}
IRONIC_VM_SPECS_DISK=${IRONIC_VM_SPECS_DISK:-10}
IRONIC_VM_EPHEMERAL_DISK=${IRONIC_VM_EPHEMERAL_DISK:-0}
IRONIC_VM_EMULATOR=${IRONIC_VM_EMULATOR:-/usr/bin/qemu-system-x86_64}
@@ -115,6 +110,7 @@
function install_ironicclient {
git_clone $IRONICCLIENT_REPO $IRONICCLIENT_DIR $IRONICCLIENT_BRANCH
setup_develop $IRONICCLIENT_DIR
+ sudo install -D -m 0644 -o $STACK_USER {$IRONICCLIENT_DIR/tools/,/etc/bash_completion.d/}ironic.bash_completion
}
# cleanup_ironic() - Remove residual data files, anything left over from previous
@@ -179,15 +175,15 @@
function configure_ironic_conductor {
cp $IRONIC_DIR/etc/ironic/rootwrap.conf $IRONIC_ROOTWRAP_CONF
cp -r $IRONIC_DIR/etc/ironic/rootwrap.d $IRONIC_CONF_DIR
- IRONIC_ROOTWRAP=$(get_rootwrap_location ironic)
- ROOTWRAP_ISUDOER_CMD="$IRONIC_ROOTWRAP $IRONIC_CONF_DIR/rootwrap.conf *"
+ local ironic_rootwrap=$(get_rootwrap_location ironic)
+ local rootwrap_isudoer_cmd="$ironic_rootwrap $IRONIC_CONF_DIR/rootwrap.conf *"
# Set up the rootwrap sudoers for ironic
- TEMPFILE=`mktemp`
- echo "$STACK_USER ALL=(root) NOPASSWD: $ROOTWRAP_ISUDOER_CMD" >$TEMPFILE
- chmod 0440 $TEMPFILE
- sudo chown root:root $TEMPFILE
- sudo mv $TEMPFILE /etc/sudoers.d/ironic-rootwrap
+ local tempfile=`mktemp`
+ echo "$STACK_USER ALL=(root) NOPASSWD: $rootwrap_isudoer_cmd" >$tempfile
+ chmod 0440 $tempfile
+ sudo chown root:root $tempfile
+ sudo mv $tempfile /etc/sudoers.d/ironic-rootwrap
iniset $IRONIC_CONF_FILE DEFAULT rootwrap_config $IRONIC_ROOTWRAP_CONF
iniset $IRONIC_CONF_FILE DEFAULT enabled_drivers $IRONIC_ENABLED_DRIVERS
@@ -218,33 +214,26 @@
# service ironic admin # if enabled
function create_ironic_accounts {
- SERVICE_TENANT=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
- ADMIN_ROLE=$(openstack role list | awk "/ admin / { print \$2 }")
+ local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
+ local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
# Ironic
if [[ "$ENABLED_SERVICES" =~ "ir-api" ]]; then
- IRONIC_USER=$(openstack user create \
- ironic \
- --password "$SERVICE_PASSWORD" \
- --project $SERVICE_TENANT \
- --email ironic@example.com \
- | grep " id " | get_field 2)
- openstack role add \
- $ADMIN_ROLE \
- --project $SERVICE_TENANT \
- --user $IRONIC_USER
+ # Get the ironic user if it exists
+
+ local ironic_user=$(get_or_create_user "ironic" \
+ "$SERVICE_PASSWORD" $service_tenant)
+ get_or_add_user_role $admin_role $ironic_user $service_tenant
+
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
- IRONIC_SERVICE=$(openstack service create \
- ironic \
- --type=baremetal \
- --description="Ironic baremetal provisioning service" \
- | grep " id " | get_field 2)
- openstack endpoint create \
- $IRONIC_SERVICE \
- --region RegionOne \
- --publicurl "$IRONIC_SERVICE_PROTOCOL://$IRONIC_HOSTPORT" \
- --adminurl "$IRONIC_SERVICE_PROTOCOL://$IRONIC_HOSTPORT" \
- --internalurl "$IRONIC_SERVICE_PROTOCOL://$IRONIC_HOSTPORT"
+
+ local ironic_service=$(get_or_create_service "ironic" \
+ "baremetal" "Ironic baremetal provisioning service")
+ get_or_create_endpoint $ironic_service \
+ "$REGION_NAME" \
+ "$IRONIC_SERVICE_PROTOCOL://$IRONIC_HOSTPORT" \
+ "$IRONIC_SERVICE_PROTOCOL://$IRONIC_HOSTPORT" \
+ "$IRONIC_SERVICE_PROTOCOL://$IRONIC_HOSTPORT"
fi
fi
}
@@ -312,15 +301,15 @@
sudo chown -R $STACK_USER $IRONIC_DATA_DIR $IRONIC_STATE_PATH
sudo chown -R $STACK_USER:$LIBVIRT_GROUP $IRONIC_TFTPBOOT_DIR
if is_ubuntu; then
- PXEBIN=/usr/lib/syslinux/pxelinux.0
+ local pxebin=/usr/lib/syslinux/pxelinux.0
elif is_fedora; then
- PXEBIN=/usr/share/syslinux/pxelinux.0
+ local pxebin=/usr/share/syslinux/pxelinux.0
fi
- if [ ! -f $PXEBIN ]; then
+ if [ ! -f $pxebin ]; then
die $LINENO "pxelinux.0 (from SYSLINUX) not found."
fi
- cp $PXEBIN $IRONIC_TFTPBOOT_DIR
+ cp $pxebin $IRONIC_TFTPBOOT_DIR
mkdir -p $IRONIC_TFTPBOOT_DIR/pxelinux.cfg
}
@@ -328,20 +317,20 @@
# Call libvirt setup scripts in a new shell to ensure any new group membership
sudo su $STACK_USER -c "$IRONIC_SCRIPTS_DIR/setup-network"
if [[ "$IRONIC_VM_LOG_CONSOLE" == "True" ]] ; then
- LOG_ARG="$IRONIC_VM_LOG_DIR"
+ local log_arg="$IRONIC_VM_LOG_DIR"
else
- LOG_ARG=""
+ local log_arg=""
fi
sudo su $STACK_USER -c "$IRONIC_SCRIPTS_DIR/create-nodes \
$IRONIC_VM_SPECS_CPU $IRONIC_VM_SPECS_RAM $IRONIC_VM_SPECS_DISK \
amd64 $IRONIC_VM_COUNT $IRONIC_VM_NETWORK_BRIDGE $IRONIC_VM_EMULATOR \
- $LOG_ARG" >> $IRONIC_VM_MACS_CSV_FILE
+ $log_arg" >> $IRONIC_VM_MACS_CSV_FILE
}
function enroll_vms {
- CHASSIS_ID=$(ironic chassis-create -d "ironic test chassis" | grep " uuid " | get_field 2)
- IRONIC_NET_ID=$(neutron net-list | grep private | get_field 1)
+ local chassis_id=$(ironic chassis-create -d "ironic test chassis" | grep " uuid " | get_field 2)
+ local ironic_net_id=$(neutron net-list | grep private | get_field 1)
local idx=0
# workaround: we need to know which netns neutron uses for the private network.
@@ -350,11 +339,11 @@
# the instances operation. If we don't do this, the first port creation
# only happens in the middle of fake baremetal instance's spawning by nova,
# so we'll end up with unbootable fake baremetal VM due to broken PXE.
- PORT_ID=$(neutron port-create private | grep " id " | get_field 2)
+ local port_id=$(neutron port-create private | grep " id " | get_field 2)
while read MAC; do
- NODE_ID=$(ironic node-create --chassis_uuid $CHASSIS_ID --driver pxe_ssh \
+ local node_id=$(ironic node-create --chassis_uuid $chassis_id --driver pxe_ssh \
-i pxe_deploy_kernel=$IRONIC_DEPLOY_KERNEL_ID \
-i pxe_deploy_ramdisk=$IRONIC_DEPLOY_RAMDISK_ID \
-i ssh_virt_type=$IRONIC_SSH_VIRT_TYPE \
@@ -368,14 +357,14 @@
-p cpu_arch=x86_64 \
| grep " uuid " | get_field 2)
- ironic port-create --address $MAC --node_uuid $NODE_ID
+ ironic port-create --address $MAC --node_uuid $node_id
idx=$((idx+1))
done < $IRONIC_VM_MACS_CSV_FILE
# create the nova flavor
- adjusted_disk=$(($IRONIC_VM_SPECS_DISK - $IRONIC_VM_EPHEMERAL_DISK))
+ local adjusted_disk=$(($IRONIC_VM_SPECS_DISK - $IRONIC_VM_EPHEMERAL_DISK))
nova flavor-create --ephemeral $IRONIC_VM_EPHEMERAL_DISK baremetal auto $IRONIC_VM_SPECS_RAM $adjusted_disk $IRONIC_VM_SPECS_CPU
# TODO(lucasagomes): Remove the 'baremetal:deploy_kernel_id'
# and 'baremetal:deploy_ramdisk_id' parameters
@@ -385,8 +374,8 @@
# intentional sleep to make sure the tag has been set on the port
sleep 10
- TAPDEV=$(sudo ip netns exec qdhcp-${IRONIC_NET_ID} ip link list | grep tap | cut -d':' -f2 | cut -b2-)
- TAG_ID=$(sudo ovs-vsctl show |grep ${TAPDEV} -A1 -m1 | grep tag | cut -d':' -f2 | cut -b2-)
+ local tapdev=$(sudo ip netns exec qdhcp-${ironic_net_id} ip link list | grep tap | cut -d':' -f2 | cut -b2-)
+ local tag_id=$(sudo ovs-vsctl show |grep ${tapdev} -A1 -m1 | grep tag | cut -d':' -f2 | cut -b2-)
# make sure veth pair is not existing, otherwise delete its links
sudo ip link show ovs-tap1 && sudo ip link delete ovs-tap1
@@ -396,12 +385,12 @@
sudo ip link set dev brbm-tap1 up
sudo ip link set dev ovs-tap1 up
- sudo ovs-vsctl -- --if-exists del-port ovs-tap1 -- add-port br-int ovs-tap1 tag=$TAG_ID
+ sudo ovs-vsctl -- --if-exists del-port ovs-tap1 -- add-port br-int ovs-tap1 tag=$tag_id
sudo ovs-vsctl -- --if-exists del-port brbm-tap1 -- add-port $IRONIC_VM_NETWORK_BRIDGE brbm-tap1
# Remove the port needed only for workaround. For additional info read the
# comment at the beginning of this function
- neutron port-delete $PORT_ID
+ neutron port-delete $port_id
}
function configure_iptables {
@@ -415,11 +404,11 @@
function configure_tftpd {
if is_ubuntu; then
- PXEBIN=/usr/lib/syslinux/pxelinux.0
+ local pxebin=/usr/lib/syslinux/pxelinux.0
elif is_fedora; then
- PXEBIN=/usr/share/syslinux/pxelinux.0
+ local pxebin=/usr/share/syslinux/pxelinux.0
fi
- if [ ! -f $PXEBIN ]; then
+ if [ ! -f $pxebin ]; then
die $LINENO "pxelinux.0 (from SYSLINUX) not found."
fi
@@ -452,12 +441,12 @@
}
function ironic_ssh_check {
- local KEY_FILE=$1
- local FLOATING_IP=$2
- local PORT=$3
- local DEFAULT_INSTANCE_USER=$4
- local ACTIVE_TIMEOUT=$5
- if ! timeout $ACTIVE_TIMEOUT sh -c "while ! ssh -p $PORT -o StrictHostKeyChecking=no -i $KEY_FILE ${DEFAULT_INSTANCE_USER}@$FLOATING_IP echo success; do sleep 1; done"; then
+ local key_file=$1
+ local floating_ip=$2
+ local port=$3
+ local default_instance_user=$4
+ local active_timeout=$5
+ if ! timeout $active_timeout sh -c "while ! ssh -p $port -o StrictHostKeyChecking=no -i $key_file ${default_instance_user}@$floating_ip echo success; do sleep 1; done"; then
die $LINENO "server didn't become ssh-able!"
fi
}
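``ironic_ssh_check`` wraps an ssh probe in a poll-until-timeout loop. The same idiom in isolation, with ``true``/``false`` standing in for the probe command (assumes GNU ``timeout``, as the original does):

```shell
# retry a command once per second until it succeeds or the deadline passes;
# timeout kills the subshell and returns nonzero if the deadline is hit
wait_until() {
    local limit=$1; shift
    timeout "$limit" sh -c "while ! $*; do sleep 1; done"
}
wait_until 5 true && echo "reachable"      # probe succeeds immediately
wait_until 2 false || echo "timed out"     # gives up after 2 seconds
```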
@@ -469,16 +458,17 @@
}
# build deploy kernel+ramdisk, then upload them to glance
-# this function sets IRONIC_DEPLOY_KERNEL_ID and IRONIC_DEPLOY_RAMDISK_ID
+# this function sets ``IRONIC_DEPLOY_KERNEL_ID`` and ``IRONIC_DEPLOY_RAMDISK_ID``
function upload_baremetal_ironic_deploy {
- token=$1
+ local token=$1
+ declare -g IRONIC_DEPLOY_KERNEL_ID IRONIC_DEPLOY_RAMDISK_ID
if [ -z "$IRONIC_DEPLOY_KERNEL" -o -z "$IRONIC_DEPLOY_RAMDISK" ]; then
- IRONIC_DEPLOY_KERNEL_PATH=$TOP_DIR/files/ir-deploy.kernel
- IRONIC_DEPLOY_RAMDISK_PATH=$TOP_DIR/files/ir-deploy.initramfs
+ local IRONIC_DEPLOY_KERNEL_PATH=$TOP_DIR/files/ir-deploy.kernel
+ local IRONIC_DEPLOY_RAMDISK_PATH=$TOP_DIR/files/ir-deploy.initramfs
else
- IRONIC_DEPLOY_KERNEL_PATH=$IRONIC_DEPLOY_KERNEL
- IRONIC_DEPLOY_RAMDISK_PATH=$IRONIC_DEPLOY_RAMDISK
+ local IRONIC_DEPLOY_KERNEL_PATH=$IRONIC_DEPLOY_KERNEL
+ local IRONIC_DEPLOY_RAMDISK_PATH=$IRONIC_DEPLOY_RAMDISK
fi
if [ ! -e "$IRONIC_DEPLOY_RAMDISK_PATH" -o ! -e "$IRONIC_DEPLOY_KERNEL_PATH" ]; then
@@ -519,19 +509,20 @@
git_clone $DIB_REPO $DIB_DIR $DIB_BRANCH
# make sure all needed services are enabled
+ local srv
for srv in nova glance key neutron; do
if ! is_service_enabled "$srv"; then
die $LINENO "$srv should be enabled for ironic tests"
fi
done
- TOKEN=$(keystone token-get | grep ' id ' | get_field 2)
- die_if_not_set $LINENO TOKEN "Keystone fail to get token"
+ local token=$(keystone token-get | grep ' id ' | get_field 2)
+ die_if_not_set $LINENO token "Keystone failed to get token"
echo_summary "Creating and uploading baremetal images for ironic"
# build and upload separate deploy kernel & ramdisk
- upload_baremetal_ironic_deploy $TOKEN
+ upload_baremetal_ironic_deploy $token
create_bridge_and_vms
enroll_vms
@@ -547,9 +538,9 @@
function cleanup_baremetal_basic_ops {
rm -f $IRONIC_VM_MACS_CSV_FILE
if [ -f $IRONIC_KEY_FILE ]; then
- KEY=`cat $IRONIC_KEY_FILE.pub`
+ local key=$(cat $IRONIC_KEY_FILE.pub)
# remove public key from authorized_keys
- grep -v "$KEY" $IRONIC_AUTHORIZED_KEYS_FILE > temp && mv temp $IRONIC_AUTHORIZED_KEYS_FILE
+ grep -v "$key" $IRONIC_AUTHORIZED_KEYS_FILE > temp && mv temp $IRONIC_AUTHORIZED_KEYS_FILE
chmod 0600 $IRONIC_AUTHORIZED_KEYS_FILE
fi
sudo rm -rf $IRONIC_DATA_DIR $IRONIC_STATE_PATH
diff --git a/lib/keystone b/lib/keystone
index 8a4683f..547646a 100644
--- a/lib/keystone
+++ b/lib/keystone
@@ -46,6 +46,9 @@
# Example of KEYSTONE_EXTENSIONS=oauth1,federation
KEYSTONE_EXTENSIONS=${KEYSTONE_EXTENSIONS:-}
+# Toggle for deploying Keystone under HTTPD + mod_wsgi
+KEYSTONE_USE_MOD_WSGI=${KEYSTONE_USE_MOD_WSGI:-${ENABLE_HTTPD_MOD_WSGI_SERVICES}}
+
# Select the backend for Keystone's service catalog
KEYSTONE_CATALOG_BACKEND=${KEYSTONE_CATALOG_BACKEND:-sql}
KEYSTONE_CATALOG=$KEYSTONE_CONF_DIR/default_catalog.templates
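``KEYSTONE_USE_MOD_WSGI`` defaults to the global ``ENABLE_HTTPD_MOD_WSGI_SERVICES`` toggle, so it can be overridden per service. A hypothetical ``local.conf`` fragment that keeps the global default but opts Keystone out of Apache:

```ini
[[local|localrc]]
# keep deploying services under HTTPD + mod_wsgi by default ...
ENABLE_HTTPD_MOD_WSGI_SERVICES=True
# ... but run Keystone under eventlet instead
KEYSTONE_USE_MOD_WSGI=False
```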
@@ -112,6 +115,7 @@
sudo rm -f $KEYSTONE_WSGI_DIR/*.wsgi
disable_apache_site keystone
sudo rm -f $(apache_site_config_for keystone)
+ restart_apache_server
}
# _config_keystone_apache_wsgi() - Set WSGI config files of Keystone
@@ -189,6 +193,12 @@
iniset $KEYSTONE_CONF assignment driver "keystone.assignment.backends.$KEYSTONE_ASSIGNMENT_BACKEND.Assignment"
fi
+ # Configure rabbitmq credentials
+ if is_service_enabled rabbit; then
+ iniset $KEYSTONE_CONF DEFAULT rabbit_password $RABBIT_PASSWORD
+ iniset $KEYSTONE_CONF DEFAULT rabbit_host $RABBIT_HOST
+ fi
+
# Set the URL advertised in the ``versions`` structure returned by the '/' route
iniset $KEYSTONE_CONF DEFAULT public_endpoint "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:%(public_port)s/"
iniset $KEYSTONE_CONF DEFAULT admin_endpoint "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:%(admin_port)s/"
@@ -265,11 +275,11 @@
fi
# Format logging
- if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ] && ! is_apache_enabled_service key ; then
+ if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ] && [ "$KEYSTONE_USE_MOD_WSGI" == "False" ] ; then
setup_colorized_logging $KEYSTONE_CONF DEFAULT
fi
- if is_apache_enabled_service key; then
+ if [ "$KEYSTONE_USE_MOD_WSGI" == "True" ]; then
iniset $KEYSTONE_CONF DEFAULT debug "True"
# Eliminate the %(asctime)s.%(msecs)03d from the log format strings
iniset $KEYSTONE_CONF DEFAULT logging_context_format_string "%(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s"
@@ -278,6 +288,8 @@
iniset $KEYSTONE_CONF DEFAULT logging_exception_prefix "%(process)d TRACE %(name)s %(instance)s"
_config_keystone_apache_wsgi
fi
+
+ iniset $KEYSTONE_CONF DEFAULT max_token_size 16384
}
function configure_keystone_extensions {
@@ -316,79 +328,55 @@
function create_keystone_accounts {
# admin
- ADMIN_TENANT=$(openstack project create \
- admin \
- | grep " id " | get_field 2)
- ADMIN_USER=$(openstack user create \
- admin \
- --project "$ADMIN_TENANT" \
- --email admin@example.com \
- --password "$ADMIN_PASSWORD" \
- | grep " id " | get_field 2)
- ADMIN_ROLE=$(openstack role create \
- admin \
- | grep " id " | get_field 2)
- openstack role add \
- $ADMIN_ROLE \
- --project $ADMIN_TENANT \
- --user $ADMIN_USER
+ ADMIN_TENANT=$(get_or_create_project "admin")
+ ADMIN_USER=$(get_or_create_user "admin" \
+ "$ADMIN_PASSWORD" "$ADMIN_TENANT")
+ ADMIN_ROLE=$(get_or_create_role "admin")
+ get_or_add_user_role $ADMIN_ROLE $ADMIN_USER $ADMIN_TENANT
# Create service project/role
- openstack project create $SERVICE_TENANT_NAME
+ get_or_create_project "$SERVICE_TENANT_NAME"
# Service role, so service users do not have to be admins
- openstack role create service
+ get_or_create_role service
# The ResellerAdmin role is used by Nova and Ceilometer so we need to keep it.
# The admin role in swift allows a user to act as an admin for their tenant,
# but ResellerAdmin is needed for a user to act as any tenant. The name of this
# role is also configurable in swift-proxy.conf
- openstack role create ResellerAdmin
+ get_or_create_role ResellerAdmin
# The Member role is used by Horizon and Swift so we need to keep it:
- MEMBER_ROLE=$(openstack role create \
- Member \
- | grep " id " | get_field 2)
+ MEMBER_ROLE=$(get_or_create_role "Member")
+
# ANOTHER_ROLE demonstrates that an arbitrary role may be created and used
# TODO(sleepsonthefloor): show how this can be used for rbac in the future!
- ANOTHER_ROLE=$(openstack role create \
- anotherrole \
- | grep " id " | get_field 2)
+
+ ANOTHER_ROLE=$(get_or_create_role "anotherrole")
# invisible tenant - admin can't see this one
- INVIS_TENANT=$(openstack project create \
- invisible_to_admin \
- | grep " id " | get_field 2)
+ INVIS_TENANT=$(get_or_create_project "invisible_to_admin")
# demo
- DEMO_TENANT=$(openstack project create \
- demo \
- | grep " id " | get_field 2)
- DEMO_USER=$(openstack user create \
- demo \
- --project $DEMO_TENANT \
- --email demo@example.com \
- --password "$ADMIN_PASSWORD" \
- | grep " id " | get_field 2)
+ DEMO_TENANT=$(get_or_create_project "demo")
+ DEMO_USER=$(get_or_create_user "demo" \
+ "$ADMIN_PASSWORD" "$DEMO_TENANT" "demo@example.com")
- openstack role add --project $DEMO_TENANT --user $DEMO_USER $MEMBER_ROLE
- openstack role add --project $DEMO_TENANT --user $ADMIN_USER $ADMIN_ROLE
- openstack role add --project $DEMO_TENANT --user $DEMO_USER $ANOTHER_ROLE
- openstack role add --project $INVIS_TENANT --user $DEMO_USER $MEMBER_ROLE
+ get_or_add_user_role $MEMBER_ROLE $DEMO_USER $DEMO_TENANT
+ get_or_add_user_role $ADMIN_ROLE $ADMIN_USER $DEMO_TENANT
+ get_or_add_user_role $ANOTHER_ROLE $DEMO_USER $DEMO_TENANT
+ get_or_add_user_role $MEMBER_ROLE $DEMO_USER $INVIS_TENANT
# Keystone
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
- KEYSTONE_SERVICE=$(openstack service create \
- keystone \
- --type identity \
- --description "Keystone Identity Service" \
- | grep " id " | get_field 2)
- openstack endpoint create \
- $KEYSTONE_SERVICE \
- --region RegionOne \
- --publicurl "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_SERVICE_PORT/v$IDENTITY_API_VERSION" \
- --adminurl "$KEYSTONE_AUTH_PROTOCOL://$KEYSTONE_AUTH_HOST:$KEYSTONE_AUTH_PORT/v$IDENTITY_API_VERSION" \
- --internalurl "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_SERVICE_PORT/v$IDENTITY_API_VERSION"
+
+ KEYSTONE_SERVICE=$(get_or_create_service "keystone" \
+ "identity" "Keystone Identity Service")
+ get_or_create_endpoint $KEYSTONE_SERVICE \
+ "$REGION_NAME" \
+ "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_SERVICE_PORT/v$IDENTITY_API_VERSION" \
+ "$KEYSTONE_AUTH_PROTOCOL://$KEYSTONE_AUTH_HOST:$KEYSTONE_AUTH_PORT/v$IDENTITY_API_VERSION" \
+ "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_SERVICE_PORT/v$IDENTITY_API_VERSION"
fi
}
@@ -464,7 +452,7 @@
fi
git_clone $KEYSTONE_REPO $KEYSTONE_DIR $KEYSTONE_BRANCH
setup_develop $KEYSTONE_DIR
- if is_apache_enabled_service key; then
+ if [ "$KEYSTONE_USE_MOD_WSGI" == "True" ]; then
install_apache_wsgi
fi
}
@@ -477,7 +465,7 @@
service_port=$KEYSTONE_SERVICE_PORT_INT
fi
- if is_apache_enabled_service key; then
+ if [ "$KEYSTONE_USE_MOD_WSGI" == "True" ]; then
restart_apache_server
screen_it key "cd $KEYSTONE_DIR && sudo tail -f /var/log/$APACHE_NAME/keystone"
else
@@ -508,6 +496,9 @@
_cleanup_keystone_apache_wsgi
}
+function is_keystone_enabled {
+ is_service_enabled key
+}
# Restore xtrace
$XTRACE
diff --git a/lib/marconi b/lib/marconi
index 20bf0e1..063ed3d 100644
--- a/lib/marconi
+++ b/lib/marconi
@@ -178,29 +178,19 @@
SERVICE_TENANT=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
ADMIN_ROLE=$(openstack role list | awk "/ admin / { print \$2 }")
- MARCONI_USER=$(openstack user create \
- marconi \
- --password "$SERVICE_PASSWORD" \
- --project $SERVICE_TENANT \
- --email marconi@example.com \
- | grep " id " | get_field 2)
- openstack role add \
- $ADMIN_ROLE \
- --project $SERVICE_TENANT \
- --user $MARCONI_USER
+ MARCONI_USER=$(get_or_create_user "marconi" \
+ "$SERVICE_PASSWORD" $SERVICE_TENANT)
+ get_or_add_user_role $ADMIN_ROLE $MARCONI_USER $SERVICE_TENANT
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
- MARCONI_SERVICE=$(openstack service create \
- marconi \
- --type=queuing \
- --description="Marconi Service" \
- | grep " id " | get_field 2)
- openstack endpoint create \
- $MARCONI_SERVICE \
- --region RegionOne \
- --publicurl "$MARCONI_SERVICE_PROTOCOL://$MARCONI_SERVICE_HOST:$MARCONI_SERVICE_PORT" \
- --adminurl "$MARCONI_SERVICE_PROTOCOL://$MARCONI_SERVICE_HOST:$MARCONI_SERVICE_PORT" \
- --internalurl "$MARCONI_SERVICE_PROTOCOL://$MARCONI_SERVICE_HOST:$MARCONI_SERVICE_PORT"
+
+ MARCONI_SERVICE=$(get_or_create_service "marconi" \
+ "queuing" "Marconi Service")
+ get_or_create_endpoint $MARCONI_SERVICE \
+ "$REGION_NAME" \
+ "$MARCONI_SERVICE_PROTOCOL://$MARCONI_SERVICE_HOST:$MARCONI_SERVICE_PORT" \
+ "$MARCONI_SERVICE_PROTOCOL://$MARCONI_SERVICE_HOST:$MARCONI_SERVICE_PORT" \
+ "$MARCONI_SERVICE_PROTOCOL://$MARCONI_SERVICE_HOST:$MARCONI_SERVICE_PORT"
fi
}
diff --git a/lib/neutron b/lib/neutron
index 2c6f53b..98636b4 100644
--- a/lib/neutron
+++ b/lib/neutron
@@ -85,6 +85,8 @@
NEUTRON_CONF=$NEUTRON_CONF_DIR/neutron.conf
export NEUTRON_TEST_CONFIG_FILE=${NEUTRON_TEST_CONFIG_FILE:-"$NEUTRON_CONF_DIR/debug.ini"}
+# Default name for Neutron database
+Q_DB_NAME=${Q_DB_NAME:-neutron}
# Default Neutron Plugin
Q_PLUGIN=${Q_PLUGIN:-ml2}
# Default Neutron Port
@@ -143,6 +145,17 @@
Q_RR_COMMAND="sudo $NEUTRON_ROOTWRAP $Q_RR_CONF_FILE"
fi
+
+# Distributed Virtual Router (DVR) configuration
+# Can be:
+# legacy - No DVR functionality
+# dvr_snat - Controller or single node DVR
+# dvr - Compute node in multi-node DVR
+Q_DVR_MODE=${Q_DVR_MODE:-legacy}
+if [[ "$Q_DVR_MODE" != "legacy" ]]; then
+ Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,linuxbridge,l2population
+fi
+
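``Q_DVR_MODE`` is read from ``local.conf``; per the comments above, a hypothetical multi-node DVR layout sets the controller and compute nodes differently:

```ini
[[local|localrc]]
# controller or single node:
Q_DVR_MODE=dvr_snat
# on additional compute nodes in a multi-node setup, use instead:
# Q_DVR_MODE=dvr
```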
# Provider Network Configurations
# --------------------------------
@@ -153,15 +166,15 @@
# remote connectivity), and no physical resources will be
# available for the allocation of provider networks.
-# To use GRE tunnels for tenant networks, set to True in
-# ``localrc``. GRE tunnels are only supported by the openvswitch
-# plugin, and currently only on Ubuntu.
-ENABLE_TENANT_TUNNELS=${ENABLE_TENANT_TUNNELS:-False}
+# To disable tunnels (GRE or VXLAN) for tenant networks,
+# set to False in ``local.conf``.
+# GRE tunnels are only supported by the openvswitch plugin.
+ENABLE_TENANT_TUNNELS=${ENABLE_TENANT_TUNNELS:-True}
# If using GRE tunnels for tenant networks, specify the range of
# tunnel IDs from which tenant networks are allocated. Can be
# overridden in ``localrc`` if necessary.
-TENANT_TUNNEL_RANGES=${TENANT_TUNNEL_RANGE:-1:1000}
+TENANT_TUNNEL_RANGES=${TENANT_TUNNEL_RANGES:-1:1000}
# To use VLANs for tenant networks, set to True in localrc. VLANs
# are supported by the openvswitch and linuxbridge plugins, each
@@ -203,6 +216,13 @@
# Example: ``LB_PHYSICAL_INTERFACE=eth1``
LB_PHYSICAL_INTERFACE=${LB_PHYSICAL_INTERFACE:-}
+# When Neutron tunnels are enabled, the IP address of the local tunnel
+# endpoint must be specified. It defaults to the same address as ``HOST_IP``.
+# This variable can be used to specify a different endpoint IP address.
+# Example: ``TUNNEL_ENDPOINT_IP=1.1.1.1``
+TUNNEL_ENDPOINT_IP=${TUNNEL_ENDPOINT_IP:-$HOST_IP}
+
# With the openvswitch plugin, set to True in ``localrc`` to enable
# provider GRE tunnels when ``ENABLE_TENANT_TUNNELS`` is False.
#
@@ -297,6 +317,13 @@
_configure_neutron_metadata_agent
fi
+ if [[ "$Q_DVR_MODE" != "legacy" ]]; then
+ _configure_dvr
+ fi
+ if is_service_enabled ceilometer; then
+ _configure_neutron_ceilometer_notifications
+ fi
+
_configure_neutron_debug_command
}
@@ -307,7 +334,7 @@
iniset $NOVA_CONF neutron admin_auth_url "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:$KEYSTONE_AUTH_PORT/v2.0"
iniset $NOVA_CONF neutron auth_strategy "$Q_AUTH_STRATEGY"
iniset $NOVA_CONF neutron admin_tenant_name "$SERVICE_TENANT_NAME"
- iniset $NOVA_CONF neutron region_name "RegionOne"
+ iniset $NOVA_CONF neutron region_name "$REGION_NAME"
iniset $NOVA_CONF neutron url "http://$Q_HOST:$Q_PORT"
if [[ "$Q_USE_SECGROUP" == "True" ]]; then
@@ -350,28 +377,20 @@
ADMIN_ROLE=$(openstack role list | awk "/ admin / { print \$2 }")
if [[ "$ENABLED_SERVICES" =~ "q-svc" ]]; then
- NEUTRON_USER=$(openstack user create \
- neutron \
- --password "$SERVICE_PASSWORD" \
- --project $SERVICE_TENANT \
- --email neutron@example.com \
- | grep " id " | get_field 2)
- openstack role add \
- $ADMIN_ROLE \
- --project $SERVICE_TENANT \
- --user $NEUTRON_USER
+
+ NEUTRON_USER=$(get_or_create_user "neutron" \
+ "$SERVICE_PASSWORD" $SERVICE_TENANT)
+ get_or_add_user_role $ADMIN_ROLE $NEUTRON_USER $SERVICE_TENANT
+
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
- NEUTRON_SERVICE=$(openstack service create \
- neutron \
- --type=network \
- --description="Neutron Service" \
- | grep " id " | get_field 2)
- openstack endpoint create \
- $NEUTRON_SERVICE \
- --region RegionOne \
- --publicurl "http://$SERVICE_HOST:9696/" \
- --adminurl "http://$SERVICE_HOST:9696/" \
- --internalurl "http://$SERVICE_HOST:9696/"
+
+ NEUTRON_SERVICE=$(get_or_create_service "neutron" \
+ "network" "Neutron Service")
+ get_or_create_endpoint $NEUTRON_SERVICE \
+ "$REGION_NAME" \
+ "http://$SERVICE_HOST:$Q_PORT/" \
+ "http://$SERVICE_HOST:$Q_PORT/" \
+ "http://$SERVICE_HOST:$Q_PORT/"
fi
fi
}
@@ -572,7 +591,7 @@
fi
# delete all namespaces created by neutron
- for ns in $(sudo ip netns list | grep -o -E '(qdhcp|qrouter|qlbaas)-[0-9a-f-]*'); do
+ for ns in $(sudo ip netns list | grep -o -E '(qdhcp|qrouter|qlbaas|fip|snat)-[0-9a-f-]*'); do
sudo ip netns delete ${ns}
done
}
@@ -606,7 +625,7 @@
Q_PLUGIN_CONF_FILE=$Q_PLUGIN_CONF_PATH/$Q_PLUGIN_CONF_FILENAME
cp $NEUTRON_DIR/$Q_PLUGIN_CONF_FILE /$Q_PLUGIN_CONF_FILE
- iniset /$Q_PLUGIN_CONF_FILE database connection `database_connection_url $Q_DB_NAME`
+ iniset $NEUTRON_CONF database connection `database_connection_url $Q_DB_NAME`
iniset $NEUTRON_CONF DEFAULT state_path $DATA_DIR/neutron
# If additional config files are set, make sure their path names are set as well
@@ -668,14 +687,6 @@
iniset $Q_DHCP_CONF_FILE DEFAULT use_namespaces $Q_USE_NAMESPACE
iniset $Q_DHCP_CONF_FILE DEFAULT root_helper "$Q_RR_COMMAND"
- # Define extra "DEFAULT" configuration options when q-dhcp is configured by
- # defining the array ``Q_DHCP_EXTRA_DEFAULT_OPTS``.
- # For Example: ``Q_DHCP_EXTRA_DEFAULT_OPTS=(foo=true bar=2)``
- for I in "${Q_DHCP_EXTRA_DEFAULT_OPTS[@]}"; do
- # Replace the first '=' with ' ' for iniset syntax
- iniset $Q_DHCP_CONF_FILE DEFAULT ${I/=/ }
- done
-
_neutron_setup_interface_driver $Q_DHCP_CONF_FILE
neutron_plugin_configure_dhcp_agent
@@ -730,6 +741,10 @@
}
+function _configure_neutron_ceilometer_notifications {
+ iniset $NEUTRON_CONF DEFAULT notification_driver neutron.openstack.common.notifier.rpc_notifier
+}
+
function _configure_neutron_lbaas {
neutron_agent_lbaas_configure_common
neutron_agent_lbaas_configure_agent
@@ -750,6 +765,12 @@
neutron_vpn_configure_common
}
+function _configure_dvr {
+ iniset $NEUTRON_CONF DEFAULT router_distributed True
+ iniset $Q_L3_CONF_FILE DEFAULT agent_mode $Q_DVR_MODE
+}
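The `_configure_dvr` function above leans on DevStack's `iniset` helper. A minimal sketch of what `iniset` does, assuming GNU sed (the real implementation in `functions-common` also handles existing keys, missing sections, and quoting):

```shell
# iniset_sketch FILE SECTION OPTION VALUE: append "option = value" right
# after the [section] header. Hypothetical simplified helper.
iniset_sketch() {
    local file=$1 section=$2 option=$3 value=$4
    sed -i "/^\[$section\]/a $option = $value" "$file"
}

ini=$(mktemp)
printf '[DEFAULT]\n' > "$ini"
iniset_sketch "$ini" DEFAULT router_distributed True
cat "$ini"
```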
+
+
# _configure_neutron_plugin_agent() - Set config files for neutron plugin agent
# It is called when q-agt is enabled.
function _configure_neutron_plugin_agent {
@@ -787,14 +808,6 @@
iniset $NEUTRON_CONF DEFAULT auth_strategy $Q_AUTH_STRATEGY
_neutron_setup_keystone $NEUTRON_CONF keystone_authtoken
- # Define extra "DEFAULT" configuration options when q-svc is configured by
- # defining the array ``Q_SRV_EXTRA_DEFAULT_OPTS``.
- # For Example: ``Q_SRV_EXTRA_DEFAULT_OPTS=(foo=true bar=2)``
- for I in "${Q_SRV_EXTRA_DEFAULT_OPTS[@]}"; do
- # Replace the first '=' with ' ' for iniset syntax
- iniset $NEUTRON_CONF DEFAULT ${I/=/ }
- done
-
# Configuration for neutron notifications to nova.
iniset $NEUTRON_CONF DEFAULT notify_nova_on_port_status_changes $Q_NOTIFY_NOVA_PORT_STATUS_CHANGES
iniset $NEUTRON_CONF DEFAULT notify_nova_on_port_data_changes $Q_NOTIFY_NOVA_PORT_DATA_CHANGES
diff --git a/lib/neutron_plugins/README.md b/lib/neutron_plugins/README.md
index be8fd96..7192a05 100644
--- a/lib/neutron_plugins/README.md
+++ b/lib/neutron_plugins/README.md
@@ -25,7 +25,7 @@
install_package bridge-utils
* ``neutron_plugin_configure_common`` :
set plugin-specific variables, ``Q_PLUGIN_CONF_PATH``, ``Q_PLUGIN_CONF_FILENAME``,
- ``Q_DB_NAME``, ``Q_PLUGIN_CLASS``
+ ``Q_PLUGIN_CLASS``
* ``neutron_plugin_configure_debug_command``
* ``neutron_plugin_configure_dhcp_agent``
* ``neutron_plugin_configure_l3_agent``
diff --git a/lib/neutron_plugins/bigswitch_floodlight b/lib/neutron_plugins/bigswitch_floodlight
index efdd9ef..9e84f2e 100644
--- a/lib/neutron_plugins/bigswitch_floodlight
+++ b/lib/neutron_plugins/bigswitch_floodlight
@@ -19,7 +19,6 @@
function neutron_plugin_configure_common {
Q_PLUGIN_CONF_PATH=etc/neutron/plugins/bigswitch
Q_PLUGIN_CONF_FILENAME=restproxy.ini
- Q_DB_NAME="restproxy_neutron"
Q_PLUGIN_CLASS="neutron.plugins.bigswitch.plugin.NeutronRestProxyV2"
BS_FL_CONTROLLERS_PORT=${BS_FL_CONTROLLERS_PORT:-localhost:80}
BS_FL_CONTROLLER_TIMEOUT=${BS_FL_CONTROLLER_TIMEOUT:-10}
diff --git a/lib/neutron_plugins/brocade b/lib/neutron_plugins/brocade
index e4cc754..511fb71 100644
--- a/lib/neutron_plugins/brocade
+++ b/lib/neutron_plugins/brocade
@@ -20,7 +20,6 @@
function neutron_plugin_configure_common {
Q_PLUGIN_CONF_PATH=etc/neutron/plugins/brocade
Q_PLUGIN_CONF_FILENAME=brocade.ini
- Q_DB_NAME="brcd_neutron"
Q_PLUGIN_CLASS="neutron.plugins.brocade.NeutronPlugin.BrocadePluginV2"
}
diff --git a/lib/neutron_plugins/cisco b/lib/neutron_plugins/cisco
index dccf400..da90ee3 100644
--- a/lib/neutron_plugins/cisco
+++ b/lib/neutron_plugins/cisco
@@ -197,7 +197,6 @@
Q_PLUGIN_CONF_FILENAME=cisco_plugins.ini
fi
Q_PLUGIN_CLASS="neutron.plugins.cisco.network_plugin.PluginV2"
- Q_DB_NAME=cisco_neutron
}
function neutron_plugin_configure_debug_command {
diff --git a/lib/neutron_plugins/embrane b/lib/neutron_plugins/embrane
index cce108a..7dafdc0 100644
--- a/lib/neutron_plugins/embrane
+++ b/lib/neutron_plugins/embrane
@@ -18,7 +18,6 @@
function neutron_plugin_configure_common {
Q_PLUGIN_CONF_PATH=etc/neutron/plugins/embrane
Q_PLUGIN_CONF_FILENAME=heleos_conf.ini
- Q_DB_NAME="ovs_neutron"
Q_PLUGIN_CLASS="neutron.plugins.embrane.plugins.embrane_ovs_plugin.EmbraneOvsPlugin"
}
diff --git a/lib/neutron_plugins/ibm b/lib/neutron_plugins/ibm
index 3aef9d0..39b0040 100644
--- a/lib/neutron_plugins/ibm
+++ b/lib/neutron_plugins/ibm
@@ -60,7 +60,6 @@
function neutron_plugin_configure_common {
Q_PLUGIN_CONF_PATH=etc/neutron/plugins/ibm
Q_PLUGIN_CONF_FILENAME=sdnve_neutron_plugin.ini
- Q_DB_NAME="sdnve_neutron"
Q_PLUGIN_CLASS="neutron.plugins.ibm.sdnve_neutron_plugin.SdnvePluginV2"
}
diff --git a/lib/neutron_plugins/linuxbridge b/lib/neutron_plugins/linuxbridge
index 113a7df..5f989ae 100644
--- a/lib/neutron_plugins/linuxbridge
+++ b/lib/neutron_plugins/linuxbridge
@@ -10,7 +10,6 @@
function neutron_plugin_configure_common {
Q_PLUGIN_CONF_PATH=etc/neutron/plugins/linuxbridge
Q_PLUGIN_CONF_FILENAME=linuxbridge_conf.ini
- Q_DB_NAME="neutron_linux_bridge"
Q_PLUGIN_CLASS="neutron.plugins.linuxbridge.lb_neutron_plugin.LinuxBridgePluginV2"
}
diff --git a/lib/neutron_plugins/midonet b/lib/neutron_plugins/midonet
index c5373d6..6ccd502 100644
--- a/lib/neutron_plugins/midonet
+++ b/lib/neutron_plugins/midonet
@@ -26,7 +26,6 @@
function neutron_plugin_configure_common {
Q_PLUGIN_CONF_PATH=etc/neutron/plugins/midonet
Q_PLUGIN_CONF_FILENAME=midonet.ini
- Q_DB_NAME="neutron_midonet"
Q_PLUGIN_CLASS="neutron.plugins.midonet.plugin.MidonetPluginV2"
}
diff --git a/lib/neutron_plugins/ml2 b/lib/neutron_plugins/ml2
index 9966373..d270015 100644
--- a/lib/neutron_plugins/ml2
+++ b/lib/neutron_plugins/ml2
@@ -7,9 +7,9 @@
# Enable this to simply and quickly enable tunneling with ML2.
# Select either 'gre', 'vxlan', or '(gre vxlan)'
-Q_ML2_TENANT_NETWORK_TYPE=${Q_ML2_TENANT_NETWORK_TYPE:-}
+Q_ML2_TENANT_NETWORK_TYPE=${Q_ML2_TENANT_NETWORK_TYPE:-"vxlan"}
# This has to be set here since the agent will set this in the config file
-if [[ "$Q_ML2_TENANT_NETWORK_TYPE" != "" ]]; then
+if [[ "$Q_ML2_TENANT_NETWORK_TYPE" != "local" ]]; then
Q_AGENT_EXTRA_AGENT_OPTS+=(tunnel_types=$Q_ML2_TENANT_NETWORK_TYPE)
elif [[ "$ENABLE_TENANT_TUNNELS" == "True" ]]; then
Q_AGENT_EXTRA_AGENT_OPTS+=(tunnel_types=gre)
@@ -50,7 +50,6 @@
function neutron_plugin_configure_common {
Q_PLUGIN_CONF_PATH=etc/neutron/plugins/ml2
Q_PLUGIN_CONF_FILENAME=ml2_conf.ini
- Q_DB_NAME="neutron_ml2"
Q_PLUGIN_CLASS="neutron.plugins.ml2.plugin.Ml2Plugin"
# The ML2 plugin delegates L3 routing/NAT functionality to
# the L3 service plugin which must therefore be specified.
@@ -58,7 +57,7 @@
}
function neutron_plugin_configure_service {
- if [[ "$Q_ML2_TENANT_NETWORK_TYPE" != "" ]]; then
+ if [[ "$Q_ML2_TENANT_NETWORK_TYPE" != "local" ]]; then
Q_SRV_EXTRA_OPTS+=(tenant_network_types=$Q_ML2_TENANT_NETWORK_TYPE)
elif [[ "$ENABLE_TENANT_TUNNELS" == "True" ]]; then
# This assumes you want a simple configuration, and will overwrite
@@ -99,7 +98,7 @@
fi
# Since we enable the tunnel TypeDrivers, also enable a local_ip
- iniset /$Q_PLUGIN_CONF_FILE ovs local_ip $HOST_IP
+ iniset /$Q_PLUGIN_CONF_FILE ovs local_ip $TUNNEL_ENDPOINT_IP
populate_ml2_config /$Q_PLUGIN_CONF_FILE ml2 mechanism_drivers=$Q_ML2_PLUGIN_MECHANISM_DRIVERS
@@ -112,6 +111,12 @@
populate_ml2_config /$Q_PLUGIN_CONF_FILE ml2_type_vxlan $Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS
populate_ml2_config /$Q_PLUGIN_CONF_FILE ml2_type_vlan $Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS
+
+ if [[ "$Q_DVR_MODE" != "legacy" ]]; then
+ populate_ml2_config /$Q_PLUGIN_CONF_FILE agent l2_population=True
+ populate_ml2_config /$Q_PLUGIN_CONF_FILE agent tunnel_types=vxlan
+ populate_ml2_config /$Q_PLUGIN_CONF_FILE agent enable_distributed_routing=True
+ fi
}
function has_neutron_plugin_security_group {
diff --git a/lib/neutron_plugins/nec b/lib/neutron_plugins/nec
index d76f7d4..f8d98c3 100644
--- a/lib/neutron_plugins/nec
+++ b/lib/neutron_plugins/nec
@@ -39,7 +39,6 @@
function neutron_plugin_configure_common {
Q_PLUGIN_CONF_PATH=etc/neutron/plugins/nec
Q_PLUGIN_CONF_FILENAME=nec.ini
- Q_DB_NAME="neutron_nec"
Q_PLUGIN_CLASS="neutron.plugins.nec.nec_plugin.NECPluginV2"
}
diff --git a/lib/neutron_plugins/nuage b/lib/neutron_plugins/nuage
index 86f09d2..52d85a2 100644
--- a/lib/neutron_plugins/nuage
+++ b/lib/neutron_plugins/nuage
@@ -20,7 +20,6 @@
function neutron_plugin_configure_common {
Q_PLUGIN_CONF_PATH=etc/neutron/plugins/nuage
Q_PLUGIN_CONF_FILENAME=nuage_plugin.ini
- Q_DB_NAME="nuage_neutron"
Q_PLUGIN_CLASS="neutron.plugins.nuage.plugin.NuagePlugin"
Q_PLUGIN_EXTENSIONS_PATH=neutron/plugins/nuage/extensions
#Nuage specific Neutron defaults. Actual value must be set and sourced
diff --git a/lib/neutron_plugins/ofagent_agent b/lib/neutron_plugins/ofagent_agent
index b8321f3..66283ad 100644
--- a/lib/neutron_plugins/ofagent_agent
+++ b/lib/neutron_plugins/ofagent_agent
@@ -54,7 +54,7 @@
die $LINENO "You are running OVS version $OVS_VERSION. OVS 1.4+ is required for tunneling between multiple hosts."
fi
iniset /$Q_PLUGIN_CONF_FILE ovs enable_tunneling True
- iniset /$Q_PLUGIN_CONF_FILE ovs local_ip $HOST_IP
+ iniset /$Q_PLUGIN_CONF_FILE ovs local_ip $TUNNEL_ENDPOINT_IP
fi
# Setup physical network bridge mappings. Override
diff --git a/lib/neutron_plugins/oneconvergence b/lib/neutron_plugins/oneconvergence
index 06f1eee..e5f0d71 100644
--- a/lib/neutron_plugins/oneconvergence
+++ b/lib/neutron_plugins/oneconvergence
@@ -19,7 +19,6 @@
Q_PLUGIN_CONF_PATH=etc/neutron/plugins/oneconvergence
Q_PLUGIN_CONF_FILENAME=nvsdplugin.ini
Q_PLUGIN_CLASS="neutron.plugins.oneconvergence.plugin.OneConvergencePluginV2"
- Q_DB_NAME='oc_nvsd_neutron'
}
# Configure plugin specific information
diff --git a/lib/neutron_plugins/openvswitch b/lib/neutron_plugins/openvswitch
index fc81092..c468132 100644
--- a/lib/neutron_plugins/openvswitch
+++ b/lib/neutron_plugins/openvswitch
@@ -10,7 +10,6 @@
function neutron_plugin_configure_common {
Q_PLUGIN_CONF_PATH=etc/neutron/plugins/openvswitch
Q_PLUGIN_CONF_FILENAME=ovs_neutron_plugin.ini
- Q_DB_NAME="ovs_neutron"
Q_PLUGIN_CLASS="neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2"
}
diff --git a/lib/neutron_plugins/openvswitch_agent b/lib/neutron_plugins/openvswitch_agent
index fbc013f..5adb0c5 100644
--- a/lib/neutron_plugins/openvswitch_agent
+++ b/lib/neutron_plugins/openvswitch_agent
@@ -48,7 +48,7 @@
die $LINENO "You are running OVS version $OVS_VERSION. OVS 1.4+ is required for tunneling between multiple hosts."
fi
iniset /$Q_PLUGIN_CONF_FILE ovs enable_tunneling True
- iniset /$Q_PLUGIN_CONF_FILE ovs local_ip $HOST_IP
+ iniset /$Q_PLUGIN_CONF_FILE ovs local_ip $TUNNEL_ENDPOINT_IP
fi
# Setup physical network bridge mappings. Override
diff --git a/lib/neutron_plugins/ovs_base b/lib/neutron_plugins/ovs_base
index 26c5489..616a236 100644
--- a/lib/neutron_plugins/ovs_base
+++ b/lib/neutron_plugins/ovs_base
@@ -7,6 +7,7 @@
OVS_BRIDGE=${OVS_BRIDGE:-br-int}
PUBLIC_BRIDGE=${PUBLIC_BRIDGE:-br-ex}
+OVS_DATAPATH_TYPE=${OVS_DATAPATH_TYPE:-""}
function is_neutron_ovs_base_plugin {
# Yes, we use OVS.
@@ -17,6 +18,9 @@
local bridge=$1
neutron-ovs-cleanup
sudo ovs-vsctl --no-wait -- --may-exist add-br $bridge
+ if [[ $OVS_DATAPATH_TYPE != "" ]]; then
+ sudo ovs-vsctl set Bridge $bridge datapath_type=${OVS_DATAPATH_TYPE}
+ fi
sudo ovs-vsctl --no-wait br-set-external-id $bridge bridge-id $bridge
}
diff --git a/lib/neutron_plugins/plumgrid b/lib/neutron_plugins/plumgrid
index 178bca7..37b9e4c 100644
--- a/lib/neutron_plugins/plumgrid
+++ b/lib/neutron_plugins/plumgrid
@@ -17,7 +17,6 @@
function neutron_plugin_configure_common {
Q_PLUGIN_CONF_PATH=etc/neutron/plugins/plumgrid
Q_PLUGIN_CONF_FILENAME=plumgrid.ini
- Q_DB_NAME="plumgrid_neutron"
Q_PLUGIN_CLASS="neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2"
PLUMGRID_DIRECTOR_IP=${PLUMGRID_DIRECTOR_IP:-localhost}
PLUMGRID_DIRECTOR_PORT=${PLUMGRID_DIRECTOR_PORT:-7766}
diff --git a/lib/neutron_plugins/ryu b/lib/neutron_plugins/ryu
index ceb89fa..f45a797 100644
--- a/lib/neutron_plugins/ryu
+++ b/lib/neutron_plugins/ryu
@@ -25,7 +25,6 @@
function neutron_plugin_configure_common {
Q_PLUGIN_CONF_PATH=etc/neutron/plugins/ryu
Q_PLUGIN_CONF_FILENAME=ryu.ini
- Q_DB_NAME="ovs_neutron"
Q_PLUGIN_CLASS="neutron.plugins.ryu.ryu_neutron_plugin.RyuNeutronPluginV2"
}
diff --git a/lib/neutron_plugins/vmware_nsx b/lib/neutron_plugins/vmware_nsx
index c7672db..5802ebf 100644
--- a/lib/neutron_plugins/vmware_nsx
+++ b/lib/neutron_plugins/vmware_nsx
@@ -40,7 +40,6 @@
function neutron_plugin_configure_common {
Q_PLUGIN_CONF_PATH=etc/neutron/plugins/vmware
Q_PLUGIN_CONF_FILENAME=nsx.ini
- Q_DB_NAME="neutron_nsx"
Q_PLUGIN_CLASS="neutron.plugins.vmware.plugin.NsxPlugin"
}
diff --git a/lib/nova b/lib/nova
index 3d31d68..6b1afd9 100644
--- a/lib/nova
+++ b/lib/nova
@@ -173,14 +173,15 @@
clean_iptables
# Destroy old instances
- instances=`sudo virsh list --all | grep $INSTANCE_NAME_PREFIX | sed "s/.*\($INSTANCE_NAME_PREFIX[0-9a-fA-F]*\).*/\1/g"`
+ local instances=`sudo virsh list --all | grep $INSTANCE_NAME_PREFIX | sed "s/.*\($INSTANCE_NAME_PREFIX[0-9a-fA-F]*\).*/\1/g"`
if [ ! "$instances" = "" ]; then
echo $instances | xargs -n1 sudo virsh destroy || true
echo $instances | xargs -n1 sudo virsh undefine --managed-save || true
fi
# Logout and delete iscsi sessions
- tgts=$(sudo iscsiadm --mode node | grep $VOLUME_NAME_PREFIX | cut -d ' ' -f2)
+ local tgts=$(sudo iscsiadm --mode node | grep $VOLUME_NAME_PREFIX | cut -d ' ' -f2)
+ local target
for target in $tgts; do
sudo iscsiadm --mode node -T $target --logout || true
done
@@ -218,14 +219,14 @@
sudo chown root:root $NOVA_CONF_DIR/rootwrap.conf
sudo chmod 0644 $NOVA_CONF_DIR/rootwrap.conf
# Specify rootwrap.conf as first parameter to nova-rootwrap
- ROOTWRAP_SUDOER_CMD="$NOVA_ROOTWRAP $NOVA_CONF_DIR/rootwrap.conf *"
+ local rootwrap_sudoer_cmd="$NOVA_ROOTWRAP $NOVA_CONF_DIR/rootwrap.conf *"
# Set up the rootwrap sudoers for nova
- TEMPFILE=`mktemp`
- echo "$STACK_USER ALL=(root) NOPASSWD: $ROOTWRAP_SUDOER_CMD" >$TEMPFILE
- chmod 0440 $TEMPFILE
- sudo chown root:root $TEMPFILE
- sudo mv $TEMPFILE /etc/sudoers.d/nova-rootwrap
+ local tempfile=`mktemp`
+ echo "$STACK_USER ALL=(root) NOPASSWD: $rootwrap_sudoer_cmd" >$tempfile
+ chmod 0440 $tempfile
+ sudo chown root:root $tempfile
+ sudo mv $tempfile /etc/sudoers.d/nova-rootwrap
}
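The sudoers change above follows a common pattern: build the file as the unprivileged user, fix its permissions, then move it into place in one step. A sketch of the same pattern, with a scratch directory standing in for `/etc/sudoers.d` so it runs without sudo:

```shell
# Stage a root-owned config file: write to a temp file, set strict
# permissions, then install it with a single mv (here into a scratch dir).
target_dir=$(mktemp -d)
tempfile=$(mktemp)
echo "stack ALL=(root) NOPASSWD: /usr/local/bin/nova-rootwrap *" > "$tempfile"
chmod 0440 "$tempfile"
mv "$tempfile" "$target_dir/nova-rootwrap"
stat -c '%a' "$target_dir/nova-rootwrap"
```

The final `mv` is atomic on the same filesystem, so sudo never sees a half-written sudoers file.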
# configure_nova() - Set config files, create data dirs, etc
@@ -274,7 +275,7 @@
if [[ "$LIBVIRT_TYPE" == "lxc" ]]; then
if is_ubuntu; then
if [[ ! "$DISTRO" > natty ]]; then
- cgline="none /cgroup cgroup cpuacct,memory,devices,cpu,freezer,blkio 0 0"
+ local cgline="none /cgroup cgroup cpuacct,memory,devices,cpu,freezer,blkio 0 0"
sudo mkdir -p /cgroup
if ! grep -q cgroup /etc/fstab; then
echo "$cgline" | sudo tee -a /etc/fstab
@@ -328,44 +329,33 @@
# Migrated from keystone_data.sh
create_nova_accounts() {
- SERVICE_TENANT=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
- ADMIN_ROLE=$(openstack role list | awk "/ admin / { print \$2 }")
+ local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
+ local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
# Nova
if [[ "$ENABLED_SERVICES" =~ "n-api" ]]; then
- NOVA_USER=$(openstack user create \
- nova \
- --password "$SERVICE_PASSWORD" \
- --project $SERVICE_TENANT \
- --email nova@example.com \
- | grep " id " | get_field 2)
- openstack role add \
- $ADMIN_ROLE \
- --project $SERVICE_TENANT \
- --user $NOVA_USER
+
+ local nova_user=$(get_or_create_user "nova" \
+ "$SERVICE_PASSWORD" $service_tenant)
+ get_or_add_user_role $admin_role $nova_user $service_tenant
+
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
- NOVA_SERVICE=$(openstack service create \
- nova \
- --type=compute \
- --description="Nova Compute Service" \
- | grep " id " | get_field 2)
- openstack endpoint create \
- $NOVA_SERVICE \
- --region RegionOne \
- --publicurl "$NOVA_SERVICE_PROTOCOL://$NOVA_SERVICE_HOST:$NOVA_SERVICE_PORT/v2/\$(tenant_id)s" \
- --adminurl "$NOVA_SERVICE_PROTOCOL://$NOVA_SERVICE_HOST:$NOVA_SERVICE_PORT/v2/\$(tenant_id)s" \
- --internalurl "$NOVA_SERVICE_PROTOCOL://$NOVA_SERVICE_HOST:$NOVA_SERVICE_PORT/v2/\$(tenant_id)s"
- NOVA_V3_SERVICE=$(openstack service create \
- novav3 \
- --type=computev3 \
- --description="Nova Compute Service V3" \
- | grep " id " | get_field 2)
- openstack endpoint create \
- $NOVA_V3_SERVICE \
- --region RegionOne \
- --publicurl "$NOVA_SERVICE_PROTOCOL://$NOVA_SERVICE_HOST:$NOVA_SERVICE_PORT/v3" \
- --adminurl "$NOVA_SERVICE_PROTOCOL://$NOVA_SERVICE_HOST:$NOVA_SERVICE_PORT/v3" \
- --internalurl "$NOVA_SERVICE_PROTOCOL://$NOVA_SERVICE_HOST:$NOVA_SERVICE_PORT/v3"
+
+ local nova_service=$(get_or_create_service "nova" \
+ "compute" "Nova Compute Service")
+ get_or_create_endpoint $nova_service \
+ "$REGION_NAME" \
+ "$NOVA_SERVICE_PROTOCOL://$NOVA_SERVICE_HOST:$NOVA_SERVICE_PORT/v2/\$(tenant_id)s" \
+ "$NOVA_SERVICE_PROTOCOL://$NOVA_SERVICE_HOST:$NOVA_SERVICE_PORT/v2/\$(tenant_id)s" \
+ "$NOVA_SERVICE_PROTOCOL://$NOVA_SERVICE_HOST:$NOVA_SERVICE_PORT/v2/\$(tenant_id)s"
+
+ local nova_v3_service=$(get_or_create_service "novav3" \
+ "computev3" "Nova Compute Service V3")
+ get_or_create_endpoint $nova_v3_service \
+ "$REGION_NAME" \
+ "$NOVA_SERVICE_PROTOCOL://$NOVA_SERVICE_HOST:$NOVA_SERVICE_PORT/v3" \
+ "$NOVA_SERVICE_PROTOCOL://$NOVA_SERVICE_HOST:$NOVA_SERVICE_PORT/v3" \
+ "$NOVA_SERVICE_PROTOCOL://$NOVA_SERVICE_HOST:$NOVA_SERVICE_PORT/v3"
fi
fi
@@ -374,40 +364,32 @@
if is_service_enabled swift; then
# Nova needs ResellerAdmin role to download images when accessing
# swift through the s3 api.
- openstack role add \
- --project $SERVICE_TENANT_NAME \
- --user nova \
- ResellerAdmin
+ get_or_add_user_role ResellerAdmin nova $SERVICE_TENANT_NAME
fi
# EC2
if [[ "$KEYSTONE_CATALOG_BACKEND" = "sql" ]]; then
- openstack service create \
- --type ec2 \
- --description "EC2 Compatibility Layer" \
- ec2
- openstack endpoint create \
- --region RegionOne \
- --publicurl "http://$SERVICE_HOST:8773/services/Cloud" \
- --adminurl "http://$SERVICE_HOST:8773/services/Admin" \
- --internalurl "http://$SERVICE_HOST:8773/services/Cloud" \
- ec2
+
+ local ec2_service=$(get_or_create_service "ec2" \
+ "ec2" "EC2 Compatibility Layer")
+ get_or_create_endpoint $ec2_service \
+ "$REGION_NAME" \
+ "http://$SERVICE_HOST:8773/services/Cloud" \
+ "http://$SERVICE_HOST:8773/services/Admin" \
+ "http://$SERVICE_HOST:8773/services/Cloud"
fi
fi
# S3
if is_service_enabled n-obj swift3; then
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
- openstack service create \
- --type s3 \
- --description "S3" \
- s3
- openstack endpoint create \
- --region RegionOne \
- --publicurl "http://$SERVICE_HOST:$S3_SERVICE_PORT" \
- --adminurl "http://$SERVICE_HOST:$S3_SERVICE_PORT" \
- --internalurl "http://$SERVICE_HOST:$S3_SERVICE_PORT" \
- s3
+
+ local s3_service=$(get_or_create_service "s3" "s3" "S3")
+ get_or_create_endpoint $s3_service \
+ "$REGION_NAME" \
+ "http://$SERVICE_HOST:$S3_SERVICE_PORT" \
+ "http://$SERVICE_HOST:$S3_SERVICE_PORT" \
+ "http://$SERVICE_HOST:$S3_SERVICE_PORT"
fi
fi
}
@@ -499,18 +481,6 @@
iniset $NOVA_CONF DEFAULT notification_driver "messaging"
fi
- # Provide some transition from ``EXTRA_FLAGS`` to ``EXTRA_OPTS``
- if [[ -z "$EXTRA_OPTS" && -n "$EXTRA_FLAGS" ]]; then
- EXTRA_OPTS=$EXTRA_FLAGS
- fi
-
- # Define extra nova conf flags by defining the array ``EXTRA_OPTS``.
- # For Example: ``EXTRA_OPTS=(foo=true bar=2)``
- for I in "${EXTRA_OPTS[@]}"; do
- # Replace the first '=' with ' ' for iniset syntax
- iniset $NOVA_CONF DEFAULT ${I/=/ }
- done
-
# All nova-compute workers need to know the vnc configuration options
# These settings don't hurt anything if n-xvnc and n-novnc are disabled
if is_service_enabled n-cpu; then
@@ -706,6 +676,7 @@
# Use 'sg' to execute nova-compute as a member of the **$LIBVIRT_GROUP** group.
screen_it n-cpu "cd $NOVA_DIR && sg $LIBVIRT_GROUP '$NOVA_BIN_DIR/nova-compute --config-file $compute_cell_conf'"
elif [[ "$VIRT_DRIVER" = 'fake' ]]; then
+ local i
for i in `seq 1 $NUMBER_FAKE_NOVA_COMPUTE`; do
screen_it n-cpu "cd $NOVA_DIR && $NOVA_BIN_DIR/nova-compute --config-file $compute_cell_conf --config-file <(echo -e '[DEFAULT]\nhost=${HOSTNAME}${i}')"
done
diff --git a/lib/nova_plugins/functions-libvirt b/lib/nova_plugins/functions-libvirt
index 18bdf89..6fb5c38 100644
--- a/lib/nova_plugins/functions-libvirt
+++ b/lib/nova_plugins/functions-libvirt
@@ -112,7 +112,15 @@
# Enable server side traces for libvirtd
if [[ "$DEBUG_LIBVIRT" = "True" ]] ; then
- local log_filters="1:libvirt 1:qemu 1:conf 1:security 3:event 3:json 3:file 1:util"
+ if is_ubuntu; then
+ # Unexpectedly, binary package builds in Ubuntu use fully qualified
+ # source file paths rather than relative paths. This breaks the matching
+ # of '1:libvirt', turning everything on. Use libvirt.c for now; revisit
+ # when Ubuntu ships libvirt >= 1.2.3.
+ local log_filters="1:libvirt.c 1:qemu 1:conf 1:security 3:object 3:event 3:json 3:file 1:util"
+ else
+ local log_filters="1:libvirt 1:qemu 1:conf 1:security 3:object 3:event 3:json 3:file 1:util"
+ fi
local log_outputs="1:file:/var/log/libvirt/libvirtd.log"
if ! grep -q "log_filters=\"$log_filters\"" /etc/libvirt/libvirtd.conf; then
echo "log_filters=\"$log_filters\"" | sudo tee -a /etc/libvirt/libvirtd.conf
diff --git a/lib/nova_plugins/hypervisor-baremetal b/lib/nova_plugins/hypervisor-baremetal
index 1d4d414..22d16a6 100644
--- a/lib/nova_plugins/hypervisor-baremetal
+++ b/lib/nova_plugins/hypervisor-baremetal
@@ -58,12 +58,6 @@
sudo cp "$FILES/dnsmasq-for-baremetal-from-nova-network.conf" "$BM_DNSMASQ_CONF"
iniset $NOVA_CONF DEFAULT dnsmasq_config_file "$BM_DNSMASQ_CONF"
fi
-
- # Define extra baremetal nova conf flags by defining the array ``EXTRA_BAREMETAL_OPTS``.
- for I in "${EXTRA_BAREMETAL_OPTS[@]}"; do
- # Attempt to convert flags to options
- iniset $NOVA_CONF baremetal ${I/=/ }
- done
}
# install_nova_hypervisor() - Install external components
diff --git a/lib/oslo b/lib/oslo
index 8d9feb9..025815c 100644
--- a/lib/oslo
+++ b/lib/oslo
@@ -23,6 +23,7 @@
CLIFF_DIR=$DEST/cliff
OSLOCFG_DIR=$DEST/oslo.config
OSLODB_DIR=$DEST/oslo.db
+OSLOI18N_DIR=$DEST/oslo.i18n
OSLOMSG_DIR=$DEST/oslo.messaging
OSLORWRAP_DIR=$DEST/oslo.rootwrap
OSLOVMWARE_DIR=$DEST/oslo.vmware
@@ -41,6 +42,9 @@
git_clone $CLIFF_REPO $CLIFF_DIR $CLIFF_BRANCH
setup_install $CLIFF_DIR
+ git_clone $OSLOI18N_REPO $OSLOI18N_DIR $OSLOI18N_BRANCH
+ setup_install $OSLOI18N_DIR
+
git_clone $OSLOCFG_REPO $OSLOCFG_DIR $OSLOCFG_BRANCH
setup_install $OSLOCFG_DIR
diff --git a/lib/rpc_backend b/lib/rpc_backend
index e922daa..38da50c 100644
--- a/lib/rpc_backend
+++ b/lib/rpc_backend
@@ -26,6 +26,8 @@
# Make sure we only have one rpc backend enabled.
# Also check the specified rpc backend is available on your platform.
function check_rpc_backend {
+ local c svc
+
local rpc_needed=1
# We rely on the fact that filenames in lib/* match the service names
# that can be passed as arguments to is_service_enabled.
@@ -94,11 +96,7 @@
function install_rpc_backend {
if is_service_enabled rabbit; then
# Install rabbitmq-server
- # the temp file is necessary due to LP: #878600
- tfile=$(mktemp)
- install_package rabbitmq-server > "$tfile" 2>&1
- cat "$tfile"
- rm -f "$tfile"
+ install_package rabbitmq-server
elif is_service_enabled qpid; then
if is_fedora; then
install_package qpid-cpp-server
@@ -142,6 +140,7 @@
# NOTE(bnemec): Retry initial rabbitmq configuration to deal with
# the fact that sometimes it fails to start properly.
# Reference: https://bugzilla.redhat.com/show_bug.cgi?id=1059028
+ local i
for i in `seq 10`; do
if is_fedora || is_suse; then
# service is not started by default
diff --git a/lib/sahara b/lib/sahara
index 934989b..70feacd 100644
--- a/lib/sahara
+++ b/lib/sahara
@@ -60,29 +60,19 @@
SERVICE_TENANT=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
ADMIN_ROLE=$(openstack role list | awk "/ admin / { print \$2 }")
- SAHARA_USER=$(openstack user create \
- sahara \
- --password "$SERVICE_PASSWORD" \
- --project $SERVICE_TENANT \
- --email sahara@example.com \
- | grep " id " | get_field 2)
- openstack role add \
- $ADMIN_ROLE \
- --project $SERVICE_TENANT \
- --user $SAHARA_USER
+ SAHARA_USER=$(get_or_create_user "sahara" \
+ "$SERVICE_PASSWORD" $SERVICE_TENANT)
+ get_or_add_user_role $ADMIN_ROLE $SAHARA_USER $SERVICE_TENANT
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
- SAHARA_SERVICE=$(openstack service create \
- sahara \
- --type=data_processing \
- --description="Sahara Data Processing" \
- | grep " id " | get_field 2)
- openstack endpoint create \
- $SAHARA_SERVICE \
- --region RegionOne \
- --publicurl "$SAHARA_SERVICE_PROTOCOL://$SAHARA_SERVICE_HOST:$SAHARA_SERVICE_PORT/v1.1/\$(tenant_id)s" \
- --adminurl "$SAHARA_SERVICE_PROTOCOL://$SAHARA_SERVICE_HOST:$SAHARA_SERVICE_PORT/v1.1/\$(tenant_id)s" \
- --internalurl "$SAHARA_SERVICE_PROTOCOL://$SAHARA_SERVICE_HOST:$SAHARA_SERVICE_PORT/v1.1/\$(tenant_id)s"
+
+ SAHARA_SERVICE=$(get_or_create_service "sahara" \
+ "data_processing" "Sahara Data Processing")
+ get_or_create_endpoint $SAHARA_SERVICE \
+ "$REGION_NAME" \
+ "$SAHARA_SERVICE_PROTOCOL://$SAHARA_SERVICE_HOST:$SAHARA_SERVICE_PORT/v1.1/\$(tenant_id)s" \
+ "$SAHARA_SERVICE_PROTOCOL://$SAHARA_SERVICE_HOST:$SAHARA_SERVICE_PORT/v1.1/\$(tenant_id)s" \
+ "$SAHARA_SERVICE_PROTOCOL://$SAHARA_SERVICE_HOST:$SAHARA_SERVICE_PORT/v1.1/\$(tenant_id)s"
fi
}
diff --git a/lib/swift b/lib/swift
index c47b09f..84304d3 100644
--- a/lib/swift
+++ b/lib/swift
@@ -118,6 +118,8 @@
# Tell Tempest this project is present
TEMPEST_SERVICES+=,swift
+# Toggle for deploying Swift under HTTPD + mod_wsgi
+SWIFT_USE_MOD_WSGI=${SWIFT_USE_MOD_WSGI:-False}
# Functions
# ---------
@@ -139,7 +141,7 @@
rm ${SWIFT_DISK_IMAGE}
fi
rm -rf ${SWIFT_DATA_DIR}/run/
- if is_apache_enabled_service swift; then
+ if [ "$SWIFT_USE_MOD_WSGI" == "True" ]; then
_cleanup_swift_apache_wsgi
fi
}
@@ -337,12 +339,12 @@
iniset ${SWIFT_CONFIG_PROXY_SERVER} app:proxy-server node_timeout 120
iniset ${SWIFT_CONFIG_PROXY_SERVER} app:proxy-server conn_timeout 20
- # Skipped due to bug 1294789
- ## Configure Ceilometer
- #if is_service_enabled ceilometer; then
- # iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer use "egg:ceilometer#swift"
- # SWIFT_EXTRAS_MIDDLEWARE_LAST="${SWIFT_EXTRAS_MIDDLEWARE_LAST} ceilometer"
- #fi
+ # Configure Ceilometer
+ if is_service_enabled ceilometer; then
+ iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer "set log_level" "WARN"
+ iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:ceilometer use "egg:ceilometer#swift"
+ SWIFT_EXTRAS_MIDDLEWARE_LAST="${SWIFT_EXTRAS_MIDDLEWARE_LAST} ceilometer"
+ fi
# Restrict the length of auth tokens in the swift proxy-server logs.
iniset ${SWIFT_CONFIG_PROXY_SERVER} filter:proxy-logging reveal_sensitive_prefix ${SWIFT_LOG_TOKEN_LENGTH}
@@ -354,7 +356,12 @@
if is_service_enabled swift3;then
swift_pipeline+=" swift3 s3token "
fi
- swift_pipeline+=" authtoken keystoneauth tempauth "
+
+ if is_service_enabled key; then
+ swift_pipeline+=" authtoken keystoneauth"
+ fi
+ swift_pipeline+=" tempauth "
+
sed -i "/^pipeline/ { s/tempauth/${swift_pipeline} ${SWIFT_EXTRAS_MIDDLEWARE}/ ;}" ${SWIFT_CONFIG_PROXY_SERVER}
sed -i "/^pipeline/ { s/proxy-server/${SWIFT_EXTRAS_MIDDLEWARE_LAST} proxy-server/ ; }" ${SWIFT_CONFIG_PROXY_SERVER}
@@ -465,7 +472,7 @@
sudo killall -HUP rsyslogd
fi
- if is_apache_enabled_service swift; then
+ if [ "$SWIFT_USE_MOD_WSGI" == "True" ]; then
_config_swift_apache_wsgi
fi
}
@@ -542,50 +549,40 @@
SERVICE_TENANT=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
ADMIN_ROLE=$(openstack role list | awk "/ admin / { print \$2 }")
- SWIFT_USER=$(openstack user create \
- swift \
- --password "$SERVICE_PASSWORD" \
- --project $SERVICE_TENANT \
- --email=swift@example.com \
- | grep " id " | get_field 2)
- openstack role add \
- $ADMIN_ROLE \
- --project $SERVICE_TENANT \
- --user $SWIFT_USER
+ SWIFT_USER=$(get_or_create_user "swift" \
+ "$SERVICE_PASSWORD" $SERVICE_TENANT)
+ get_or_add_user_role $ADMIN_ROLE $SWIFT_USER $SERVICE_TENANT
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
- SWIFT_SERVICE=$(openstack service create \
- swift \
- --type="object-store" \
- --description="Swift Service" \
- | grep " id " | get_field 2)
- openstack endpoint create \
- $SWIFT_SERVICE \
- --region RegionOne \
- --publicurl "http://$SERVICE_HOST:8080/v1/AUTH_\$(tenant_id)s" \
- --adminurl "http://$SERVICE_HOST:8080" \
- --internalurl "http://$SERVICE_HOST:8080/v1/AUTH_\$(tenant_id)s"
+
+ SWIFT_SERVICE=$(get_or_create_service "swift" \
+ "object-store" "Swift Service")
+ get_or_create_endpoint $SWIFT_SERVICE \
+ "$REGION_NAME" \
+ "http://$SERVICE_HOST:8080/v1/AUTH_\$(tenant_id)s" \
+ "http://$SERVICE_HOST:8080" \
+ "http://$SERVICE_HOST:8080/v1/AUTH_\$(tenant_id)s"
fi
- SWIFT_TENANT_TEST1=$(openstack project create swifttenanttest1 | grep " id " | get_field 2)
+ SWIFT_TENANT_TEST1=$(get_or_create_project swifttenanttest1)
die_if_not_set $LINENO SWIFT_TENANT_TEST1 "Failure creating SWIFT_TENANT_TEST1"
- SWIFT_USER_TEST1=$(openstack user create swiftusertest1 --password=$SWIFTUSERTEST1_PASSWORD \
- --project "$SWIFT_TENANT_TEST1" --email=test@example.com | grep " id " | get_field 2)
+ SWIFT_USER_TEST1=$(get_or_create_user swiftusertest1 $SWIFTUSERTEST1_PASSWORD \
+ "$SWIFT_TENANT_TEST1" "test@example.com")
die_if_not_set $LINENO SWIFT_USER_TEST1 "Failure creating SWIFT_USER_TEST1"
- openstack role add --user $SWIFT_USER_TEST1 --project $SWIFT_TENANT_TEST1 $ADMIN_ROLE
+ get_or_add_user_role $ADMIN_ROLE $SWIFT_USER_TEST1 $SWIFT_TENANT_TEST1
- SWIFT_USER_TEST3=$(openstack user create swiftusertest3 --password=$SWIFTUSERTEST3_PASSWORD \
- --project "$SWIFT_TENANT_TEST1" --email=test3@example.com | grep " id " | get_field 2)
+ SWIFT_USER_TEST3=$(get_or_create_user swiftusertest3 $SWIFTUSERTEST3_PASSWORD \
+ "$SWIFT_TENANT_TEST1" "test3@example.com")
die_if_not_set $LINENO SWIFT_USER_TEST3 "Failure creating SWIFT_USER_TEST3"
- openstack role add --user $SWIFT_USER_TEST3 --project $SWIFT_TENANT_TEST1 $ANOTHER_ROLE
+ get_or_add_user_role $ANOTHER_ROLE $SWIFT_USER_TEST3 $SWIFT_TENANT_TEST1
- SWIFT_TENANT_TEST2=$(openstack project create swifttenanttest2 | grep " id " | get_field 2)
+ SWIFT_TENANT_TEST2=$(get_or_create_project swifttenanttest2)
die_if_not_set $LINENO SWIFT_TENANT_TEST2 "Failure creating SWIFT_TENANT_TEST2"
- SWIFT_USER_TEST2=$(openstack user create swiftusertest2 --password=$SWIFTUSERTEST2_PASSWORD \
- --project "$SWIFT_TENANT_TEST2" --email=test2@example.com | grep " id " | get_field 2)
+ SWIFT_USER_TEST2=$(get_or_create_user swiftusertest2 $SWIFTUSERTEST2_PASSWORD \
+ "$SWIFT_TENANT_TEST2" "test2@example.com")
die_if_not_set $LINENO SWIFT_USER_TEST2 "Failure creating SWIFT_USER_TEST2"
- openstack role add --user $SWIFT_USER_TEST2 --project $SWIFT_TENANT_TEST2 $ADMIN_ROLE
+ get_or_add_user_role $ADMIN_ROLE $SWIFT_USER_TEST2 $SWIFT_TENANT_TEST2
}
# init_swift() - Initialize rings
@@ -626,7 +623,7 @@
function install_swift {
git_clone $SWIFT_REPO $SWIFT_DIR $SWIFT_BRANCH
setup_develop $SWIFT_DIR
- if is_apache_enabled_service swift; then
+ if [ "$SWIFT_USE_MOD_WSGI" == "True" ]; then
install_apache_wsgi
fi
}
@@ -650,7 +647,7 @@
start_service rsyncd
fi
- if is_apache_enabled_service swift; then
+ if [ "$SWIFT_USE_MOD_WSGI" == "True" ]; then
restart_apache_server
swift-init --run-dir=${SWIFT_DATA_DIR}/run rest start
screen_it s-proxy "cd $SWIFT_DIR && sudo tail -f /var/log/$APACHE_NAME/proxy-server"
@@ -687,7 +684,7 @@
# stop_swift() - Stop running processes (non-screen)
function stop_swift {
- if is_apache_enabled_service swift; then
+ if [ "$SWIFT_USE_MOD_WSGI" == "True" ]; then
swift-init --run-dir=${SWIFT_DATA_DIR}/run rest stop && return 0
fi
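The refactor above replaces long raw `openstack` CLI invocations with idempotent `get_or_create_*` helpers, so re-running `stack.sh` does not fail on already-existing users, roles, and endpoints. A hypothetical stand-in for the pattern (the real helpers live in DevStack's `functions-common` and shell out to the `openstack` CLI; a plain variable serves as the backing store here so the idiom itself is runnable anywhere):

```shell
# Hypothetical sketch of the "get or create" idiom: check for an existing
# entry first, create it only if absent, so the call is safe to repeat.
USERS=""

get_or_create_user() {
    name=$1
    # Wrap both sides in commas so "swift" never matches inside "swiftop"
    case ",${USERS}," in
        *",${name},"*) return 0 ;;    # already exists: nothing to do
    esac
    USERS="${USERS:+$USERS,}$name"    # create exactly once
}
```

Calling the helper twice with the same name is a no-op the second time, which is exactly what makes the keystone account setup re-entrant.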
diff --git a/lib/tempest b/lib/tempest
index 1e98bec..91ede0d 100644
--- a/lib/tempest
+++ b/lib/tempest
@@ -307,9 +307,9 @@
iniset $TEMPEST_CONFIG boto ec2_url "http://$SERVICE_HOST:8773/services/Cloud"
iniset $TEMPEST_CONFIG boto s3_url "http://$SERVICE_HOST:${S3_SERVICE_PORT:-3333}"
iniset $TEMPEST_CONFIG boto s3_materials_path "$BOTO_MATERIALS_PATH"
- iniset $TEMPEST_CONFIG boto ari_manifest cirros-${CIRROS_VERSION}-x86_64-initrd.manifest.xml
- iniset $TEMPEST_CONFIG boto ami_manifest cirros-${CIRROS_VERSION}-x86_64-blank.img.manifest.xml
- iniset $TEMPEST_CONFIG boto aki_manifest cirros-${CIRROS_VERSION}-x86_64-vmlinuz.manifest.xml
+ iniset $TEMPEST_CONFIG boto ari_manifest cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-initrd.manifest.xml
+ iniset $TEMPEST_CONFIG boto ami_manifest cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-blank.img.manifest.xml
+ iniset $TEMPEST_CONFIG boto aki_manifest cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-vmlinuz.manifest.xml
iniset $TEMPEST_CONFIG boto instance_type "$boto_instance_type"
iniset $TEMPEST_CONFIG boto http_socket_timeout 30
iniset $TEMPEST_CONFIG boto ssh_user ${DEFAULT_INSTANCE_USER:-cirros}
@@ -329,10 +329,10 @@
fi
# Scenario
- iniset $TEMPEST_CONFIG scenario img_dir "$FILES/images/cirros-${CIRROS_VERSION}-x86_64-uec"
- iniset $TEMPEST_CONFIG scenario ami_img_file "cirros-${CIRROS_VERSION}-x86_64-blank.img"
- iniset $TEMPEST_CONFIG scenario ari_img_file "cirros-${CIRROS_VERSION}-x86_64-initrd"
- iniset $TEMPEST_CONFIG scenario aki_img_file "cirros-${CIRROS_VERSION}-x86_64-vmlinuz"
+ iniset $TEMPEST_CONFIG scenario img_dir "$FILES/images/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-uec"
+ iniset $TEMPEST_CONFIG scenario ami_img_file "cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-blank.img"
+ iniset $TEMPEST_CONFIG scenario ari_img_file "cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-initrd"
+ iniset $TEMPEST_CONFIG scenario aki_img_file "cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-vmlinuz"
# Large Ops Number
iniset $TEMPEST_CONFIG scenario large_ops_number ${TEMPEST_LARGE_OPS_NUMBER:-0}
@@ -354,7 +354,7 @@
fi
if [ $TEMPEST_VOLUME_DRIVER != "default" ]; then
- iniset $TEMPEST_CONFIG volume vendor_name $TEMPEST_VOLUME_VENDOR
+ iniset $TEMPEST_CONFIG volume vendor_name "$TEMPEST_VOLUME_VENDOR"
iniset $TEMPEST_CONFIG volume storage_protocol $TEMPEST_STORAGE_PROTOCOL
fi
@@ -397,16 +397,9 @@
if is_service_enabled tempest; then
# Tempest has some tests that validate various authorization checks
# between two regular users in separate tenants
- openstack project create \
- alt_demo
- openstack user create \
- --project alt_demo \
- --password "$ADMIN_PASSWORD" \
- alt_demo
- openstack role add \
- --project alt_demo \
- --user alt_demo \
- Member
+ get_or_create_project alt_demo
+ get_or_create_user alt_demo "$ADMIN_PASSWORD" alt_demo "alt_demo@example.com"
+ get_or_add_user_role Member alt_demo alt_demo
fi
}
@@ -418,8 +411,8 @@
# init_tempest() - Initialize ec2 images
function init_tempest {
- local base_image_name=cirros-${CIRROS_VERSION}-x86_64
- # /opt/stack/devstack/files/images/cirros-${CIRROS_VERSION}-x86_64-uec
+ local base_image_name=cirros-${CIRROS_VERSION}-${CIRROS_ARCH}
+ # /opt/stack/devstack/files/images/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-uec
local image_dir="$FILES/images/${base_image_name}-uec"
local kernel="$image_dir/${base_image_name}-vmlinuz"
local ramdisk="$image_dir/${base_image_name}-initrd"
@@ -431,9 +424,9 @@
( #new namespace
# tenant:demo ; user: demo
source $TOP_DIR/accrc/demo/demo
- euca-bundle-image -r x86_64 -i "$kernel" --kernel true -d "$BOTO_MATERIALS_PATH"
- euca-bundle-image -r x86_64 -i "$ramdisk" --ramdisk true -d "$BOTO_MATERIALS_PATH"
- euca-bundle-image -r x86_64 -i "$disk_image" -d "$BOTO_MATERIALS_PATH"
+ euca-bundle-image -r ${CIRROS_ARCH} -i "$kernel" --kernel true -d "$BOTO_MATERIALS_PATH"
+ euca-bundle-image -r ${CIRROS_ARCH} -i "$ramdisk" --ramdisk true -d "$BOTO_MATERIALS_PATH"
+ euca-bundle-image -r ${CIRROS_ARCH} -i "$disk_image" -d "$BOTO_MATERIALS_PATH"
) 2>&1 </dev/null | cat
else
echo "Boto materials are not prepared"
diff --git a/lib/tls b/lib/tls
index 02906b7..e58e513 100644
--- a/lib/tls
+++ b/lib/tls
@@ -18,6 +18,9 @@
# - configure_proxy
# - start_tls_proxy
+# - stop_tls_proxy
+# - cleanup_CA
+
# - make_root_CA
# - make_int_CA
# - make_cert ca-dir cert-name "common-name" ["alt-name" ...]
@@ -320,7 +323,8 @@
#
# Uses global ``SSL_ENABLED_SERVICES``
function is_ssl_enabled_service {
- services=$@
+ local services=$@
+ local service=""
for service in ${services}; do
[[ ,${SSL_ENABLED_SERVICES}, =~ ,${service}, ]] && return 0
done
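The membership test above wraps both the list and the candidate in commas (`,${SSL_ENABLED_SERVICES}, =~ ,${service},`) so that only whole tokens match. A small self-contained illustration, written with a portable `case` statement instead of the bash regex used above:

```shell
# Whole-token membership test over a comma-separated list. The comma
# wrapping prevents substring false positives, so "key" does not match
# inside "keystone".
SSL_ENABLED_SERVICES="key,nova"

is_ssl_enabled_service() {
    local service
    for service in "$@"; do
        case ",${SSL_ENABLED_SERVICES}," in
            *",${service},"*) return 0 ;;
        esac
    done
    return 1
}
```

Without the wrapping commas, enabling `key` would also appear to enable `keystone`, `n-key`, and any other service name containing the substring.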
@@ -372,6 +376,22 @@
}
+# Cleanup Functions
+# =================
+
+
+# Stops all stud processes. This should be done only after all services
+# using tls configuration are down.
+function stop_tls_proxy {
+ killall stud
+}
+
+
+# Remove CA along with configuration, as well as the local server certificate
+function cleanup_CA {
+ rm -rf "$DATA_DIR/CA" "$DEVSTACK_CERT"
+}
+
# Tell emacs to use shell-script-mode
## Local variables:
## mode: shell-script
diff --git a/lib/trove b/lib/trove
index 98ab56d..6877d0f 100644
--- a/lib/trove
+++ b/lib/trove
@@ -81,28 +81,20 @@
SERVICE_ROLE=$(openstack role list | awk "/ admin / { print \$2 }")
if [[ "$ENABLED_SERVICES" =~ "trove" ]]; then
- TROVE_USER=$(openstack user create \
- trove \
- --password "$SERVICE_PASSWORD" \
- --project $SERVICE_TENANT \
- --email trove@example.com \
- | grep " id " | get_field 2)
- openstack role add \
- $SERVICE_ROLE \
- --project $SERVICE_TENANT \
- --user $TROVE_USER
+
+ TROVE_USER=$(get_or_create_user "trove" \
+ "$SERVICE_PASSWORD" $SERVICE_TENANT)
+ get_or_add_user_role $SERVICE_ROLE $TROVE_USER $SERVICE_TENANT
+
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
- TROVE_SERVICE=$(openstack service create \
- trove \
- --type=database \
- --description="Trove Service" \
- | grep " id " | get_field 2)
- openstack endpoint create \
- $TROVE_SERVICE \
- --region RegionOne \
- --publicurl "http://$SERVICE_HOST:8779/v1.0/\$(tenant_id)s" \
- --adminurl "http://$SERVICE_HOST:8779/v1.0/\$(tenant_id)s" \
- --internalurl "http://$SERVICE_HOST:8779/v1.0/\$(tenant_id)s"
+
+ TROVE_SERVICE=$(get_or_create_service "trove" \
+ "database" "Trove Service")
+ get_or_create_endpoint $TROVE_SERVICE \
+ "$REGION_NAME" \
+ "http://$SERVICE_HOST:8779/v1.0/\$(tenant_id)s" \
+ "http://$SERVICE_HOST:8779/v1.0/\$(tenant_id)s" \
+ "http://$SERVICE_HOST:8779/v1.0/\$(tenant_id)s"
fi
fi
}
@@ -188,6 +180,7 @@
iniset $TROVE_CONF_DIR/trove-guestagent.conf DEFAULT nova_proxy_admin_pass $RADMIN_USER_PASS
iniset $TROVE_CONF_DIR/trove-guestagent.conf DEFAULT trove_auth_url $TROVE_AUTH_ENDPOINT
iniset $TROVE_CONF_DIR/trove-guestagent.conf DEFAULT control_exchange trove
+ iniset $TROVE_CONF_DIR/trove-guestagent.conf DEFAULT ignore_users os_admin
iniset $TROVE_CONF_DIR/trove-guestagent.conf DEFAULT log_dir /tmp/
iniset $TROVE_CONF_DIR/trove-guestagent.conf DEFAULT log_file trove-guestagent.log
setup_trove_logging $TROVE_CONF_DIR/trove-guestagent.conf
@@ -211,10 +204,21 @@
# Initialize the trove database
$TROVE_BIN_DIR/trove-manage db_sync
- # Upload the trove-guest image to glance
- TROVE_GUEST_IMAGE_ID=$(upload_image $TROVE_GUEST_IMAGE_URL $TOKEN | grep ' id ' | get_field 2)
+ # If no guest image is specified, skip remaining setup
+ [ -z "$TROVE_GUEST_IMAGE_URL" ] && return 0
- # Initialize appropriate datastores / datastore versions
+ # Find the glance id for the trove guest image
+ # The image is uploaded by stack.sh -- see $IMAGE_URLS handling
+ GUEST_IMAGE_NAME=$(basename "$TROVE_GUEST_IMAGE_URL")
+ GUEST_IMAGE_NAME=${GUEST_IMAGE_NAME%.*}
+ TROVE_GUEST_IMAGE_ID=$(glance --os-auth-token $TOKEN --os-image-url http://$GLANCE_HOSTPORT image-list | grep "${GUEST_IMAGE_NAME}" | get_field 1)
+ if [ -z "$TROVE_GUEST_IMAGE_ID" ]; then
+ # If no glance id is found, skip remaining setup
+ echo "Datastore ${TROVE_DATASTORE_TYPE} will not be created: guest image ${GUEST_IMAGE_NAME} not found."
+ return 1
+ fi
+
+ # Now that we have the guest image id, initialize appropriate datastores / datastore versions
$TROVE_BIN_DIR/trove-manage datastore_update "$TROVE_DATASTORE_TYPE" ""
$TROVE_BIN_DIR/trove-manage datastore_version_update "$TROVE_DATASTORE_TYPE" "$TROVE_DATASTORE_VERSION" "$TROVE_DATASTORE_TYPE" \
"$TROVE_GUEST_IMAGE_ID" "$TROVE_DATASTORE_PACKAGE" 1
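The image-name derivation above relies on two shell idioms: `basename` keeps only the last path component of the URL, and the `${var%.*}` parameter expansion strips the file extension. A quick sketch using the Trove guest image URL from `stackrc`:

```shell
# Derive a glance image name from a download URL: basename drops the
# directory part, ${name%.*} removes only the final ".suffix".
url="http://tarballs.openstack.org/trove/images/ubuntu_mysql.qcow2/ubuntu_mysql.qcow2"
name=$(basename "$url")   # ubuntu_mysql.qcow2
name=${name%.*}           # ubuntu_mysql
echo "$name"
```

Note that `%.*` strips only the last extension; a name like `image.tar.gz` would become `image.tar`, not `image`.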
diff --git a/openrc b/openrc
index fc066ad..aec8a2a 100644
--- a/openrc
+++ b/openrc
@@ -53,12 +53,16 @@
# easier with this off.
export OS_NO_CACHE=${OS_NO_CACHE:-1}
+# Region
+export OS_REGION_NAME=${REGION_NAME:-RegionOne}
+
# Set api HOST_IP endpoint. SERVICE_HOST may also be used to specify the endpoint,
# which is convenient for some localrc configurations.
HOST_IP=${HOST_IP:-127.0.0.1}
SERVICE_HOST=${SERVICE_HOST:-$HOST_IP}
SERVICE_PROTOCOL=${SERVICE_PROTOCOL:-http}
KEYSTONE_AUTH_PROTOCOL=${KEYSTONE_AUTH_PROTOCOL:-$SERVICE_PROTOCOL}
+KEYSTONE_AUTH_HOST=${KEYSTONE_AUTH_HOST:-$SERVICE_HOST}
# Some exercises call glance directly. On a single-node installation, Glance
# should be listening on HOST_IP. If its running elsewhere, it can be set here
@@ -72,7 +76,7 @@
# the user/tenant has access to - including nova, glance, keystone, swift, ...
# We currently recommend using the 2.0 *identity api*.
#
-export OS_AUTH_URL=$KEYSTONE_AUTH_PROTOCOL://$SERVICE_HOST:5000/v${OS_IDENTITY_API_VERSION}
+export OS_AUTH_URL=$KEYSTONE_AUTH_PROTOCOL://$KEYSTONE_AUTH_HOST:5000/v${OS_IDENTITY_API_VERSION}
# Set the pointer to our CA certificate chain. Harmless if TLS is not used.
export OS_CACERT=${OS_CACERT:-$INT_CA_DIR/ca-chain.pem}
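The new `KEYSTONE_AUTH_HOST` default chains `${VAR:-fallback}` expansions so a multi-region setup can point `OS_AUTH_URL` at a remote keystone while single-node setups keep the old behavior. A sketch of the fallback (the IP addresses are made up for illustration):

```shell
# Single-node case: with no explicit KEYSTONE_AUTH_HOST, the auth URL
# still points at SERVICE_HOST, preserving the previous behavior.
unset KEYSTONE_AUTH_HOST                 # start clean for the demo
SERVICE_HOST=10.0.0.1
KEYSTONE_AUTH_HOST=${KEYSTONE_AUTH_HOST:-$SERVICE_HOST}
OS_AUTH_URL=http://$KEYSTONE_AUTH_HOST:5000/v2.0
echo "$OS_AUTH_URL"
```

In a second region, setting `KEYSTONE_AUTH_HOST` to the first region's keystone host before sourcing `openrc` redirects authentication without touching `SERVICE_HOST`.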
diff --git a/samples/local.conf b/samples/local.conf
index c8126c2..20c5892 100644
--- a/samples/local.conf
+++ b/samples/local.conf
@@ -24,8 +24,10 @@
# While ``stack.sh`` is happy to run without ``localrc``, devlife is better when
# there are a few minimal variables set:
-# If the ``*_PASSWORD`` variables are not set here you will be prompted to enter
-# values for them by ``stack.sh`` and they will be added to ``local.conf``.
+# If the ``SERVICE_TOKEN`` and ``*_PASSWORD`` variables are not set
+# here you will be prompted to enter values for them by ``stack.sh``
+# and they will be added to ``local.conf``.
+SERVICE_TOKEN=azertytoken
ADMIN_PASSWORD=nomoresecrete
MYSQL_PASSWORD=stackdb
RABBIT_PASSWORD=stackqueue
diff --git a/stack.sh b/stack.sh
index e58436d..03ecf28 100755
--- a/stack.sh
+++ b/stack.sh
@@ -95,7 +95,7 @@
# ``stackrc`` sources ``localrc`` to allow you to safely override those settings.
if [[ ! -r $TOP_DIR/stackrc ]]; then
- log_error $LINENO "missing $TOP_DIR/stackrc - did you grab more than just stack.sh?"
+ die $LINENO "missing $TOP_DIR/stackrc - did you grab more than just stack.sh?"
fi
source $TOP_DIR/stackrc
@@ -122,13 +122,13 @@
# templates and other useful files in the ``files`` subdirectory
FILES=$TOP_DIR/files
if [ ! -d $FILES ]; then
- log_error $LINENO "missing devstack/files"
+ die $LINENO "missing devstack/files"
fi
# ``stack.sh`` keeps function libraries here
# Make sure ``$TOP_DIR/lib`` directory is present
if [ ! -d $TOP_DIR/lib ]; then
- log_error $LINENO "missing devstack/lib"
+ die $LINENO "missing devstack/lib"
fi
# Import common services (database, message queue) configuration
@@ -152,7 +152,7 @@
# Look for obsolete stuff
if [[ ,${ENABLED_SERVICES}, =~ ,"swift", ]]; then
echo "FATAL: 'swift' is not supported as a service name"
- echo "FATAL: Use the actual swift service names to enable tham as required:"
+ echo "FATAL: Use the actual swift service names to enable them as required:"
echo "FATAL: s-proxy s-object s-container s-account"
exit 1
fi
@@ -219,15 +219,6 @@
# Some distros need to add repos beyond the defaults provided by the vendor
# to pick up required packages.
-# The Debian Wheezy official repositories do not contain all required packages,
-# add gplhost repository.
-if [[ "$os_VENDOR" =~ (Debian) ]]; then
- echo 'deb http://archive.gplhost.com/debian grizzly main' | sudo tee /etc/apt/sources.list.d/gplhost_wheezy-backports.list
- echo 'deb http://archive.gplhost.com/debian grizzly-backports main' | sudo tee -a /etc/apt/sources.list.d/gplhost_wheezy-backports.list
- apt_get update
- apt_get install --force-yes gplhost-archive-keyring
-fi
-
if [[ is_fedora && $DISTRO =~ (rhel) ]]; then
# Installing Open vSwitch on RHEL requires enabling the RDO repo.
RHEL6_RDO_REPO_RPM=${RHEL6_RDO_REPO_RPM:-"http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm"}
@@ -317,9 +308,6 @@
# Allow the use of an alternate hostname (such as localhost/127.0.0.1) for service endpoints.
SERVICE_HOST=${SERVICE_HOST:-$HOST_IP}
-# Allow the use of an alternate protocol (such as https) for service endpoints
-SERVICE_PROTOCOL=${SERVICE_PROTOCOL:-http}
-
# Configure services to use syslog instead of writing to individual log files
SYSLOG=`trueorfalse False $SYSLOG`
SYSLOG_HOST=${SYSLOG_HOST:-$HOST_IP}
@@ -530,6 +518,12 @@
echo $@ >&3
}
+if [[ is_fedora && $DISTRO =~ (rhel) ]]; then
+ # poor old python2.6 doesn't have argparse by default, which
+ # outfilter.py uses
+ is_package_installed python-argparse || install_package python-argparse
+fi
+
# Set up logging for ``stack.sh``
# Set ``LOGFILE`` to turn on logging
# Append '.xxxxxxxx' to the given name to maintain history
@@ -662,12 +656,24 @@
# Configure an appropriate python environment
if [[ "$OFFLINE" != "True" ]]; then
- $TOP_DIR/tools/install_pip.sh
+ PYPI_ALTERNATIVE_URL=$PYPI_ALTERNATIVE_URL $TOP_DIR/tools/install_pip.sh
fi
-# Do the ugly hacks for borken packages and distros
+# Do the ugly hacks for broken packages and distros
$TOP_DIR/tools/fixup_stuff.sh
+
+# Extras Pre-install
+# ------------------
+
+# Phase: pre-install
+if [[ -d $TOP_DIR/extras.d ]]; then
+ for i in $TOP_DIR/extras.d/*.sh; do
+ [[ -r $i ]] && source $i stack pre-install
+ done
+fi
+
+
install_rpc_backend
if is_service_enabled $DATABASE_BACKENDS; then
@@ -729,8 +735,10 @@
setup_develop $OPENSTACKCLIENT_DIR
if is_service_enabled key; then
- install_keystone
- configure_keystone
+ if [ "$KEYSTONE_AUTH_HOST" == "$SERVICE_HOST" ]; then
+ install_keystone
+ configure_keystone
+ fi
fi
if is_service_enabled s-proxy; then
@@ -782,7 +790,6 @@
install_ceilometer
echo_summary "Configuring Ceilometer"
configure_ceilometer
- configure_ceilometerclient
fi
if is_service_enabled heat; then
@@ -929,8 +936,11 @@
if is_service_enabled key; then
echo_summary "Starting Keystone"
- init_keystone
- start_keystone
+
+ if [ "$KEYSTONE_AUTH_HOST" == "$SERVICE_HOST" ]; then
+ init_keystone
+ start_keystone
+ fi
# Set up a temporary admin URI for Keystone
SERVICE_ENDPOINT=$KEYSTONE_AUTH_URI/v2.0
@@ -971,6 +981,7 @@
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=$ADMIN_PASSWORD
+ export OS_REGION_NAME=$REGION_NAME
fi
@@ -1227,6 +1238,7 @@
if is_service_enabled cinder; then
echo_summary "Starting Cinder"
start_cinder
+ create_volume_types
fi
if is_service_enabled ceilometer; then
echo_summary "Starting Ceilometer"
@@ -1380,41 +1392,6 @@
echo_summary "WARNING: $DEPRECATED_TEXT"
fi
-# TODO(dtroyer): Remove EXTRA_OPTS after stable/icehouse branch is cut
-# Specific warning for deprecated configs
-if [[ -n "$EXTRA_OPTS" ]]; then
- echo ""
- echo_summary "WARNING: EXTRA_OPTS is used"
- echo "You are using EXTRA_OPTS to pass configuration into nova.conf."
- echo "Please convert that configuration in localrc to a nova.conf section in local.conf:"
- echo "EXTRA_OPTS will be removed early in the Juno development cycle"
- echo "
-[[post-config|\$NOVA_CONF]]
-[DEFAULT]
-"
- for I in "${EXTRA_OPTS[@]}"; do
- # Replace the first '=' with ' ' for iniset syntax
- echo ${I}
- done
-fi
-
-# TODO(dtroyer): Remove EXTRA_BAREMETAL_OPTS after stable/icehouse branch is cut
-if [[ -n "$EXTRA_BAREMETAL_OPTS" ]]; then
- echo ""
- echo_summary "WARNING: EXTRA_BAREMETAL_OPTS is used"
- echo "You are using EXTRA_BAREMETAL_OPTS to pass configuration into nova.conf."
- echo "Please convert that configuration in localrc to a nova.conf section in local.conf:"
- echo "EXTRA_BAREMETAL_OPTS will be removed early in the Juno development cycle"
- echo "
-[[post-config|\$NOVA_CONF]]
-[baremetal]
-"
- for I in "${EXTRA_BAREMETAL_OPTS[@]}"; do
- # Replace the first '=' with ' ' for iniset syntax
- echo ${I}
- done
-fi
-
# TODO(dtroyer): Remove Q_AGENT_EXTRA_AGENT_OPTS after stable/juno branch is cut
if [[ -n "$Q_AGENT_EXTRA_AGENT_OPTS" ]]; then
echo ""
@@ -1449,38 +1426,17 @@
done
fi
-# TODO(dtroyer): Remove Q_DHCP_EXTRA_DEFAULT_OPTS after stable/icehouse branch is cut
-if [[ -n "$Q_DHCP_EXTRA_DEFAULT_OPTS" ]]; then
+# TODO(dtroyer): Remove CINDER_MULTI_LVM_BACKEND after stable/juno branch is cut
+if [[ "$CINDER_MULTI_LVM_BACKEND" = "True" ]]; then
echo ""
- echo_summary "WARNING: Q_DHCP_EXTRA_DEFAULT_OPTS is used"
- echo "You are using Q_DHCP_EXTRA_DEFAULT_OPTS to pass configuration into $Q_DHCP_CONF_FILE."
- echo "Please convert that configuration in localrc to a $Q_DHCP_CONF_FILE section in local.conf:"
- echo "Q_DHCP_EXTRA_DEFAULT_OPTS will be removed early in the Juno development cycle"
+ echo_summary "WARNING: CINDER_MULTI_LVM_BACKEND is used"
+ echo "You are using CINDER_MULTI_LVM_BACKEND to configure Cinder's multiple LVM backends"
+ echo "Please convert that configuration in local.conf to use CINDER_ENABLED_BACKENDS."
+ echo "CINDER_MULTI_LVM_BACKEND will be removed early in the 'K' development cycle"
echo "
-[[post-config|/\$Q_DHCP_CONF_FILE]]
-[DEFAULT]
+[[local|localrc]]
+CINDER_ENABLED_BACKENDS=lvm:lvmdriver-1,lvm:lvmdriver-2
"
- for I in "${Q_DHCP_EXTRA_DEFAULT_OPTS[@]}"; do
- # Replace the first '=' with ' ' for iniset syntax
- echo ${I}
- done
-fi
-
-# TODO(dtroyer): Remove Q_SRV_EXTRA_DEFAULT_OPTS after stable/icehouse branch is cut
-if [[ -n "$Q_SRV_EXTRA_DEFAULT_OPTS" ]]; then
- echo ""
- echo_summary "WARNING: Q_SRV_EXTRA_DEFAULT_OPTS is used"
- echo "You are using Q_SRV_EXTRA_DEFAULT_OPTS to pass configuration into $NEUTRON_CONF."
- echo "Please convert that configuration in localrc to a $NEUTRON_CONF section in local.conf:"
- echo "Q_SRV_EXTRA_DEFAULT_OPTS will be removed early in the Juno development cycle"
- echo "
-[[post-config|\$NEUTRON_CONF]]
-[DEFAULT]
-"
- for I in "${Q_SRV_EXTRA_DEFAULT_OPTS[@]}"; do
- # Replace the first '=' with ' ' for iniset syntax
- echo ${I}
- done
fi
# Indicate how long this took to run (bash maintained variable ``SECONDS``)
diff --git a/stackrc b/stackrc
index 3923d38..fb84366 100644
--- a/stackrc
+++ b/stackrc
@@ -19,6 +19,9 @@
STACK_USER=$(whoami)
fi
+# Specify the default region name
+REGION_NAME=${REGION_NAME:-RegionOne}
+
# Specify which services to launch. These generally correspond to
# screen tabs. To change the default list, use the ``enable_service`` and
# ``disable_service`` functions in ``local.conf``.
@@ -49,6 +52,12 @@
ENABLED_SERVICES+=,rabbit,tempest,mysql
fi
+# Global toggle for enabling services under mod_wsgi. If this is set to
+# ``True``, all services that use HTTPD + mod_wsgi as their preferred method
+# of deployment will be deployed under Apache. If this is set to ``False``,
+# each service falls back to its local toggle variable (e.g. ``KEYSTONE_USE_MOD_WSGI``)
+ENABLE_HTTPD_MOD_WSGI_SERVICES=True
+
# Tell Tempest which services are available. The default is set here as
# Tempest falls late in the configuration sequence. This differs from
# ``ENABLED_SERVICES`` in that the project names are used here rather than
@@ -183,6 +192,10 @@
OSLODB_REPO=${OSLODB_REPO:-${GIT_BASE}/openstack/oslo.db.git}
OSLODB_BRANCH=${OSLODB_BRANCH:-master}
+# oslo.i18n
+OSLOI18N_REPO=${OSLOI18N_REPO:-${GIT_BASE}/openstack/oslo.i18n.git}
+OSLOI18N_BRANCH=${OSLOI18N_BRANCH:-master}
+
# oslo.messaging
OSLOMSG_REPO=${OSLOMSG_REPO:-${GIT_BASE}/openstack/oslo.messaging.git}
OSLOMSG_BRANCH=${OSLOMSG_BRANCH:-master}
@@ -319,14 +332,15 @@
# glance as a disk image. If it ends in .gz, it is uncompressed first.
# example:
# http://cloud-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-armel-disk1.img
-# http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-x86_64-rootfs.img.gz
+# http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-rootfs.img.gz
# * OpenVZ image:
# OpenVZ uses its own format of image, and does not support UEC style images
#IMAGE_URLS="http://smoser.brickies.net/ubuntu/ttylinux-uec/ttylinux-uec-amd64-11.2_2.6.35-15_1.tar.gz" # old ttylinux-uec image
-#IMAGE_URLS="http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-x86_64-disk.img" # cirros full disk image
+#IMAGE_URLS="http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-disk.img" # cirros full disk image
CIRROS_VERSION=${CIRROS_VERSION:-"0.3.2"}
+CIRROS_ARCH=${CIRROS_ARCH:-"x86_64"}
# Set default image based on ``VIRT_DRIVER`` and ``LIBVIRT_TYPE``, either of
# which may be set in ``localrc``. Also allow ``DEFAULT_IMAGE_NAME`` and
@@ -338,11 +352,11 @@
libvirt)
case "$LIBVIRT_TYPE" in
lxc) # the cirros root disk in the uec tarball is empty, so it will not work for lxc
- DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-${CIRROS_VERSION}-x86_64-rootfs}
- IMAGE_URLS=${IMAGE_URLS:-"http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-x86_64-rootfs.img.gz"};;
+ DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-rootfs}
+ IMAGE_URLS=${IMAGE_URLS:-"http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-rootfs.img.gz"};;
*) # otherwise, use the uec style image (with kernel, ramdisk, disk)
- DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-${CIRROS_VERSION}-x86_64-uec}
- IMAGE_URLS=${IMAGE_URLS:-"http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-x86_64-uec.tar.gz"};;
+ DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-uec}
+ IMAGE_URLS=${IMAGE_URLS:-"http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-uec.tar.gz"};;
esac
;;
vsphere)
@@ -350,10 +364,11 @@
IMAGE_URLS=${IMAGE_URLS:-"http://partnerweb.vmware.com/programs/vmdkimage/cirros-0.3.2-i386-disk.vmdk"};;
xenserver)
DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-0.3.0-x86_64-disk}
- IMAGE_URLS=${IMAGE_URLS:-"https://github.com/downloads/citrix-openstack/warehouse/cirros-0.3.0-x86_64-disk.vhd.tgz"};;
+ IMAGE_URLS=${IMAGE_URLS:-"https://github.com/downloads/citrix-openstack/warehouse/cirros-0.3.0-x86_64-disk.vhd.tgz"}
+ IMAGE_URLS+=",http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-x86_64-uec.tar.gz";;
*) # Default to Cirros with kernel, ramdisk and disk image
- DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-${CIRROS_VERSION}-x86_64-uec}
- IMAGE_URLS=${IMAGE_URLS:-"http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-x86_64-uec.tar.gz"};;
+ DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-uec}
+ IMAGE_URLS=${IMAGE_URLS:-"http://download.cirros-cloud.net/${CIRROS_VERSION}/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-uec.tar.gz"};;
esac
# Use 64bit fedora image if heat is enabled
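With `CIRROS_ARCH` parameterized, every CirrOS image name and URL in `stackrc` is composed from the same two variables, so switching architectures (e.g. for ARM testing) becomes a one-line `CIRROS_ARCH` override in `local.conf`. A sketch of the composition using the defaults above:

```shell
# Compose the default image name and download URL from version and arch,
# mirroring the stackrc defaults.
CIRROS_VERSION="0.3.2"
CIRROS_ARCH="x86_64"
DEFAULT_IMAGE_NAME="cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-uec"
IMAGE_URL="http://download.cirros-cloud.net/${CIRROS_VERSION}/${DEFAULT_IMAGE_NAME}.tar.gz"
echo "$IMAGE_URL"
```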
@@ -371,7 +386,7 @@
# Trove needs a custom image for its work
if [[ "$ENABLED_SERVICES" =~ 'tr-api' ]]; then
case "$VIRT_DRIVER" in
- libvirt|baremetal|ironic)
+ libvirt|baremetal|ironic|xenapi)
TROVE_GUEST_IMAGE_URL=${TROVE_GUEST_IMAGE_URL:-"http://tarballs.openstack.org/trove/images/ubuntu_mysql.qcow2/ubuntu_mysql.qcow2"}
IMAGE_URLS+=",${TROVE_GUEST_IMAGE_URL}"
;;
@@ -394,8 +409,7 @@
# 10Gb default volume backing file size
VOLUME_BACKING_FILE_SIZE=${VOLUME_BACKING_FILE_SIZE:-10250M}
-# Name of the LVM volume group to use/create for iscsi volumes
-VOLUME_GROUP=${VOLUME_GROUP:-stack-volumes}
+# Prefixes for volume and instance names
VOLUME_NAME_PREFIX=${VOLUME_NAME_PREFIX:-volume-}
INSTANCE_NAME_PREFIX=${INSTANCE_NAME_PREFIX:-instance-}
@@ -418,6 +432,9 @@
# Undo requirements changes by global requirements
UNDO_REQUIREMENTS=${UNDO_REQUIREMENTS:-True}
+# Allow the use of an alternate protocol (such as https) for service endpoints
+SERVICE_PROTOCOL=${SERVICE_PROTOCOL:-http}
+
# Local variables:
# mode: shell-script
# End:
diff --git a/tools/build_bm.sh b/tools/build_bm.sh
deleted file mode 100755
index ab0ba0e..0000000
--- a/tools/build_bm.sh
+++ /dev/null
@@ -1,38 +0,0 @@
-#!/usr/bin/env bash
-
-# **build_bm.sh**
-
-# Build an OpenStack install on a bare metal machine.
-set +x
-
-# Keep track of the current directory
-TOOLS_DIR=$(cd $(dirname "$0") && pwd)
-TOP_DIR=$(cd $TOOLS_DIR/..; pwd)
-
-# Import common functions
-source $TOP_DIR/functions
-
-# Source params
-source ./stackrc
-
-# Param string to pass to stack.sh. Like "EC2_DMZ_HOST=192.168.1.1 MYSQL_USER=nova"
-STACKSH_PARAMS=${STACKSH_PARAMS:-}
-
-# Option to use the version of devstack on which we are currently working
-USE_CURRENT_DEVSTACK=${USE_CURRENT_DEVSTACK:-1}
-
-# Configure the runner
-RUN_SH=`mktemp`
-cat > $RUN_SH <<EOF
-#!/usr/bin/env bash
-# Install and run stack.sh
-cd devstack
-$STACKSH_PARAMS ./stack.sh
-EOF
-
-# Make the run.sh executable
-chmod 755 $RUN_SH
-
-scp -r . root@$CONTAINER_IP:devstack
-scp $RUN_SH root@$CONTAINER_IP:$RUN_SH
-ssh root@$CONTAINER_IP $RUN_SH
diff --git a/tools/build_bm_multi.sh b/tools/build_bm_multi.sh
deleted file mode 100755
index 328d576..0000000
--- a/tools/build_bm_multi.sh
+++ /dev/null
@@ -1,40 +0,0 @@
-#!/usr/bin/env bash
-
-# **build_bm_multi.sh**
-
-# Build an OpenStack install on several bare metal machines.
-SHELL_AFTER_RUN=no
-
-# Variables common amongst all hosts in the cluster
-COMMON_VARS="MYSQL_HOST=$HEAD_HOST RABBIT_HOST=$HEAD_HOST GLANCE_HOSTPORT=$HEAD_HOST:9292 NETWORK_MANAGER=FlatDHCPManager FLAT_INTERFACE=eth0 FLOATING_RANGE=$FLOATING_RANGE MULTI_HOST=1 SHELL_AFTER_RUN=$SHELL_AFTER_RUN"
-
-# Helper to launch containers
-function run_bm {
- # For some reason container names with periods can cause issues :/
- CONTAINER=$1 CONTAINER_IP=$2 CONTAINER_NETMASK=$NETMASK CONTAINER_GATEWAY=$GATEWAY NAMESERVER=$NAMESERVER TERMINATE=$TERMINATE STACKSH_PARAMS="$COMMON_VARS $3" ./tools/build_bm.sh
-}
-
-# Launch the head node - headnode uses a non-ip domain name,
-# because rabbit won't launch with an ip addr hostname :(
-run_bm STACKMASTER $HEAD_HOST "ENABLED_SERVICES=g-api,g-reg,key,n-api,n-sch,n-vnc,horizon,mysql,rabbit"
-
-# Wait till the head node is up
-if [ ! "$TERMINATE" = "1" ]; then
- echo "Waiting for head node ($HEAD_HOST) to start..."
- if ! timeout 60 sh -c "while ! wget -q -O- http://$HEAD_HOST | grep -q username; do sleep 1; done"; then
- echo "Head node did not start"
- exit 1
- fi
-fi
-
-PIDS=""
-# Launch the compute hosts in parallel
-for compute_host in ${COMPUTE_HOSTS//,/ }; do
- run_bm $compute_host $compute_host "ENABLED_SERVICES=n-cpu,n-net,n-api" &
- PIDS="$PIDS $!"
-done
-
-for x in $PIDS; do
- wait $x
-done
-echo "build_bm_multi complete"
diff --git a/tools/build_docs.sh b/tools/build_docs.sh
index 77c2f4e..e999eff 100755
--- a/tools/build_docs.sh
+++ b/tools/build_docs.sh
@@ -113,6 +113,15 @@
# Get repo static
cp -pr $FQ_DOCS_SOURCE/* $FQ_HTML_BUILD
+# Insert automated bits
+GLOG=$(mktemp gitlogXXXX)
+git log \
+ --pretty=format:' <li>%s - <em>Commit <a href="https://review.openstack.org/#q,%h,n,z">%h</a> %cd</em></li>' \
+ --date=short \
+ --since '6 months ago' | grep -v Merge >$GLOG
+sed -e $"/%GIT_LOG%/r $GLOG" $FQ_DOCS_SOURCE/changes.html >$FQ_HTML_BUILD/changes.html
+rm -f $GLOG
+
# Build list of scripts to process
FILES=""
for f in $(find . -name .git -prune -o \( -type f -name \*.sh -not -path \*shocco/\* -print \)); do
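The changelog splice above uses sed's `r` command, which appends a file's contents after every line matching the address; the marker line itself is left in place. A minimal reproduction of the `%GIT_LOG%` substitution:

```shell
# Splice a fragment file into a template after the %GIT_LOG% marker line.
tmpl=$(mktemp) frag=$(mktemp)
printf '<ul>\n%%GIT_LOG%%\n</ul>\n' >"$tmpl"
printf '  <li>example commit</li>\n' >"$frag"
out=$(sed -e "/%GIT_LOG%/r $frag" "$tmpl")
echo "$out"
rm -f "$tmpl" "$frag"
```

Because `r` reads the file lazily at output time, the fragment lands immediately after the marker line without sed ever loading it into pattern space.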
diff --git a/tools/build_pxe_env.sh b/tools/build_pxe_env.sh
deleted file mode 100755
index 50d91d0..0000000
--- a/tools/build_pxe_env.sh
+++ /dev/null
@@ -1,120 +0,0 @@
-#!/bin/bash -e
-
-# **build_pxe_env.sh**
-
-# Create a PXE boot environment
-#
-# build_pxe_env.sh destdir
-#
-# Requires Ubuntu Oneiric
-#
-# Only needs to run as root if the destdir permissions require it
-
-dpkg -l syslinux || apt-get install -y syslinux
-
-DEST_DIR=${1:-/tmp}/tftpboot
-PXEDIR=${PXEDIR:-/opt/ramstack/pxe}
-PROGDIR=`dirname $0`
-
-# Clean up any resources that may be in use
-function cleanup {
- set +o errexit
-
- # Mop up temporary files
- if [ -n "$MNTDIR" -a -d "$MNTDIR" ]; then
- umount $MNTDIR
- rmdir $MNTDIR
- fi
-
- # Kill ourselves to signal any calling process
- trap 2; kill -2 $$
-}
-
-trap cleanup SIGHUP SIGINT SIGTERM SIGQUIT EXIT
-
-# Keep track of the current directory
-TOOLS_DIR=$(cd $(dirname "$0") && pwd)
-TOP_DIR=`cd $TOOLS_DIR/..; pwd`
-
-mkdir -p $DEST_DIR/pxelinux.cfg
-cd $DEST_DIR
-for i in memdisk menu.c32 pxelinux.0; do
- cp -pu /usr/lib/syslinux/$i $DEST_DIR
-done
-
-CFG=$DEST_DIR/pxelinux.cfg/default
-cat >$CFG <<EOF
-default menu.c32
-prompt 0
-timeout 0
-
-MENU TITLE devstack PXE Boot Menu
-
-EOF
-
-# Setup devstack boot
-mkdir -p $DEST_DIR/ubuntu
-if [ ! -d $PXEDIR ]; then
- mkdir -p $PXEDIR
-fi
-
-# Get image into place
-if [ ! -r $PXEDIR/stack-initrd.img ]; then
- cd $TOP_DIR
- $PROGDIR/build_ramdisk.sh $PXEDIR/stack-initrd.img
-fi
-if [ ! -r $PXEDIR/stack-initrd.gz ]; then
- gzip -1 -c $PXEDIR/stack-initrd.img >$PXEDIR/stack-initrd.gz
-fi
-cp -pu $PXEDIR/stack-initrd.gz $DEST_DIR/ubuntu
-
-if [ ! -r $PXEDIR/vmlinuz-*-generic ]; then
- MNTDIR=`mktemp -d --tmpdir mntXXXXXXXX`
- mount -t ext4 -o loop $PXEDIR/stack-initrd.img $MNTDIR
-
- if [ ! -r $MNTDIR/boot/vmlinuz-*-generic ]; then
- echo "No kernel found"
- umount $MNTDIR
- rmdir $MNTDIR
- exit 1
- else
- cp -pu $MNTDIR/boot/vmlinuz-*-generic $PXEDIR
- fi
- umount $MNTDIR
- rmdir $MNTDIR
-fi
-
-# Get generic kernel version
-KNAME=`basename $PXEDIR/vmlinuz-*-generic`
-KVER=${KNAME#vmlinuz-}
-cp -pu $PXEDIR/vmlinuz-$KVER $DEST_DIR/ubuntu
-cat >>$CFG <<EOF
-
-LABEL devstack
- MENU LABEL ^devstack
- MENU DEFAULT
- KERNEL ubuntu/vmlinuz-$KVER
- APPEND initrd=ubuntu/stack-initrd.gz ramdisk_size=2109600 root=/dev/ram0
-EOF
-
-# Get Ubuntu
-if [ -d $PXEDIR -a -r $PXEDIR/natty-base-initrd.gz ]; then
- cp -pu $PXEDIR/natty-base-initrd.gz $DEST_DIR/ubuntu
- cat >>$CFG <<EOF
-
-LABEL ubuntu
- MENU LABEL ^Ubuntu Natty
- KERNEL ubuntu/vmlinuz-$KVER
- APPEND initrd=ubuntu/natty-base-initrd.gz ramdisk_size=419600 root=/dev/ram0
-EOF
-fi
-
-# Local disk boot
-cat >>$CFG <<EOF
-
-LABEL local
- MENU LABEL ^Local disk
- LOCALBOOT 0
-EOF
-
-trap cleanup SIGHUP SIGINT SIGTERM SIGQUIT EXIT
diff --git a/tools/build_ramdisk.sh b/tools/build_ramdisk.sh
deleted file mode 100755
index 50ba8ef..0000000
--- a/tools/build_ramdisk.sh
+++ /dev/null
@@ -1,230 +0,0 @@
-#!/bin/bash
-
-# **build_ramdisk.sh**
-
-# Build RAM disk images
-
-# Exit on error to stop unexpected errors
-set -o errexit
-
-if [ ! "$#" -eq "1" ]; then
- echo "$0 builds a gziped Ubuntu OpenStack install"
- echo "usage: $0 dest"
- exit 1
-fi
-
-# Clean up any resources that may be in use
-function cleanup {
- set +o errexit
-
- # Mop up temporary files
- if [ -n "$MNTDIR" -a -d "$MNTDIR" ]; then
- umount $MNTDIR
- rmdir $MNTDIR
- fi
- if [ -n "$DEV_FILE_TMP" -a -e "$DEV_FILE_TMP" ]; then
- rm -f $DEV_FILE_TMP
- fi
- if [ -n "$IMG_FILE_TMP" -a -e "$IMG_FILE_TMP" ]; then
- rm -f $IMG_FILE_TMP
- fi
-
- # Release NBD devices
- if [ -n "$NBD" ]; then
- qemu-nbd -d $NBD
- fi
-
- # Kill ourselves to signal any calling process
- trap 2; kill -2 $$
-}
-
-trap cleanup SIGHUP SIGINT SIGTERM
-
-# Set up nbd
-modprobe nbd max_part=63
-
-# Echo commands
-set -o xtrace
-
-IMG_FILE=$1
-
-# Keep track of the current directory
-TOOLS_DIR=$(cd $(dirname "$0") && pwd)
-TOP_DIR=$(cd $TOOLS_DIR/..; pwd)
-
-# Import common functions
-. $TOP_DIR/functions
-
-# Store cwd
-CWD=`pwd`
-
-cd $TOP_DIR
-
-# Source params
-source ./stackrc
-
-CACHEDIR=${CACHEDIR:-/opt/stack/cache}
-
-DEST=${DEST:-/opt/stack}
-
-# Configure the root password of the vm to be the same as ``ADMIN_PASSWORD``
-ROOT_PASSWORD=${ADMIN_PASSWORD:-password}
-
-# Base image (natty by default)
-DIST_NAME=${DIST_NAME:-natty}
-
-# Param string to pass to stack.sh. Like "EC2_DMZ_HOST=192.168.1.1 MYSQL_USER=nova"
-STACKSH_PARAMS=${STACKSH_PARAMS:-}
-
-# Option to use the version of devstack on which we are currently working
-USE_CURRENT_DEVSTACK=${USE_CURRENT_DEVSTACK:-1}
-
-# clean install
-if [ ! -r $CACHEDIR/$DIST_NAME-base.img ]; then
- $TOOLS_DIR/get_uec_image.sh $DIST_NAME $CACHEDIR/$DIST_NAME-base.img
-fi
-
-# Finds and returns full device path for the next available NBD device.
-# Exits script if error connecting or none free.
-# map_nbd image
-function map_nbd {
- for i in `seq 0 15`; do
- if [ ! -e /sys/block/nbd$i/pid ]; then
- NBD=/dev/nbd$i
- # Connect to nbd and wait till it is ready
- qemu-nbd -c $NBD $1
- if ! timeout 60 sh -c "while ! [ -e ${NBD}p1 ]; do sleep 1; done"; then
- echo "Couldn't connect $NBD"
- exit 1
- fi
- break
- fi
- done
- if [ -z "$NBD" ]; then
- echo "No free NBD slots"
- exit 1
- fi
- echo $NBD
-}
-
-# Prime image with as many apt as we can
-DEV_FILE=$CACHEDIR/$DIST_NAME-dev.img
-DEV_FILE_TMP=`mktemp $DEV_FILE.XXXXXX`
-if [ ! -r $DEV_FILE ]; then
- cp -p $CACHEDIR/$DIST_NAME-base.img $DEV_FILE_TMP
-
- NBD=`map_nbd $DEV_FILE_TMP`
- MNTDIR=`mktemp -d --tmpdir mntXXXXXXXX`
- mount -t ext4 ${NBD}p1 $MNTDIR
- cp -p /etc/resolv.conf $MNTDIR/etc/resolv.conf
-
- chroot $MNTDIR apt-get install -y --download-only `cat files/apts/* | grep NOPRIME | cut -d\# -f1`
- chroot $MNTDIR apt-get install -y --force-yes `cat files/apts/* | grep -v NOPRIME | cut -d\# -f1`
-
- # Create a stack user that is a member of the libvirtd group so that stack
- # is able to interact with libvirt.
- chroot $MNTDIR groupadd libvirtd
- chroot $MNTDIR useradd $STACK_USER -s /bin/bash -d $DEST -G libvirtd
- mkdir -p $MNTDIR/$DEST
- chroot $MNTDIR chown $STACK_USER $DEST
-
- # A simple password - pass
- echo $STACK_USER:pass | chroot $MNTDIR chpasswd
- echo root:$ROOT_PASSWORD | chroot $MNTDIR chpasswd
-
- # And has sudo ability (in the future this should be limited to only what
- # stack requires)
- echo "$STACK_USER ALL=(ALL) NOPASSWD: ALL" >> $MNTDIR/etc/sudoers
-
- umount $MNTDIR
- rmdir $MNTDIR
- qemu-nbd -d $NBD
- NBD=""
- mv $DEV_FILE_TMP $DEV_FILE
-fi
-rm -f $DEV_FILE_TMP
-
-
-# Clone git repositories onto the system
-# ======================================
-
-IMG_FILE_TMP=`mktemp $IMG_FILE.XXXXXX`
-
-if [ ! -r $IMG_FILE ]; then
- NBD=`map_nbd $DEV_FILE`
-
- # Pre-create the image file
- # FIXME(dt): This should really get the partition size to
- # pre-create the image file
- dd if=/dev/zero of=$IMG_FILE_TMP bs=1 count=1 seek=$((2*1024*1024*1024))
- # Create filesystem image for RAM disk
- dd if=${NBD}p1 of=$IMG_FILE_TMP bs=1M
-
- qemu-nbd -d $NBD
- NBD=""
- mv $IMG_FILE_TMP $IMG_FILE
-fi
-rm -f $IMG_FILE_TMP
-
-MNTDIR=`mktemp -d --tmpdir mntXXXXXXXX`
-mount -t ext4 -o loop $IMG_FILE $MNTDIR
-cp -p /etc/resolv.conf $MNTDIR/etc/resolv.conf
-
-# We need to install a non-virtual kernel and modules to boot from
-if [ ! -r "`ls $MNTDIR/boot/vmlinuz-*-generic | head -1`" ]; then
- chroot $MNTDIR apt-get install -y linux-generic
-fi
-
-git_clone $NOVA_REPO $DEST/nova $NOVA_BRANCH
-git_clone $GLANCE_REPO $DEST/glance $GLANCE_BRANCH
-git_clone $KEYSTONE_REPO $DEST/keystone $KEYSTONE_BRANCH
-git_clone $NOVNC_REPO $DEST/novnc $NOVNC_BRANCH
-git_clone $HORIZON_REPO $DEST/horizon $HORIZON_BRANCH
-git_clone $NOVACLIENT_REPO $DEST/python-novaclient $NOVACLIENT_BRANCH
-git_clone $OPENSTACKX_REPO $DEST/openstackx $OPENSTACKX_BRANCH
-
-# Use this version of devstack
-rm -rf $MNTDIR/$DEST/devstack
-cp -pr $CWD $MNTDIR/$DEST/devstack
-chroot $MNTDIR chown -R $STACK_USER $DEST/devstack
-
-# Configure host network for DHCP
-mkdir -p $MNTDIR/etc/network
-cat > $MNTDIR/etc/network/interfaces <<EOF
-auto lo
-iface lo inet loopback
-
-auto eth0
-iface eth0 inet dhcp
-EOF
-
-# Set hostname
-echo "ramstack" >$MNTDIR/etc/hostname
-echo "127.0.0.1 localhost ramstack" >$MNTDIR/etc/hosts
-
-# Configure the runner
-RUN_SH=$MNTDIR/$DEST/run.sh
-cat > $RUN_SH <<EOF
-#!/usr/bin/env bash
-
-# Get IP range
-set \`ip addr show dev eth0 | grep inet\`
-PREFIX=\`echo \$2 | cut -d. -f1,2,3\`
-export FLOATING_RANGE="\$PREFIX.224/27"
-
-# Kill any existing screens
-killall screen
-
-# Run stack.sh
-cd $DEST/devstack && \$STACKSH_PARAMS ./stack.sh > $DEST/run.sh.log
-echo >> $DEST/run.sh.log
-echo >> $DEST/run.sh.log
-echo "All done! Time to start clicking." >> $DEST/run.sh.log
-EOF
-
-# Make the run.sh executable
-chmod 755 $RUN_SH
-chroot $MNTDIR chown $STACK_USER $DEST/run.sh
-
-umount $MNTDIR
-rmdir $MNTDIR
diff --git a/tools/build_uec_ramdisk.sh b/tools/build_uec_ramdisk.sh
deleted file mode 100755
index 5f3acc5..0000000
--- a/tools/build_uec_ramdisk.sh
+++ /dev/null
@@ -1,180 +0,0 @@
-#!/usr/bin/env bash
-
-# **build_uec_ramdisk.sh**
-
-# Build RAM disk images based on UEC image
-
-# Exit on error to stop unexpected errors
-set -o errexit
-
-if [ ! "$#" -eq "1" ]; then
- echo "$0 builds a gziped Ubuntu OpenStack install"
- echo "usage: $0 dest"
- exit 1
-fi
-
-# Make sure that we have the proper version of ubuntu (only works on oneiric)
-if ! egrep -q "oneiric" /etc/lsb-release; then
- echo "This script only works with ubuntu oneiric."
- exit 1
-fi
-
-# Clean up resources that may be in use
-function cleanup {
- set +o errexit
-
- if [ -n "$MNT_DIR" ]; then
- umount $MNT_DIR/dev
- umount $MNT_DIR
- fi
-
- if [ -n "$DEST_FILE_TMP" ]; then
- rm $DEST_FILE_TMP
- fi
-
- # Kill ourselves to signal parents
- trap 2; kill -2 $$
-}
-
-trap cleanup SIGHUP SIGINT SIGTERM SIGQUIT EXIT
-
-# Output dest image
-DEST_FILE=$1
-
-# Keep track of the current directory
-TOOLS_DIR=$(cd $(dirname "$0") && pwd)
-TOP_DIR=$(cd $TOOLS_DIR/..; pwd)
-
-# Import common functions
-. $TOP_DIR/functions
-
-cd $TOP_DIR
-
-# Source params
-source ./stackrc
-
-DEST=${DEST:-/opt/stack}
-
-# Ubuntu distro to install
-DIST_NAME=${DIST_NAME:-oneiric}
-
-# Configure how large the VM should be
-GUEST_SIZE=${GUEST_SIZE:-2G}
-
-# Exit on error to stop unexpected errors
-set -o errexit
-set -o xtrace
-
-# Abort if localrc is not set
-if [ ! -e $TOP_DIR/localrc ]; then
- echo "You must have a localrc with ALL necessary passwords defined before proceeding."
- echo "See stack.sh for required passwords."
- exit 1
-fi
-
-# Install deps if needed
-DEPS="kvm libvirt-bin kpartx cloud-utils curl"
-apt_get install -y --force-yes $DEPS
-
-# Where to store files and instances
-CACHEDIR=${CACHEDIR:-/opt/stack/cache}
-WORK_DIR=${WORK_DIR:-/opt/ramstack}
-
-# Where to store images
-image_dir=$WORK_DIR/images/$DIST_NAME
-mkdir -p $image_dir
-
-# Get the base image if it does not yet exist
-if [ ! -e $image_dir/disk ]; then
- $TOOLS_DIR/get_uec_image.sh -r 2000M $DIST_NAME $image_dir/disk
-fi
-
-# Configure the root password of the vm to be the same as ``ADMIN_PASSWORD``
-ROOT_PASSWORD=${ADMIN_PASSWORD:-password}
-
-# Name of our instance, used by libvirt
-GUEST_NAME=${GUEST_NAME:-devstack}
-
-# Pre-load the image with basic environment
-if [ ! -e $image_dir/disk-primed ]; then
- cp $image_dir/disk $image_dir/disk-primed
- $TOOLS_DIR/warm_apts_for_uec.sh $image_dir/disk-primed
- $TOOLS_DIR/copy_dev_environment_to_uec.sh $image_dir/disk-primed
-fi
-
-# Back to devstack
-cd $TOP_DIR
-
-DEST_FILE_TMP=`mktemp $DEST_FILE.XXXXXX`
-MNT_DIR=`mktemp -d --tmpdir mntXXXXXXXX`
-cp $image_dir/disk-primed $DEST_FILE_TMP
-mount -t ext4 -o loop $DEST_FILE_TMP $MNT_DIR
-mount -o bind /dev /$MNT_DIR/dev
-cp -p /etc/resolv.conf $MNT_DIR/etc/resolv.conf
-echo root:$ROOT_PASSWORD | chroot $MNT_DIR chpasswd
-touch $MNT_DIR/$DEST/.ramdisk
-
-# We need to install a non-virtual kernel and modules to boot from
-if [ ! -r "`ls $MNT_DIR/boot/vmlinuz-*-generic | head -1`" ]; then
- chroot $MNT_DIR apt-get install -y linux-generic
-fi
-
-git_clone $NOVA_REPO $DEST/nova $NOVA_BRANCH
-git_clone $GLANCE_REPO $DEST/glance $GLANCE_BRANCH
-git_clone $KEYSTONE_REPO $DEST/keystone $KEYSTONE_BRANCH
-git_clone $NOVNC_REPO $DEST/novnc $NOVNC_BRANCH
-git_clone $HORIZON_REPO $DEST/horizon $HORIZON_BRANCH
-git_clone $NOVACLIENT_REPO $DEST/python-novaclient $NOVACLIENT_BRANCH
-git_clone $OPENSTACKX_REPO $DEST/openstackx $OPENSTACKX_BRANCH
-git_clone $TEMPEST_REPO $DEST/tempest $TEMPEST_BRANCH
-
-# Use this version of devstack
-rm -rf $MNT_DIR/$DEST/devstack
-cp -pr $TOP_DIR $MNT_DIR/$DEST/devstack
-chroot $MNT_DIR chown -R stack $DEST/devstack
-
-# Configure host network for DHCP
-mkdir -p $MNT_DIR/etc/network
-cat > $MNT_DIR/etc/network/interfaces <<EOF
-auto lo
-iface lo inet loopback
-
-auto eth0
-iface eth0 inet dhcp
-EOF
-
-# Set hostname
-echo "ramstack" >$MNT_DIR/etc/hostname
-echo "127.0.0.1 localhost ramstack" >$MNT_DIR/etc/hosts
-
-# Configure the runner
-RUN_SH=$MNT_DIR/$DEST/run.sh
-cat > $RUN_SH <<EOF
-#!/usr/bin/env bash
-
-# Get IP range
-set \`ip addr show dev eth0 | grep inet\`
-PREFIX=\`echo \$2 | cut -d. -f1,2,3\`
-export FLOATING_RANGE="\$PREFIX.224/27"
-
-# Kill any existing screens
-killall screen
-
-# Run stack.sh
-cd $DEST/devstack && \$STACKSH_PARAMS ./stack.sh > $DEST/run.sh.log
-echo >> $DEST/run.sh.log
-echo >> $DEST/run.sh.log
-echo "All done! Time to start clicking." >> $DEST/run.sh.log
-EOF
-
-# Make the run.sh executable
-chmod 755 $RUN_SH
-chroot $MNT_DIR chown stack $DEST/run.sh
-
-umount $MNT_DIR/dev
-umount $MNT_DIR
-rmdir $MNT_DIR
-mv $DEST_FILE_TMP $DEST_FILE
-rm -f $DEST_FILE_TMP
-
-trap - SIGHUP SIGINT SIGTERM SIGQUIT EXIT
diff --git a/tools/build_usb_boot.sh b/tools/build_usb_boot.sh
deleted file mode 100755
index c97e0a1..0000000
--- a/tools/build_usb_boot.sh
+++ /dev/null
@@ -1,148 +0,0 @@
-#!/bin/bash -e
-
-# **build_usb_boot.sh**
-
-# Create a syslinux boot environment
-#
-# build_usb_boot.sh destdev
-#
-# Assumes syslinux is installed
-# Needs to run as root
-
-DEST_DIR=${1:-/tmp/syslinux-boot}
-PXEDIR=${PXEDIR:-/opt/ramstack/pxe}
-
-# Clean up any resources that may be in use
-function cleanup {
- set +o errexit
-
- # Mop up temporary files
- if [ -n "$DEST_DEV" ]; then
- umount $DEST_DIR
- rmdir $DEST_DIR
- fi
- if [ -n "$MNTDIR" -a -d "$MNTDIR" ]; then
- umount $MNTDIR
- rmdir $MNTDIR
- fi
-
- # Kill ourselves to signal any calling process
- trap 2; kill -2 $$
-}
-
-trap cleanup SIGHUP SIGINT SIGTERM SIGQUIT EXIT
-
-# Keep track of the current directory
-TOOLS_DIR=$(cd $(dirname "$0") && pwd)
-TOP_DIR=`cd $TOOLS_DIR/..; pwd`
-
-if [ -b $DEST_DIR ]; then
- # We have a block device, install syslinux and mount it
- DEST_DEV=$DEST_DIR
- DEST_DIR=`mktemp -d --tmpdir mntXXXXXX`
- mount $DEST_DEV $DEST_DIR
-
- if [ ! -d $DEST_DIR/syslinux ]; then
- mkdir -p $DEST_DIR/syslinux
- fi
-
- # Install syslinux on the device
- syslinux --install --directory syslinux $DEST_DEV
-else
- # We have a directory (for sanity checking output)
- DEST_DEV=""
- if [ ! -d $DEST_DIR/syslinux ]; then
- mkdir -p $DEST_DIR/syslinux
- fi
-fi
-
-# Get some more stuff from syslinux
-for i in memdisk menu.c32; do
- cp -pu /usr/lib/syslinux/$i $DEST_DIR/syslinux
-done
-
-CFG=$DEST_DIR/syslinux/syslinux.cfg
-cat >$CFG <<EOF
-default /syslinux/menu.c32
-prompt 0
-timeout 0
-
-MENU TITLE devstack Boot Menu
-
-EOF
-
-# Setup devstack boot
-mkdir -p $DEST_DIR/ubuntu
-if [ ! -d $PXEDIR ]; then
- mkdir -p $PXEDIR
-fi
-
-# Get image into place
-if [ ! -r $PXEDIR/stack-initrd.img ]; then
- cd $TOP_DIR
- $TOOLS_DIR/build_uec_ramdisk.sh $PXEDIR/stack-initrd.img
-fi
-if [ ! -r $PXEDIR/stack-initrd.gz ]; then
- gzip -1 -c $PXEDIR/stack-initrd.img >$PXEDIR/stack-initrd.gz
-fi
-cp -pu $PXEDIR/stack-initrd.gz $DEST_DIR/ubuntu
-
-if [ ! -r $PXEDIR/vmlinuz-*-generic ]; then
- MNTDIR=`mktemp -d --tmpdir mntXXXXXXXX`
- mount -t ext4 -o loop $PXEDIR/stack-initrd.img $MNTDIR
-
- if [ ! -r $MNTDIR/boot/vmlinuz-*-generic ]; then
- echo "No kernel found"
- umount $MNTDIR
- rmdir $MNTDIR
- if [ -n "$DEST_DEV" ]; then
- umount $DEST_DIR
- rmdir $DEST_DIR
- fi
- exit 1
- else
- cp -pu $MNTDIR/boot/vmlinuz-*-generic $PXEDIR
- fi
- umount $MNTDIR
- rmdir $MNTDIR
-fi
-
-# Get generic kernel version
-KNAME=`basename $PXEDIR/vmlinuz-*-generic`
-KVER=${KNAME#vmlinuz-}
-cp -pu $PXEDIR/vmlinuz-$KVER $DEST_DIR/ubuntu
-cat >>$CFG <<EOF
-
-LABEL devstack
- MENU LABEL ^devstack
- MENU DEFAULT
- KERNEL /ubuntu/vmlinuz-$KVER
- APPEND initrd=/ubuntu/stack-initrd.gz ramdisk_size=2109600 root=/dev/ram0
-EOF
-
-# Get Ubuntu
-if [ -d $PXEDIR -a -r $PXEDIR/natty-base-initrd.gz ]; then
- cp -pu $PXEDIR/natty-base-initrd.gz $DEST_DIR/ubuntu
- cat >>$CFG <<EOF
-
-LABEL ubuntu
- MENU LABEL ^Ubuntu Natty
- KERNEL /ubuntu/vmlinuz-$KVER
- APPEND initrd=/ubuntu/natty-base-initrd.gz ramdisk_size=419600 root=/dev/ram0
-EOF
-fi
-
-# Local disk boot
-cat >>$CFG <<EOF
-
-LABEL local
- MENU LABEL ^Local disk
- LOCALBOOT 0
-EOF
-
-if [ -n "$DEST_DEV" ]; then
- umount $DEST_DIR
- rmdir $DEST_DIR
-fi
-
-trap - SIGHUP SIGINT SIGTERM SIGQUIT EXIT
diff --git a/tools/copy_dev_environment_to_uec.sh b/tools/copy_dev_environment_to_uec.sh
deleted file mode 100755
index 94a4926..0000000
--- a/tools/copy_dev_environment_to_uec.sh
+++ /dev/null
@@ -1,73 +0,0 @@
-#!/usr/bin/env bash
-
-# **copy_dev_environment_to_uec.sh**
-
-# Echo commands
-set -o xtrace
-
-# Exit on error to stop unexpected errors
-set -o errexit
-
-# Keep track of the current directory
-TOOLS_DIR=$(cd $(dirname "$0") && pwd)
-TOP_DIR=$(cd $TOOLS_DIR/..; pwd)
-
-# Import common functions
-. $TOP_DIR/functions
-
-# Change dir to top of devstack
-cd $TOP_DIR
-
-# Source params
-source ./stackrc
-
-# Echo usage
-function usage {
- echo "Add stack user and keys"
- echo ""
- echo "Usage: $0 [full path to raw uec base image]"
-}
-
-# Make sure this is a raw image
-if ! qemu-img info $1 | grep -q "file format: raw"; then
- usage
- exit 1
-fi
-
-# Mount the image
-DEST=/opt/stack
-STAGING_DIR=/tmp/`echo $1 | sed "s/\//_/g"`.stage.user
-mkdir -p $STAGING_DIR
-umount $STAGING_DIR || true
-sleep 1
-mount -t ext4 -o loop $1 $STAGING_DIR
-mkdir -p $STAGING_DIR/$DEST
-
-# Create a stack user that is a member of the libvirtd group so that stack
-# is able to interact with libvirt.
-chroot $STAGING_DIR groupadd libvirtd || true
-chroot $STAGING_DIR useradd $STACK_USER -s /bin/bash -d $DEST -G libvirtd || true
-
-# Add a simple password - pass
-echo $STACK_USER:pass | chroot $STAGING_DIR chpasswd
-
-# Configure sudo
-( umask 226 && echo "$STACK_USER ALL=(ALL) NOPASSWD:ALL" \
- > $STAGING_DIR/etc/sudoers.d/50_stack_sh )
-
-# Copy over your ssh keys and env if desired
-cp_it ~/.ssh $STAGING_DIR/$DEST/.ssh
-cp_it ~/.ssh/id_rsa.pub $STAGING_DIR/$DEST/.ssh/authorized_keys
-cp_it ~/.gitconfig $STAGING_DIR/$DEST/.gitconfig
-cp_it ~/.vimrc $STAGING_DIR/$DEST/.vimrc
-cp_it ~/.bashrc $STAGING_DIR/$DEST/.bashrc
-
-# Copy devstack
-rm -rf $STAGING_DIR/$DEST/devstack
-cp_it . $STAGING_DIR/$DEST/devstack
-
-# Give stack ownership over $DEST so it may do the work needed
-chroot $STAGING_DIR chown -R $STACK_USER $DEST
-
-# Unmount
-umount $STAGING_DIR
diff --git a/tools/fixup_stuff.sh b/tools/fixup_stuff.sh
index f1dc76a..8b1e4df 100755
--- a/tools/fixup_stuff.sh
+++ b/tools/fixup_stuff.sh
@@ -124,6 +124,14 @@
if [[ $DISTRO =~ (rhel6) ]]; then
+ # install_pip.sh installs the latest setuptools over the packaged
+ # version. We can't really uninstall the packaged version if it
+ # is there, because it may remove other important things like
+ # cloud-init. Things work, but there can be an old egg file left
+ # around from the package that causes some really strange
+ # setuptools errors. Remove it if it is there.
+ sudo rm -f /usr/lib/python2.6/site-packages/setuptools-0.6*.egg-info
+
# If the ``dbus`` package was installed by DevStack dependencies the
# uuid may not be generated because the service was never started (PR#598200),
# causing Nova to stop later on complaining that ``/var/lib/dbus/machine-id``
diff --git a/tools/get_uec_image.sh b/tools/get_uec_image.sh
deleted file mode 100755
index 225742c..0000000
--- a/tools/get_uec_image.sh
+++ /dev/null
@@ -1,109 +0,0 @@
-#!/bin/bash
-
-# **get_uec_image.sh**
-
-# Download and prepare Ubuntu UEC images
-
-CACHEDIR=${CACHEDIR:-/opt/stack/cache}
-ROOTSIZE=${ROOTSIZE:-2000M}
-
-# Keep track of the current directory
-TOOLS_DIR=$(cd $(dirname "$0") && pwd)
-TOP_DIR=$(cd $TOOLS_DIR/..; pwd)
-
-# Import common functions
-. $TOP_DIR/functions
-
-# Exit on error to stop unexpected errors
-set -o errexit
-set -o xtrace
-
-function usage {
- echo "Usage: $0 - Download and prepare Ubuntu UEC images"
- echo ""
- echo "$0 [-r rootsize] release imagefile [kernel]"
- echo ""
- echo "-r size - root fs size (min 2000MB)"
- echo "release - Ubuntu release: lucid - quantal"
- echo "imagefile - output image file"
- echo "kernel - output kernel"
- exit 1
-}
-
-# Clean up any resources that may be in use
-function cleanup {
- set +o errexit
-
- # Mop up temporary files
- if [ -n "$IMG_FILE_TMP" -a -e "$IMG_FILE_TMP" ]; then
- rm -f $IMG_FILE_TMP
- fi
-
- # Kill ourselves to signal any calling process
- trap 2; kill -2 $$
-}
-
-while getopts hr: c; do
- case $c in
- h) usage
- ;;
- r) ROOTSIZE=$OPTARG
- ;;
- esac
-done
-shift `expr $OPTIND - 1`
-
-if [[ ! "$#" -eq "2" && ! "$#" -eq "3" ]]; then
- usage
-fi
-
-# Default args
-DIST_NAME=$1
-IMG_FILE=$2
-IMG_FILE_TMP=`mktemp $IMG_FILE.XXXXXX`
-KERNEL=$3
-
-case $DIST_NAME in
- saucy) ;;
- raring) ;;
- quantal) ;;
- precise) ;;
- *) echo "Unknown release: $DIST_NAME"
- usage
- ;;
-esac
-
-trap cleanup SIGHUP SIGINT SIGTERM SIGQUIT EXIT
-
-# Check dependencies
-if [ ! -x "`which qemu-img`" -o -z "`dpkg -l | grep cloud-utils`" ]; then
- # Missing KVM?
- apt_get install qemu-kvm cloud-utils
-fi
-
-# Find resize script
-RESIZE=`which resize-part-image || which uec-resize-image`
-if [ -z "$RESIZE" ]; then
- echo "resize tool from cloud-utils not found"
- exit 1
-fi
-
-# Get the UEC image
-UEC_NAME=$DIST_NAME-server-cloudimg-amd64
-if [ ! -d $CACHEDIR/$DIST_NAME ]; then
- mkdir -p $CACHEDIR/$DIST_NAME
-fi
-if [ ! -e $CACHEDIR/$DIST_NAME/$UEC_NAME.tar.gz ]; then
- (cd $CACHEDIR/$DIST_NAME && wget -N http://uec-images.ubuntu.com/$DIST_NAME/current/$UEC_NAME.tar.gz)
- (cd $CACHEDIR/$DIST_NAME && tar Sxvzf $UEC_NAME.tar.gz)
-fi
-
-$RESIZE $CACHEDIR/$DIST_NAME/$UEC_NAME.img ${ROOTSIZE} $IMG_FILE_TMP
-mv $IMG_FILE_TMP $IMG_FILE
-
-# Copy kernel to destination
-if [ -n "$KERNEL" ]; then
- cp -p $CACHEDIR/$DIST_NAME/*-vmlinuz-virtual $KERNEL
-fi
-
-trap - SIGHUP SIGINT SIGTERM SIGQUIT EXIT
diff --git a/tools/install_openvpn.sh b/tools/install_openvpn.sh
deleted file mode 100755
index 9a4f036..0000000
--- a/tools/install_openvpn.sh
+++ /dev/null
@@ -1,221 +0,0 @@
-#!/bin/bash
-
-# **install_openvpn.sh**
-
-# Install OpenVPN and generate required certificates
-#
-# install_openvpn.sh --client name
-# install_openvpn.sh --server [name]
-#
-# name is used on the CN of the generated cert, and the filename of
-# the configuration, certificate and key files.
-#
-# --server mode configures the host with a running OpenVPN server instance
-# --client mode creates a tarball of a client configuration for this server
-
-# Get config file
-if [ -e localrc ]; then
- . localrc
-fi
-if [ -e vpnrc ]; then
- . vpnrc
-fi
-
-# Do some IP manipulation
-function cidr2netmask {
- set -- $(( 5 - ($1 / 8) )) 255 255 255 255 $(( (255 << (8 - ($1 % 8))) & 255 )) 0 0 0
- if [[ $1 -gt 1 ]]; then
- shift $1
- else
- shift
- fi
- echo ${1-0}.${2-0}.${3-0}.${4-0}
-}
-
-FIXED_NET=`echo $FIXED_RANGE | cut -d'/' -f1`
-FIXED_CIDR=`echo $FIXED_RANGE | cut -d'/' -f2`
-FIXED_MASK=`cidr2netmask $FIXED_CIDR`
-
-# VPN Config
-VPN_SERVER=${VPN_SERVER:-`ifconfig eth0 | awk "/inet addr:/ { print \$2 }" | cut -d: -f2`} # 50.56.12.212
-VPN_PROTO=${VPN_PROTO:-tcp}
-VPN_PORT=${VPN_PORT:-6081}
-VPN_DEV=${VPN_DEV:-tap0}
-VPN_BRIDGE=${VPN_BRIDGE:-br100}
-VPN_BRIDGE_IF=${VPN_BRIDGE_IF:-$FLAT_INTERFACE}
-VPN_CLIENT_NET=${VPN_CLIENT_NET:-$FIXED_NET}
-VPN_CLIENT_MASK=${VPN_CLIENT_MASK:-$FIXED_MASK}
-VPN_CLIENT_DHCP="${VPN_CLIENT_DHCP:-net.1 net.254}"
-
-VPN_DIR=/etc/openvpn
-CA_DIR=$VPN_DIR/easy-rsa
-
-function usage {
- echo "$0 - OpenVPN install and certificate generation"
- echo ""
- echo "$0 --client name"
- echo "$0 --server [name]"
- echo ""
- echo " --server mode configures the host with a running OpenVPN server instance"
- echo " --client mode creates a tarball of a client configuration for this server"
- exit 1
-}
-
-if [ -z $1 ]; then
- usage
-fi
-
-# Install OpenVPN
-VPN_EXEC=`which openvpn`
-if [ -z "$VPN_EXEC" -o ! -x "$VPN_EXEC" ]; then
- apt-get install -y openvpn bridge-utils
-fi
-if [ ! -d $CA_DIR ]; then
- cp -pR /usr/share/doc/openvpn/examples/easy-rsa/2.0/ $CA_DIR
-fi
-
-# Keep track of the current directory
-TOOLS_DIR=$(cd $(dirname "$0") && pwd)
-TOP_DIR=$(cd $TOOLS_DIR/.. && pwd)
-
-WEB_DIR=$TOP_DIR/../vpn
-if [[ ! -d $WEB_DIR ]]; then
- mkdir -p $WEB_DIR
-fi
-WEB_DIR=$(cd $TOP_DIR/../vpn && pwd)
-
-cd $CA_DIR
-source ./vars
-
-# Override the defaults
-export KEY_COUNTRY="US"
-export KEY_PROVINCE="TX"
-export KEY_CITY="SanAntonio"
-export KEY_ORG="Cloudbuilders"
-export KEY_EMAIL="rcb@lists.rackspace.com"
-
-if [ ! -r $CA_DIR/keys/dh1024.pem ]; then
- # Initialize a new CA
- $CA_DIR/clean-all
- $CA_DIR/build-dh
- $CA_DIR/pkitool --initca
- openvpn --genkey --secret $CA_DIR/keys/ta.key ## Build a TLS key
-fi
-
-function do_server {
- NAME=$1
- # Generate server certificate
- $CA_DIR/pkitool --server $NAME
-
- (cd $CA_DIR/keys;
- cp $NAME.crt $NAME.key ca.crt dh1024.pem ta.key $VPN_DIR
- )
- cat >$VPN_DIR/br-up <<EOF
-#!/bin/bash
-
-BR="$VPN_BRIDGE"
-TAP="\$1"
-
-if [[ ! -d /sys/class/net/\$BR ]]; then
- brctl addbr \$BR
-fi
-
-for t in \$TAP; do
- openvpn --mktun --dev \$t
- brctl addif \$BR \$t
- ifconfig \$t 0.0.0.0 promisc up
-done
-EOF
- chmod +x $VPN_DIR/br-up
- cat >$VPN_DIR/br-down <<EOF
-#!/bin/bash
-
-BR="$VPN_BRIDGE"
-TAP="\$1"
-
-for i in \$TAP; do
- brctl delif \$BR $t
- openvpn --rmtun --dev \$i
-done
-EOF
- chmod +x $VPN_DIR/br-down
- cat >$VPN_DIR/$NAME.conf <<EOF
-proto $VPN_PROTO
-port $VPN_PORT
-dev $VPN_DEV
-up $VPN_DIR/br-up
-down $VPN_DIR/br-down
-cert $NAME.crt
-key $NAME.key # This file should be kept secret
-ca ca.crt
-dh dh1024.pem
-duplicate-cn
-server-bridge $VPN_CLIENT_NET $VPN_CLIENT_MASK $VPN_CLIENT_DHCP
-ifconfig-pool-persist ipp.txt
-comp-lzo
-user nobody
-group nogroup
-persist-key
-persist-tun
-status openvpn-status.log
-EOF
- /etc/init.d/openvpn restart
-}
-
-function do_client {
- NAME=$1
- # Generate a client certificate
- $CA_DIR/pkitool $NAME
-
- TMP_DIR=`mktemp -d`
- (cd $CA_DIR/keys;
- cp -p ca.crt ta.key $NAME.key $NAME.crt $TMP_DIR
- )
- if [ -r $VPN_DIR/hostname ]; then
- HOST=`cat $VPN_DIR/hostname`
- else
- HOST=`hostname`
- fi
- cat >$TMP_DIR/$HOST.conf <<EOF
-proto $VPN_PROTO
-port $VPN_PORT
-dev $VPN_DEV
-cert $NAME.crt
-key $NAME.key # This file should be kept secret
-ca ca.crt
-client
-remote $VPN_SERVER $VPN_PORT
-resolv-retry infinite
-nobind
-user nobody
-group nogroup
-persist-key
-persist-tun
-comp-lzo
-verb 3
-EOF
- (cd $TMP_DIR; tar cf $WEB_DIR/$NAME.tar *)
- rm -rf $TMP_DIR
- echo "Client certificate and configuration is in $WEB_DIR/$NAME.tar"
-}
-
-# Process command line args
-case $1 in
- --client) if [ -z $2 ]; then
- usage
- fi
- do_client $2
- ;;
- --server) if [ -z $2 ]; then
- NAME=`hostname`
- else
- NAME=$2
- # Save for --client use
- echo $NAME >$VPN_DIR/hostname
- fi
- do_server $NAME
- ;;
- --clean) $CA_DIR/clean-all
- ;;
- *) usage
-esac
diff --git a/tools/install_pip.sh b/tools/install_pip.sh
index 150faaa..55ef93e 100755
--- a/tools/install_pip.sh
+++ b/tools/install_pip.sh
@@ -50,6 +50,25 @@
}
+function configure_pypi_alternative_url {
+ PIP_ROOT_FOLDER="$HOME/.pip"
+ PIP_CONFIG_FILE="$PIP_ROOT_FOLDER/pip.conf"
+ if [[ ! -d $PIP_ROOT_FOLDER ]]; then
+ echo "Creating $PIP_ROOT_FOLDER"
+ mkdir $PIP_ROOT_FOLDER
+ fi
+ if [[ ! -f $PIP_CONFIG_FILE ]]; then
+ echo "Creating $PIP_CONFIG_FILE"
+ touch $PIP_CONFIG_FILE
+ fi
+ if ! ini_has_option "$PIP_CONFIG_FILE" "global" "index-url"; then
+ # index-url is not set yet; point pip at the alternative index
+ iniset "$PIP_CONFIG_FILE" "global" "index-url" "$PYPI_ALTERNATIVE_URL"
+ fi
+
+}
+
+
# Show starting versions
get_versions
@@ -60,6 +79,10 @@
install_get_pip
+if [[ -n $PYPI_ALTERNATIVE_URL ]]; then
+ configure_pypi_alternative_url
+fi
+
pip_install -U setuptools
get_versions
diff --git a/tools/warm_apts_for_uec.sh b/tools/warm_apts_for_uec.sh
deleted file mode 100755
index c57fc2e..0000000
--- a/tools/warm_apts_for_uec.sh
+++ /dev/null
@@ -1,53 +0,0 @@
-#!/usr/bin/env bash
-
-# **warm_apts_for_uec.sh**
-
-# Echo commands
-set -o xtrace
-
-# Exit on error to stop unexpected errors
-set -o errexit
-
-# Keep track of the current directory
-TOOLS_DIR=$(cd $(dirname "$0") && pwd)
-TOP_DIR=`cd $TOOLS_DIR/..; pwd`
-
-# Change dir to top of devstack
-cd $TOP_DIR
-
-# Echo usage
-function usage {
- echo "Cache OpenStack dependencies on a uec image to speed up performance."
- echo ""
- echo "Usage: $0 [full path to raw uec base image]"
-}
-
-# Make sure this is a raw image
-if ! qemu-img info $1 | grep -q "file format: raw"; then
- usage
- exit 1
-fi
-
-# Make sure we are in the correct dir
-if [ ! -d files/apts ]; then
- echo "Please run this script from devstack/tools/"
- exit 1
-fi
-
-# Mount the image
-STAGING_DIR=/tmp/`echo $1 | sed "s/\//_/g"`.stage
-mkdir -p $STAGING_DIR
-umount $STAGING_DIR || true
-sleep 1
-mount -t ext4 -o loop $1 $STAGING_DIR
-
-# Make sure that base requirements are installed
-cp /etc/resolv.conf $STAGING_DIR/etc/resolv.conf
-
-# Perform caching on the base image to speed up subsequent runs
-chroot $STAGING_DIR apt-get update
-chroot $STAGING_DIR apt-get install -y --download-only `cat files/apts/* | grep NOPRIME | cut -d\# -f1`
-chroot $STAGING_DIR apt-get install -y --force-yes `cat files/apts/* | grep -v NOPRIME | cut -d\# -f1` || true
-
-# Unmount
-umount $STAGING_DIR
diff --git a/tools/xen/install_os_domU.sh b/tools/xen/install_os_domU.sh
index 44e8dc1..12e861e 100755
--- a/tools/xen/install_os_domU.sh
+++ b/tools/xen/install_os_domU.sh
@@ -207,6 +207,8 @@
-e "s,\(d-i mirror/http/hostname string\).*,\1 $UBUNTU_INST_HTTP_HOSTNAME,g" \
-e "s,\(d-i mirror/http/directory string\).*,\1 $UBUNTU_INST_HTTP_DIRECTORY,g" \
-e "s,\(d-i mirror/http/proxy string\).*,\1 $UBUNTU_INST_HTTP_PROXY,g" \
+ -e "s,\(d-i passwd/root-password password\).*,\1 $GUEST_PASSWORD,g" \
+ -e "s,\(d-i passwd/root-password-again password\).*,\1 $GUEST_PASSWORD,g" \
-i "${HTTP_SERVER_LOCATION}/devstackubuntupreseed.cfg"
fi
@@ -382,10 +384,16 @@
while ! ssh_no_check -q stack@$OS_VM_MANAGEMENT_ADDRESS "service devstack status | grep -q running"; do
sleep 10
done
- echo -n "devstack is running"
+ echo -n "devstack service is running, waiting for stack.sh to start logging..."
+
+ while ! ssh_no_check -q stack@$OS_VM_MANAGEMENT_ADDRESS "test -e /tmp/devstack/log/stack.log"; do
+ sleep 10
+ done
set -x
- # Watch devstack's output
+ # Watch devstack's output (which doesn't start until stack.sh is
+ # running), but wait for run.sh (which starts stack.sh) to exit, as
+ # that is what writes the succeeded cookie.
pid=`ssh_no_check -q stack@$OS_VM_MANAGEMENT_ADDRESS pgrep run.sh`
ssh_no_check -q stack@$OS_VM_MANAGEMENT_ADDRESS "tail --pid $pid -n +1 -f /tmp/devstack/log/stack.log"
diff --git a/tools/xen/scripts/on_exit.sh b/tools/xen/scripts/on_exit.sh
index 2441e3d..2846dc4 100755
--- a/tools/xen/scripts/on_exit.sh
+++ b/tools/xen/scripts/on_exit.sh
@@ -3,7 +3,9 @@
set -e
set -o xtrace
-declare -a on_exit_hooks
+if [ -z "${on_exit_hooks:-}" ]; then
+ on_exit_hooks=()
+fi
on_exit()
{
diff --git a/unstack.sh b/unstack.sh
index a5e7b87..fe5fc77 100755
--- a/unstack.sh
+++ b/unstack.sh
@@ -122,9 +122,10 @@
stop_horizon
fi
-# Kill TLS proxies
+# Kill TLS proxies and cleanup certificates
if is_service_enabled tls-proxy; then
- killall stud
+ stop_tls_proxy
+ cleanup_CA
fi
SCSI_PERSIST_DIR=$CINDER_STATE_PATH/volumes/*