DevStack is a set of scripts and utilities to quickly deploy an OpenStack cloud.
Read more at http://devstack.org.
IMPORTANT: Be sure to carefully read `stack.sh` and any other scripts you execute before you run them, as they install software and will alter your networking configuration. We strongly recommend that you run `stack.sh` in a clean and disposable VM when you are first getting started.
The DevStack master branch generally points to trunk versions of OpenStack components. For older, stable versions, look for branches named `stable/[release]` in the DevStack repo. For example, you can do the following to create a Juno OpenStack cloud:
```
git checkout stable/juno
./stack.sh
```
You can also pick specific OpenStack project releases by setting the appropriate `*_BRANCH` variables in the `localrc` section of `local.conf` (look in `stackrc` for the default set). Usually just before a release there will be milestone-proposed branches that need to be tested:
```
GLANCE_REPO=git://git.openstack.org/openstack/glance.git
GLANCE_BRANCH=milestone-proposed
```
Installing in a dedicated disposable VM is safer than installing on your dev machine! Plus you can pick one of the supported Linux distros for your VM. To start a dev cloud run the following NOT AS ROOT (see DevStack Execution Environment below for more on user accounts):
```
./stack.sh
```
When the script finishes executing, you should be able to access your OpenStack endpoints.
We also provide an environment file that you can use to interact with your cloud via CLI:
```
# source openrc file to load your environment with OpenStack CLI creds
. openrc
# list instances
nova list
```
If the EC2 API is your cup-o-tea, you can create credentials and use euca2ools:
```
# source eucarc to generate EC2 credentials and set up the environment
. eucarc
# list instances using ec2 api
euca-describe-instances
```
DevStack runs rampant over the system it runs on, installing things and uninstalling other things. Running this on a system you care about is a recipe for disappointment, or worse. Alas, we're all in the virtualization business here, so run it in a VM. And take advantage of the snapshot capabilities of your hypervisor of choice to reduce testing cycle times. You might even save enough time to write one more feature before the next feature freeze...
`stack.sh` needs to have root access for a lot of tasks, but uses `sudo` for all of those tasks. However, it needs to be not-root for most of its work and for all of the OpenStack services. `stack.sh` specifically does not run if started as root.
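The not-root requirement boils down to a UID check. The following is a simplified sketch of the kind of guard `stack.sh` performs, not its actual code; `check_not_root` is a hypothetical helper written for illustration:

```shell
# Simplified sketch of a root guard like the one stack.sh performs.
# check_not_root is a hypothetical helper, not a DevStack function.
check_not_root() {
    uid=$1
    if [ "$uid" -eq 0 ]; then
        echo "refusing to run as root"
        return 1
    fi
    echo "ok: uid $uid"
}

# A typical unprivileged uid passes the check.
check_not_root 1000   # → ok: uid 1000
```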
This is a recent change (Oct 2013) from the previous behaviour of automatically creating a `stack` user. Automatically creating user accounts is not the right response to running as root, so that bit is now an explicit step using `tools/create-stack-user.sh`. Run that (as root!) or just check it out to see what DevStack's expectations are for the account it runs under. Many people simply use their usual login (the default 'ubuntu' login on a UEC image, for example).
You can override environment variables used in `stack.sh` by creating a file named `local.conf` with a `localrc` section as shown below. You will likely need to do this to tweak your networking configuration should you need to access your cloud from a different host.
```
[[local|localrc]]
VARIABLE=value
```
See the Local Configuration section below for more details.
Multiple database backends are available. The available databases are defined in the `lib/databases` directory. `mysql` is the default database; to choose a different one, put the following in the `localrc` section:
```
disable_service mysql
enable_service postgresql
```
Multiple RPC backends are available. Currently, this includes RabbitMQ (default), Qpid, and ZeroMQ. Your backend of choice may be selected via the `localrc` section.
Note that selecting more than one RPC backend will result in a failure.
Example (ZeroMQ):
```
ENABLED_SERVICES="$ENABLED_SERVICES,-rabbit,-qpid,zeromq"
```
Example (Qpid):
```
ENABLED_SERVICES="$ENABLED_SERVICES,-rabbit,-zeromq,qpid"
```
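The `-rabbit,-qpid,zeromq` style entries rely on DevStack's service-list semantics: entries are processed in order, a bare name enables a service, and a leading `-` removes it. Below is a minimal POSIX-shell sketch of that expansion, simplified from (and not identical to) DevStack's real logic in `functions-common`:

```shell
# Simplified sketch of DevStack's ENABLED_SERVICES expansion:
# "name" adds a service, "-name" removes a previously added one.
expand_services() {
    result=""
    old_ifs=$IFS
    IFS=','
    for svc in $1; do
        case $svc in
            -*) # removal: strip the leading '-' and drop it from the list
                name=${svc#-}
                result=$(printf '%s' ",$result," | sed "s/,$name,/,/")
                result=${result#,}
                result=${result%,}
                ;;
            *)  # addition: append only if not already present
                case ",$result," in
                    *",$svc,"*) ;;
                    *) result="${result:+$result,}$svc" ;;
                esac
                ;;
        esac
    done
    IFS=$old_ifs
    printf '%s\n' "$result"
}

expand_services "rabbit,mysql,-rabbit,zeromq"   # → mysql,zeromq
```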
The Apache web server can be enabled for WSGI services that support being deployed under HTTPD + mod_wsgi. By default, services that recommend running under HTTPD + mod_wsgi are deployed under Apache. To use an alternative deployment strategy (e.g. eventlet) for services that support an alternative to HTTPD + mod_wsgi, set `ENABLE_HTTPD_MOD_WSGI_SERVICES` to `False` in your `local.conf`.
Each service that can be run under HTTPD + mod_wsgi also has an override toggle available that can be set in your `local.conf`. Keystone is run under HTTPD + mod_wsgi by default.
Example (Keystone):
```
KEYSTONE_USE_MOD_WSGI="True"
```
Example (Swift):
```
SWIFT_USE_MOD_WSGI="True"
```
Swift is disabled by default. When enabled, it is configured with only one replica to avoid being IO/memory intensive on a small VM. When running with only one replica, the account, container, and object services run directly in screen. The other services, like the replicator, updaters, and auditor, run in the background.
If you would like to enable Swift you can add this to your `localrc` section:
```
enable_service s-proxy s-object s-container s-account
```
If you want a minimal Swift install with only Swift and Keystone you can have this instead in your `localrc` section:
```
disable_all_services
enable_service key mysql s-proxy s-object s-container s-account
```
If you want to test a real, normal Swift cluster with multiple replicas, you can do so by customizing the variable `SWIFT_REPLICAS` in your `localrc` section (usually to 3).
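For example, a hedged `localrc` fragment (the `SWIFT_HASH` value here is an arbitrary placeholder; Swift needs a random string of your own choosing there):

```
SWIFT_REPLICAS=3
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
```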
If you enable `swift3` in `ENABLED_SERVICES`, DevStack will install the swift3 middleware emulation. Swift will be configured to act as an S3 endpoint for Keystone, effectively replacing `nova-objectstore`.
Only the Swift proxy server is launched in the screen session; all other services are started in the background and managed by the `swift-init` tool.
Basic Setup
In order to enable Neutron in a single-node setup, you'll need the following settings in your `local.conf`:
```
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service q-metering
# Optional, to enable tempest configuration as part of DevStack
enable_service tempest
```
Then run `stack.sh` as normal.
DevStack supports setting specific Neutron configuration flags in the service, Open vSwitch plugin, and LinuxBridge plugin configuration files. To make use of this feature, the settings can be added to `local.conf`. The old `Q_XXX_EXTRA_XXX_OPTS` variables are deprecated and will be removed in the near future. The `local.conf` headers for the replacements are:

`Q_SRV_EXTRA_OPTS`:
```
[[post-config|/$Q_PLUGIN_CONF_FILE]]
[linuxbridge]  # or [ovs]
```
Example extra config in `local.conf`:
```
[[post-config|/$Q_PLUGIN_CONF_FILE]]
[agent]
tunnel_type=vxlan
vxlan_udp_port=8472

[[post-config|$NEUTRON_CONF]]
[DEFAULT]
tenant_network_type=vxlan
```
DevStack also supports configuring the Neutron ML2 plugin. The ML2 plugin can run with the OVS, LinuxBridge, or Hyper-V agents on compute hosts. This is a simple way to configure the ml2 plugin:
```
# VLAN configuration
Q_PLUGIN=ml2
ENABLE_TENANT_VLANS=True

# GRE tunnel configuration
Q_PLUGIN=ml2
ENABLE_TENANT_TUNNELS=True

# VXLAN tunnel configuration
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan
```
The above will default in DevStack to using the OVS agent on each compute host. To change this, set the `Q_AGENT` variable to the agent you want to run (e.g. `linuxbridge`).
| Variable Name | Notes |
|---|---|
| `Q_AGENT` | Specifies which agent to run with the ML2 plugin (either `openvswitch` or `linuxbridge`). |
| `Q_ML2_PLUGIN_MECHANISM_DRIVERS` | The ML2 MechanismDrivers to load. The default is none. Note that ML2 will work with the OVS and LinuxBridge agents by default. |
| `Q_ML2_PLUGIN_TYPE_DRIVERS` | The ML2 TypeDrivers to load. Defaults to all available TypeDrivers. |
| `Q_ML2_PLUGIN_GRE_TYPE_OPTIONS` | GRE TypeDriver options. Defaults to none. |
| `Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS` | VXLAN TypeDriver options. Defaults to none. |
| `Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS` | VLAN TypeDriver options. Defaults to none. |
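Putting those variables together, a hedged `localrc` fragment selecting the LinuxBridge agent with VXLAN tenant networks might look like this (the value choices are illustrative, not required):

```
[[local|localrc]]
Q_PLUGIN=ml2
Q_AGENT=linuxbridge
Q_ML2_TENANT_NETWORK_TYPE=vxlan
```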
Heat is enabled by default (see the `stackrc` file). To disable it explicitly you'll need the following settings in your `localrc` section:
```
disable_service heat h-api h-api-cfn h-api-cw h-eng
```
Heat can also run in standalone mode, and be configured to orchestrate on an external OpenStack cloud. To launch only Heat in standalone mode you'll need the following settings in your `localrc` section:
```
disable_all_services
enable_service rabbit mysql heat h-api h-api-cfn h-api-cw h-eng
HEAT_STANDALONE=True
KEYSTONE_SERVICE_HOST=...
KEYSTONE_AUTH_HOST=...
```
If tempest has been successfully configured, a basic set of smoke tests can be run as follows:
```
$ cd /opt/stack/tempest
$ nosetests tempest/scenario/test_network_basic_ops.py
```
If you would like to use XenServer as the hypervisor, please refer to the instructions in `./tools/xen/README.md`.
DevStack has a hook mechanism to call out to a dispatch script at specific points in the execution of `stack.sh`, `unstack.sh`, and `clean.sh`. This allows upper-layer projects, especially those that the lower-layer projects have no dependency on, to be added to DevStack without modifying the core scripts. Tempest is built this way as an example of how to structure the dispatch script; see `extras.d/80-tempest.sh`. See `extras.d/README.md` for more information.
A more interesting setup involves running multiple compute nodes, with Neutron networks connecting VMs on different compute nodes. You should run at least one "controller node", which should have a `stackrc` that includes at least:
```
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
```
You likely want to change your `localrc` section to run a scheduler that will balance VMs across hosts:
```
SCHEDULER=nova.scheduler.simple.SimpleScheduler
```
You can then run many compute nodes, each of which should have a `stackrc` which includes the following, with the IP address of the above controller node:
```
ENABLED_SERVICES=n-cpu,rabbit,g-api,neutron,q-agt
SERVICE_HOST=[IP of controller node]
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST
MATCHMAKER_REDIS_HOST=$SERVICE_HOST
```
We want to set up two DevStack instances (RegionOne and RegionTwo) with a shared Keystone (same users and services) and a shared Horizon. Keystone and Horizon will be located in RegionOne. The full spec is available at: https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat.
In RegionOne:
```
REGION_NAME=RegionOne
```
In RegionTwo:
```
disable_service horizon
KEYSTONE_SERVICE_HOST=<KEYSTONE_IP_ADDRESS_FROM_REGION_ONE>
KEYSTONE_AUTH_HOST=<KEYSTONE_IP_ADDRESS_FROM_REGION_ONE>
REGION_NAME=RegionTwo
```
Cells is a new scaling option with a full spec at: http://wiki.openstack.org/blueprint-nova-compute-cells.
To set up a cells environment, add the following to your `localrc` section:
```
enable_service n-cell
```
Be aware that there are some features currently missing in cells, one notable example being security groups. The exercises have been patched to disable functionality not supported by cells.
Historically DevStack has used `localrc` to contain all local configuration and customizations. More and more of the configuration variables available for DevStack are passed through to the individual project configuration files. The old mechanism for this required specific code for each file and did not scale well. This is now handled by a master local configuration file.
The new config file `local.conf` is an extended-INI format that introduces a meta-section header providing some additional information, such as a phase name and a destination config filename:
```
[[ <phase> | <config-file-name> ]]
```
where `<phase>` is one of a set of phase names defined by `stack.sh` and `<config-file-name>` is the configuration filename. The filename is eval'ed in the `stack.sh` context, so all environment variables are available and may be used. Using the project config file variables in the header is strongly suggested (see the `NOVA_CONF` example below). If the path of the config file does not exist, it is skipped.
The defined phases are:

* **local** - extracts `localrc` from `local.conf` before `stackrc` is sourced
* **post-config** - runs after the layer 2 services are configured and before they are started
* **extra** - runs after services are started and before any files in `extra.d` are executed
* **post-extra** - runs after files in `extra.d` are executed

The file is processed strictly in sequence; meta-sections may be specified more than once, but if any settings are duplicated the last to appear in the file will be used.
```
[[post-config|$NOVA_CONF]]
[DEFAULT]
use_syslog = True

[osapi_v3]
enabled = False
```
A specific meta-section, `local|localrc`, is used to provide a default `localrc` file (actually `.localrc.auto`). This allows all custom settings for DevStack to be contained in a single file. If `localrc` exists, it will be used instead, to preserve backward compatibility.
```
[[local|localrc]]
FIXED_RANGE=10.254.1.0/24
ADMIN_PASSWORD=speciale
LOGFILE=$DEST/logs/stack.sh.log
```
Note that `Q_PLUGIN_CONF_FILE` is unique in that it is assumed to NOT start with a `/` (slash) character. A slash will need to be added:
```
[[post-config|/$Q_PLUGIN_CONF_FILE]]
```
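For instance, if `Q_PLUGIN_CONF_FILE` held the illustrative value `etc/neutron/plugins/ml2/ml2_conf.ini` (your actual value depends on the plugin selected), the header above would expand to:

```
[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
```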