DevStack is a set of scripts and utilities to quickly deploy an OpenStack cloud.
Read more at http://devstack.org (built from the gh-pages branch)
IMPORTANT: Be sure to carefully read stack.sh and any other scripts you execute before you run them, as they install software and may alter your networking configuration. We strongly recommend that you run stack.sh in a clean and disposable VM when you are first getting started.
If you would like to use XenServer as the hypervisor, please refer to the instructions in ./tools/xen/README.md.
The DevStack master branch generally points to trunk versions of OpenStack components. For older, stable versions, look for branches named stable/[release] in the DevStack repo. For example, you can do the following to create a Diablo OpenStack cloud:
git checkout stable/diablo
./stack.sh
You can also pick specific OpenStack project releases by setting the appropriate *_BRANCH variables in localrc (look in stackrc for the default set). Usually just before a release there will be milestone-proposed branches that need to be tested:
GLANCE_REPO=https://github.com/openstack/glance.git
GLANCE_BRANCH=milestone-proposed
Installing in a dedicated disposable VM is safer than installing on your dev machine! To start a dev cloud:
./stack.sh
When the script finishes executing, you should be able to access OpenStack endpoints such as the Horizon dashboard and the Keystone API.
We also provide an environment file that you can use to interact with your cloud via CLI:
# source openrc file to load your environment with osapi and ec2 creds
. openrc
# list instances
nova list
If the EC2 API is your cup-o-tea, you can create credentials and use euca2ools:
# source eucarc to generate EC2 credentials and set up the environment
. eucarc
# list instances using ec2 api
euca-describe-instances
You can override environment variables used in stack.sh by creating a file named localrc. It is likely that you will need to do this to tweak your networking configuration should you need to access your cloud from a different host.
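A minimal sketch of such a localrc (HOST_IP and FLOATING_RANGE are the usual networking knobs for making the cloud reachable from other hosts; the values below are placeholders to adapt to your environment):
# IP address other hosts will use to reach this machine
HOST_IP=192.168.1.10
# floating IP range routable from your local network
FLOATING_RANGE=192.168.1.224/27
# pre-set passwords and token so stack.sh does not prompt for them
ADMIN_PASSWORD=supersecret
MYSQL_PASSWORD=iheartdatabases
RABBIT_PASSWORD=flopsymopsy
SERVICE_PASSWORD=iheartksl
SERVICE_TOKEN=xyzpdqlazydog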
Multiple database backends are available. The available databases are defined in the lib/databases directory. mysql is the default database; choose a different one by putting the following in localrc:
disable_service mysql
enable_service postgresql
Multiple RPC backends are available. Currently, this includes RabbitMQ (default), Qpid, and ZeroMQ. Your backend of choice may be selected via localrc. Note that selecting more than one RPC backend will result in a failure.
Example (ZeroMQ):
ENABLED_SERVICES="$ENABLED_SERVICES,-rabbit,-qpid,zeromq"
Example (Qpid):
ENABLED_SERVICES="$ENABLED_SERVICES,-rabbit,-zeromq,qpid"
Swift is enabled by default, configured with only one replica to avoid being I/O- and memory-intensive on a small VM. When running with only one replica, the account, container, and object services will run directly in screen. Other services, such as the replicator, updaters, and auditor, run in the background.
If you would like to disable Swift, you can add this to your localrc:
disable_service s-proxy s-object s-container s-account
If you want a minimal Swift install with only Swift and Keystone, you can have this instead in your localrc:
disable_all_services
enable_service key mysql s-proxy s-object s-container s-account
If you want to test a more realistic Swift cluster with multiple replicas, you can do so by customizing the SWIFT_REPLICAS variable in your localrc (usually setting it to 3).
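For example, in your localrc:
SWIFT_REPLICAS=3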
If you are enabling swift3 in ENABLED_SERVICES, DevStack will install the swift3 middleware emulation. Swift will then be configured to act as an S3 endpoint for Keystone, effectively replacing nova-objectstore.
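For example, one way to enable it in your localrc (using the same enable_service helper shown above) is:
enable_service swift3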
Only the Swift proxy server is launched in the screen session; all other services are started in the background and managed by the swift-init tool.
Neutron: Basic Setup
In order to enable Neutron in a single-node setup, you'll need the following settings in your localrc:
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
# Optional, to enable tempest configuration as part of devstack
enable_service tempest
Then run stack.sh as normal.
DevStack supports adding specific Neutron configuration flags to the service, Open vSwitch plugin, and LinuxBridge plugin configuration files. To make use of this feature, the following variables are defined and can be configured in your localrc file:
Variable Name               Config File   Section Modified
-------------------------------------------------------------------------------------
Q_SRV_EXTRA_OPTS            Plugin        `OVS` (for Open vSwitch) or `LINUX_BRIDGE` (for LinuxBridge)
Q_AGENT_EXTRA_AGENT_OPTS    Plugin        AGENT
Q_AGENT_EXTRA_SRV_OPTS      Plugin        `OVS` (for Open vSwitch) or `LINUX_BRIDGE` (for LinuxBridge)
Q_SRV_EXTRA_DEFAULT_OPTS    Service       DEFAULT
An example of using the variables in your localrc is below:
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=vxlan vxlan_udp_port=8472)
Q_SRV_EXTRA_OPTS=(tenant_network_type=vxlan)
If tempest has been successfully configured, a basic set of smoke tests can be run as follows:
$ cd /opt/stack/tempest
$ nosetests tempest/tests/network/test_network_basic_ops.py
A more interesting setup involves running multiple compute nodes, with Neutron networks connecting VMs on different compute nodes. You should run at least one "controller node", which should have a stackrc that includes at least:
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
You likely want to change your localrc to run a scheduler that will balance VMs across hosts:
SCHEDULER=nova.scheduler.simple.SimpleScheduler
You can then run many compute nodes, each of which should have a stackrc which includes the following, with the IP address of the above controller node:
ENABLED_SERVICES=n-cpu,rabbit,g-api,neutron,q-agt
SERVICE_HOST=[IP of controller node]
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST
MATCHMAKER_REDIS_HOST=$SERVICE_HOST
Cells is a new scaling option with a full spec at http://wiki.openstack.org/blueprint-nova-compute-cells.
To set up a cells environment, add the following to your localrc:
enable_service n-cell
enable_service n-api-meta
MULTI_HOST=True
# The following have not been tested with cells, they may or may not work.
disable_service n-obj
disable_service cinder
disable_service c-sch
disable_service c-api
disable_service c-vol
disable_service n-xvnc
Be aware that some features are currently missing in cells, one notable example being security groups.