.. Tom Weininger | 1516997 | 2022-09-14 17:16:00 +0200

Devstack with Octavia Load Balancing
====================================

Starting with the OpenStack Pike release, Octavia is a standalone service
that provides load balancing for OpenStack.

This guide shows how to create a devstack with the `Octavia API`_ enabled.

.. _Octavia API: https://docs.openstack.org/api-ref/load-balancer/v2/index.html

Phase 1: Create DevStack + 2 nova instances
-------------------------------------------

First, set up a VM of your choice with at least 8 GB RAM and 16 GB of disk
space, and make sure it is up to date. Install git and any other developer
tools you find useful.

Install devstack::

    git clone https://opendev.org/openstack/devstack
    cd devstack/tools
    sudo ./create-stack-user.sh
    cd ../..
    sudo mv devstack /opt/stack
    sudo chown -R stack:stack /opt/stack/devstack

This will clone the current devstack code locally, then set up the "stack"
account that the devstack services will run under. Finally, it will move
devstack into its default location in ``/opt/stack/devstack``.

Edit your ``/opt/stack/devstack/local.conf`` to look like::

    [[local|localrc]]
    # ===== BEGIN localrc =====
    DATABASE_PASSWORD=password
    ADMIN_PASSWORD=password
    SERVICE_PASSWORD=password
    SERVICE_TOKEN=password
    RABBIT_PASSWORD=password
    GIT_BASE=https://opendev.org
    # Optional settings:
    # OCTAVIA_AMP_BASE_OS=centos
    # OCTAVIA_AMP_DISTRIBUTION_RELEASE_ID=9-stream
    # OCTAVIA_AMP_IMAGE_SIZE=3
    # OCTAVIA_LB_TOPOLOGY=ACTIVE_STANDBY
    # OCTAVIA_ENABLE_AMPHORAV2_JOBBOARD=True
    # LIBS_FROM_GIT+=octavia-lib,
    # Enable Logging
    LOGFILE=$DEST/logs/stack.sh.log
    VERBOSE=True
    LOG_COLOR=True
    enable_service rabbit
    enable_plugin neutron $GIT_BASE/openstack/neutron
    # Octavia supports using QoS policies on the VIP port:
    enable_service q-qos
    enable_service placement-api placement-client
    # Octavia services
    enable_plugin octavia $GIT_BASE/openstack/octavia master
    enable_plugin octavia-dashboard $GIT_BASE/openstack/octavia-dashboard
    enable_plugin ovn-octavia-provider $GIT_BASE/openstack/ovn-octavia-provider
    enable_plugin octavia-tempest-plugin $GIT_BASE/openstack/octavia-tempest-plugin
    enable_service octavia o-api o-cw o-hm o-hk o-da
    # If you are enabling barbican for TLS offload in Octavia, include it here.
    # enable_plugin barbican $GIT_BASE/openstack/barbican
    # enable_service barbican
    # Cinder (optional)
    disable_service c-api c-vol c-sch
    # Tempest
    enable_service tempest
    # ===== END localrc =====

.. note::
   For best performance it is highly recommended to use KVM
   virtualization instead of QEMU.
   Also make sure nested virtualization is enabled as documented in
   :ref:`the respective guide <kvm_nested_virt>`.
   By adding ``LIBVIRT_CPU_MODE="host-passthrough"`` to your
   ``local.conf`` you enable the guest VMs to make use of all features your
   host's CPU provides.

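To confirm that the host is KVM-capable and that nested virtualization is
switched on before running stack.sh, a quick check along these lines can
help (a sketch; the kernel paths shown in the comments assume a Linux host
with an Intel or AMD CPU)::

    # Return 0 if the CPU flags in the given cpuinfo file indicate
    # hardware virtualization support (vmx = Intel, svm = AMD).
    has_hw_virt() {
        grep -E -q '(vmx|svm)' "$1"
    }

    # Return 0 if the given "nested" parameter file reports nested
    # virtualization as enabled (kernels expose "Y" or "1").
    nested_enabled() {
        grep -E -q '^(Y|1)$' "$1"
    }

    # Usage on the devstack host (paths are the usual Linux ones):
    #   has_hw_virt /proc/cpuinfo && echo "KVM capable"
    #   nested_enabled /sys/module/kvm_intel/parameters/nested && echo "nested on"
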
Run stack.sh and do some sanity checks::

    sudo su - stack
    cd /opt/stack/devstack
    ./stack.sh
    . ./openrc

    openstack network list  # should show public and private networks

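The sanity check above can also be scripted. A small sketch that asserts
both default networks appear in the listing (the names ``public`` and
``private`` are the devstack defaults)::

    # Check that the expected devstack networks show up in the output of
    # `openstack network list`. Pass that output in as the first argument.
    check_networks() {
        echo "$1" | grep -q ' public '  &&
        echo "$1" | grep -q ' private '
    }

    # Usage (requires a sourced openrc):
    #   check_networks "$(openstack network list)" && echo "networks OK"
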
Create two nova instances that we can use as test http servers::

    # create nova instances on private network
    openstack server create --image $(openstack image list | awk '/ cirros-.*-x86_64-.* / {print $2}') --flavor 1 --nic net-id=$(openstack network list | awk '/ private / {print $2}') node1
    openstack server create --image $(openstack image list | awk '/ cirros-.*-x86_64-.* / {print $2}') --flavor 1 --nic net-id=$(openstack network list | awk '/ private / {print $2}') node2
    openstack server list  # should show the nova instances just created

    # add security group rules to allow ssh etc.
    openstack security group rule create default --protocol icmp
    openstack security group rule create default --protocol tcp --dst-port 22:22
    openstack security group rule create default --protocol tcp --dst-port 80:80

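Later steps need each instance's fixed IP. One way to pull it out of the
CLI output is sketched below; it assumes the server's ``addresses`` field
looks like ``private=10.0.0.5`` (the exact format can vary between
releases, so treat this as a starting point)::

    # Extract the first fixed IP from a server's 'addresses' field,
    # e.g. "private=10.0.0.5" or "private=10.0.0.5, fd12::5".
    first_fixed_ip() {
        echo "${1#*=}" | cut -d, -f1
    }

    # Usage (requires a sourced openrc):
    #   NODE1_IP=$(first_fixed_ip "$(openstack server show node1 -f value -c addresses)")
    first_fixed_ip "private=10.0.0.5, fd12::5"   # prints 10.0.0.5
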
Set up a simple web server on each of these instances. One possibility is to
use the `Golang test server`_ that the Octavia project also uses for CI
testing.
Copy the binary to your instances and start it as shown below
(username 'cirros', password 'gocubsgo')::

    INST_IP=<instance IP>
    scp -O test_server.bin cirros@${INST_IP}:
    ssh -f cirros@${INST_IP} ./test_server.bin -id ${INST_IP}

When started this way, the test server responds to HTTP requests with
its own IP address.

Phase 2: Create your load balancer
----------------------------------

Create your load balancer::

    openstack loadbalancer create --wait --name lb1 --vip-subnet-id private-subnet
    openstack loadbalancer listener create --wait --protocol HTTP --protocol-port 80 --name listener1 lb1
    openstack loadbalancer pool create --wait --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --name pool1
    openstack loadbalancer healthmonitor create --wait --delay 5 --timeout 2 --max-retries 1 --type HTTP pool1
    openstack loadbalancer member create --wait --subnet-id private-subnet --address <web server 1 address> --protocol-port 80 pool1
    openstack loadbalancer member create --wait --subnet-id private-subnet --address <web server 2 address> --protocol-port 80 pool1

Please note: the <web server # address> fields are the IP addresses of the
nova servers created in Phase 1.
Also note that, when using the API directly, all of the above steps can be
performed in a single API call.

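A sketch of what that single-call ("fully populated") request body could
look like is shown below. The field names follow the Octavia v2 API
reference; the subnet UUID and member addresses are placeholders for your
own values, and the token/endpoint variables in the comment are assumed to
be set up separately::

    # Single-call load balancer creation body: listener, pool, health
    # monitor, and members nested inside one loadbalancer object.
    BODY='{
      "loadbalancer": {
        "name": "lb1",
        "vip_subnet_id": "PRIVATE_SUBNET_UUID",
        "listeners": [{
          "name": "listener1",
          "protocol": "HTTP",
          "protocol_port": 80,
          "default_pool": {
            "name": "pool1",
            "lb_algorithm": "ROUND_ROBIN",
            "protocol": "HTTP",
            "healthmonitor": {
              "type": "HTTP", "delay": 5, "timeout": 2, "max_retries": 1
            },
            "members": [
              {"address": "10.0.0.5", "protocol_port": 80},
              {"address": "10.0.0.6", "protocol_port": 80}
            ]
          }
        }]
      }
    }'

    # Usage against the API endpoint (token and endpoint assumed):
    #   curl -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
    #        -d "$BODY" "$OCTAVIA_ENDPOINT/v2/lbaas/loadbalancers"
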
Phase 3: Test your load balancer
--------------------------------

::

    openstack loadbalancer show lb1  # Note the vip_address
    curl http://<vip_address>
    curl http://<vip_address>

This should show the "Welcome to <IP>" message from each member server.
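
To make the round-robin check less manual, a sketch like the following
counts how many distinct response bodies answer a burst of requests; with
two healthy members you would expect two distinct responses (the VIP URL
is a placeholder)::

    # Send n requests to the given URL and count how many distinct
    # response bodies come back.
    count_backends() {
        url=$1
        n=$2
        for _ in $(seq "$n"); do
            curl -s "$url"
            echo
        done | sort -u | wc -l
    }

    # Usage:
    #   count_backends "http://<vip_address>" 6   # expect 2 with two members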


.. _Golang test server: https://opendev.org/openstack/octavia-tempest-plugin/src/branch/master/octavia_tempest_plugin/contrib/test_server