Merge "[Doc] Fix Glance image size limit command"
diff --git a/.zuul.yaml b/.zuul.yaml
index 6ad7148..56acb37 100644
--- a/.zuul.yaml
+++ b/.zuul.yaml
@@ -82,7 +82,7 @@
name: devstack-single-node-fedora-latest
nodes:
- name: controller
- label: fedora-35
+ label: fedora-36
groups:
- name: tempest
nodes:
@@ -99,6 +99,16 @@
- controller
- nodeset:
+ name: devstack-single-node-rockylinux-9
+ nodes:
+ - name: controller
+ label: rockylinux-9
+ groups:
+ - name: tempest
+ nodes:
+ - controller
+
+- nodeset:
name: openstack-two-node
nodes:
- name: controller
@@ -335,7 +345,6 @@
required-projects:
- opendev.org/openstack/devstack
roles:
- - zuul: opendev.org/openstack/devstack-gate
- zuul: opendev.org/openstack/openstack-zuul-jobs
vars:
devstack_localrc:
@@ -460,6 +469,7 @@
dstat: false
etcd3: true
memory_tracker: true
+ file_tracker: true
mysql: true
rabbit: true
group-vars:
@@ -468,6 +478,7 @@
# Shared services
dstat: false
memory_tracker: true
+ file_tracker: true
devstack_localrc:
# Multinode specific settings
HOST_IP: "{{ hostvars[inventory_hostname]['nodepool']['private_ipv4'] }}"
@@ -535,6 +546,7 @@
dstat: false
etcd3: true
memory_tracker: true
+ file_tracker: true
mysql: true
rabbit: true
tls-proxy: true
@@ -584,6 +596,7 @@
# Shared services
dstat: false
memory_tracker: true
+ file_tracker: true
tls-proxy: true
# Nova services
n-cpu: true
@@ -665,6 +678,17 @@
description: Debian Bullseye platform test
nodeset: devstack-single-node-debian-bullseye
timeout: 9000
+ # TODO(danms) n-v until the known issue is resolved
+ voting: false
+ vars:
+ configure_swap_size: 4096
+
+- job:
+ name: devstack-platform-rocky-blue-onyx
+ parent: tempest-full-py3
+ description: Rocky Linux 9 Blue Onyx platform test
+ nodeset: devstack-single-node-rockylinux-9
+ timeout: 9000
vars:
configure_swap_size: 4096
@@ -676,9 +700,6 @@
timeout: 9000
vars:
configure_swap_size: 4096
- devstack_services:
- # Horizon doesn't like py310
- horizon: false
- job:
name: devstack-platform-ubuntu-jammy-ovn-source
@@ -706,8 +727,6 @@
Q_ML2_PLUGIN_MECHANISM_DRIVERS: openvswitch
Q_ML2_TENANT_NETWORK_TYPE: vxlan
devstack_services:
- # Horizon doesn't like py310
- horizon: false
# Disable OVN services
ovn-northd: false
ovn-controller: false
@@ -752,10 +771,6 @@
voting: false
vars:
configure_swap_size: 4096
- # Python 3.10 dependency issues; see
- # https://bugs.launchpad.net/horizon/+bug/1960204
- devstack_services:
- horizon: false
- job:
name: devstack-platform-fedora-latest-virt-preview
@@ -844,6 +859,7 @@
- devstack-platform-fedora-latest
- devstack-platform-centos-9-stream
- devstack-platform-debian-bullseye
+ - devstack-platform-rocky-blue-onyx
- devstack-platform-ubuntu-jammy
- devstack-platform-ubuntu-jammy-ovn-source
- devstack-platform-ubuntu-jammy-ovs
diff --git a/doc/source/configuration.rst b/doc/source/configuration.rst
index d0f2b02..a83b2de 100644
--- a/doc/source/configuration.rst
+++ b/doc/source/configuration.rst
@@ -181,6 +181,9 @@
If the ``*_PASSWORD`` variables are not set here you will be prompted to
enter values for them by ``stack.sh``.
+.. warning:: Only use alphanumeric characters in your passwords, as some
+ services fail to work when using special characters.
+
The network ranges must not overlap with any networks in use on the
host. Overlap is not uncommon as RFC-1918 'private' ranges are commonly
used for both the local networking and Nova's fixed and floating ranges.
@@ -636,7 +639,7 @@
::
$ cd /opt/stack/tempest
- $ tox -efull tempest.scenario.test_network_basic_ops
+ $ tox -e smoke
By default tempest is downloaded and the config file is generated, but the
tempest package is not installed in the system's global site-packages (the
@@ -669,6 +672,35 @@
or ``CINDER_QUOTA_SNAPSHOTS`` to the desired value. (The default for
each is 10.)
+DevStack's Cinder LVM configuration module currently supports both iSCSI and
+NVMe connections, and we can choose which one to use with options
+``CINDER_TARGET_HELPER``, ``CINDER_TARGET_PROTOCOL``, ``CINDER_TARGET_PREFIX``,
+and ``CINDER_TARGET_PORT``.
+
+Defaults use iSCSI with the LIO target manager::
+
+ CINDER_TARGET_HELPER="lioadm"
+ CINDER_TARGET_PROTOCOL="iscsi"
+ CINDER_TARGET_PREFIX="iqn.2010-10.org.openstack:"
+ CINDER_TARGET_PORT=3260
+
+Additionally there are 3 supported transport protocols for NVMe:
+``nvmet_rdma``, ``nvmet_tcp``, and ``nvmet_fc``. When the ``nvmet`` target
+helper is selected, the protocol, prefix, and port defaults change to values
+more sensible for NVMe::
+
+ CINDER_TARGET_HELPER="nvmet"
+ CINDER_TARGET_PROTOCOL="nvmet_rdma"
+ CINDER_TARGET_PREFIX="nvme-subsystem-1"
+ CINDER_TARGET_PORT=4420
+
+When the RDMA transport protocol is selected, DevStack will create a Software
+RoCE device on Cinder nodes on top of the ``HOST_IP_IFACE`` or, if that is
+not defined, on top of the interface holding the ``HOST_IP`` or ``HOST_IPV6``
+address.
+
+This Soft-RoCE device will always be created on the Nova compute side since we
+cannot tell beforehand whether there will be an RDMA connection or not.
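+
+For example, to use NVMe over TCP instead of the default RDMA transport, a
+``local.conf`` could contain::
+
+ CINDER_TARGET_HELPER="nvmet"
+ CINDER_TARGET_PROTOCOL="nvmet_tcp"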
+
Keystone
~~~~~~~~
diff --git a/doc/source/contributor/contributing.rst b/doc/source/contributor/contributing.rst
index 4de238f..8b5a85b 100644
--- a/doc/source/contributor/contributing.rst
+++ b/doc/source/contributor/contributing.rst
@@ -42,8 +42,9 @@
~~~~~~~~~~~~~~~~~~~~~~~~~
All changes proposed to the Devstack require two ``Code-Review +2`` votes from
Devstack core reviewers before one of the core reviewers can approve the patch
-by giving ``Workflow +1`` vote. One exception is for patches to unblock the gate
-which can be approved by single core reviewers.
+by giving ``Workflow +1`` vote. There are two exceptions: patches to unblock
+the gate and patches that do not touch Devstack's core logic (for example,
+old job cleanups) can be approved by a single core reviewer.
Project Team Lead Duties
~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/debugging.rst b/doc/source/debugging.rst
index fd0d9cd..3ca0ad9 100644
--- a/doc/source/debugging.rst
+++ b/doc/source/debugging.rst
@@ -20,6 +20,12 @@
falling (i.e. processes are consuming memory). It also provides
output showing locked (unswappable) memory.
+file_tracker
+------------
+
+The ``file_tracker`` service periodically monitors the number of
+open files in the system.
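+
+The service is not enabled by default; to turn it on, add the following to
+your ``local.conf``::
+
+ enable_service file_tracker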
+
tcpdump
-------
diff --git a/doc/source/guides.rst b/doc/source/guides.rst
index e7ec629..e7b46b6 100644
--- a/doc/source/guides.rst
+++ b/doc/source/guides.rst
@@ -20,7 +20,7 @@
guides/neutron
guides/devstack-with-nested-kvm
guides/nova
- guides/devstack-with-lbaas-v2
+ guides/devstack-with-octavia
guides/devstack-with-ldap
All-In-One Single VM
@@ -69,10 +69,10 @@
Guide to working with nova features :doc:`Nova and devstack <guides/nova>`.
-Configure Load-Balancer Version 2
------------------------------------
+Configure Octavia
+-----------------
-Guide on :doc:`Configure Load-Balancer Version 2 <guides/devstack-with-lbaas-v2>`.
+Guide on :doc:`Configure Octavia <guides/devstack-with-octavia>`.
Deploying DevStack with LDAP
----------------------------
diff --git a/doc/source/guides/devstack-with-lbaas-v2.rst b/doc/source/guides/devstack-with-lbaas-v2.rst
deleted file mode 100644
index 5d96ca7..0000000
--- a/doc/source/guides/devstack-with-lbaas-v2.rst
+++ /dev/null
@@ -1,145 +0,0 @@
-Devstack with Octavia Load Balancing
-====================================
-
-Starting with the OpenStack Pike release, Octavia is now a standalone service
-providing load balancing services for OpenStack.
-
-This guide will show you how to create a devstack with `Octavia API`_ enabled.
-
-.. _Octavia API: https://docs.openstack.org/api-ref/load-balancer/v2/index.html
-
-Phase 1: Create DevStack + 2 nova instances
---------------------------------------------
-
-First, set up a vm of your choice with at least 8 GB RAM and 16 GB disk space,
-make sure it is updated. Install git and any other developer tools you find
-useful.
-
-Install devstack
-
-::
-
- git clone https://opendev.org/openstack/devstack
- cd devstack/tools
- sudo ./create-stack-user.sh
- cd ../..
- sudo mv devstack /opt/stack
- sudo chown -R stack.stack /opt/stack/devstack
-
-This will clone the current devstack code locally, then setup the "stack"
-account that devstack services will run under. Finally, it will move devstack
-into its default location in /opt/stack/devstack.
-
-Edit your ``/opt/stack/devstack/local.conf`` to look like
-
-::
-
- [[local|localrc]]
- enable_plugin octavia https://opendev.org/openstack/octavia
- # If you are enabling horizon, include the octavia dashboard
- # enable_plugin octavia-dashboard https://opendev.org/openstack/octavia-dashboard.git
- # If you are enabling barbican for TLS offload in Octavia, include it here.
- # enable_plugin barbican https://opendev.org/openstack/barbican
-
- # ===== BEGIN localrc =====
- DATABASE_PASSWORD=password
- ADMIN_PASSWORD=password
- SERVICE_PASSWORD=password
- SERVICE_TOKEN=password
- RABBIT_PASSWORD=password
- # Enable Logging
- LOGFILE=$DEST/logs/stack.sh.log
- VERBOSE=True
- LOG_COLOR=True
- # Pre-requisite
- ENABLED_SERVICES=rabbit,mysql,key
- # Horizon - enable for the OpenStack web GUI
- # ENABLED_SERVICES+=,horizon
- # Nova
- ENABLED_SERVICES+=,n-api,n-crt,n-cpu,n-cond,n-sch,n-api-meta,n-sproxy
- ENABLED_SERVICES+=,placement-api,placement-client
- # Glance
- ENABLED_SERVICES+=,g-api
- # Neutron
- ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,neutron
- ENABLED_SERVICES+=,octavia,o-cw,o-hk,o-hm,o-api
- # Cinder
- ENABLED_SERVICES+=,c-api,c-vol,c-sch
- # Tempest
- ENABLED_SERVICES+=,tempest
- # Barbican - Optionally used for TLS offload in Octavia
- # ENABLED_SERVICES+=,barbican
- # ===== END localrc =====
-
-Run stack.sh and do some sanity checks
-
-::
-
- sudo su - stack
- cd /opt/stack/devstack
- ./stack.sh
- . ./openrc
-
- openstack network list # should show public and private networks
-
-Create two nova instances that we can use as test http servers:
-
-::
-
- #create nova instances on private network
- openstack server create --image $(openstack image list | awk '/ cirros-.*-x86_64-.* / {print $2}') --flavor 1 --nic net-id=$(openstack network list | awk '/ private / {print $2}') node1
- openstack server create --image $(openstack image list | awk '/ cirros-.*-x86_64-.* / {print $2}') --flavor 1 --nic net-id=$(openstack network list | awk '/ private / {print $2}') node2
- openstack server list # should show the nova instances just created
-
- #add secgroup rules to allow ssh etc..
- openstack security group rule create default --protocol icmp
- openstack security group rule create default --protocol tcp --dst-port 22:22
- openstack security group rule create default --protocol tcp --dst-port 80:80
-
-Set up a simple web server on each of these instances. ssh into each instance (username 'cirros', password 'cubswin:)' or 'gocubsgo') and run
-
-::
-
- MYIP=$(ifconfig eth0|grep 'inet addr'|awk -F: '{print $2}'| awk '{print $1}')
- while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to $MYIP" | sudo nc -l -p 80 ; done&
-
-Phase 2: Create your load balancer
-----------------------------------
-
-Make sure you have the 'openstack loadbalancer' commands:
-
-::
-
- pip install python-octaviaclient
-
-Create your load balancer:
-
-::
-
- openstack loadbalancer create --name lb1 --vip-subnet-id private-subnet
- openstack loadbalancer show lb1 # Wait for the provisioning_status to be ACTIVE.
- openstack loadbalancer listener create --protocol HTTP --protocol-port 80 --name listener1 lb1
- openstack loadbalancer show lb1 # Wait for the provisioning_status to be ACTIVE.
- openstack loadbalancer pool create --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --name pool1
- openstack loadbalancer show lb1 # Wait for the provisioning_status to be ACTIVE.
- openstack loadbalancer healthmonitor create --delay 5 --timeout 2 --max-retries 1 --type HTTP pool1
- openstack loadbalancer show lb1 # Wait for the provisioning_status to be ACTIVE.
- openstack loadbalancer member create --subnet-id private-subnet --address <web server 1 address> --protocol-port 80 pool1
- openstack loadbalancer show lb1 # Wait for the provisioning_status to be ACTIVE.
- openstack loadbalancer member create --subnet-id private-subnet --address <web server 2 address> --protocol-port 80 pool1
-
-Please note: The <web server # address> fields are the IP addresses of the nova
-servers created in Phase 1.
-Also note, using the API directly you can do all of the above commands in one
-API call.
-
-Phase 3: Test your load balancer
---------------------------------
-
-::
-
- openstack loadbalancer show lb1 # Note the vip_address
- curl http://<vip_address>
- curl http://<vip_address>
-
-This should show the "Welcome to <IP>" message from each member server.
diff --git a/doc/source/guides/devstack-with-nested-kvm.rst b/doc/source/guides/devstack-with-nested-kvm.rst
index 3732f06..ba483e9 100644
--- a/doc/source/guides/devstack-with-nested-kvm.rst
+++ b/doc/source/guides/devstack-with-nested-kvm.rst
@@ -1,3 +1,5 @@
+.. _kvm_nested_virt:
+
=======================================================
Configure DevStack with KVM-based Nested Virtualization
=======================================================
diff --git a/doc/source/guides/devstack-with-octavia.rst b/doc/source/guides/devstack-with-octavia.rst
new file mode 100644
index 0000000..55939f0
--- /dev/null
+++ b/doc/source/guides/devstack-with-octavia.rst
@@ -0,0 +1,144 @@
+Devstack with Octavia Load Balancing
+====================================
+
+Starting with the OpenStack Pike release, Octavia is now a standalone service
+providing load balancing services for OpenStack.
+
+This guide will show you how to create a devstack with `Octavia API`_ enabled.
+
+.. _Octavia API: https://docs.openstack.org/api-ref/load-balancer/v2/index.html
+
+Phase 1: Create DevStack + 2 nova instances
+--------------------------------------------
+
+First, set up a VM of your choice with at least 8 GB RAM and 16 GB disk space
+and make sure it is updated. Install git and any other developer tools you find
+useful.
+
+Install devstack::
+
+ git clone https://opendev.org/openstack/devstack
+ cd devstack/tools
+ sudo ./create-stack-user.sh
+ cd ../..
+ sudo mv devstack /opt/stack
+ sudo chown -R stack.stack /opt/stack/devstack
+
+This will clone the current devstack code locally, then set up the "stack"
+account that devstack services will run under. Finally, it will move devstack
+into its default location in /opt/stack/devstack.
+
+Edit your ``/opt/stack/devstack/local.conf`` to look like::
+
+ [[local|localrc]]
+ # ===== BEGIN localrc =====
+ DATABASE_PASSWORD=password
+ ADMIN_PASSWORD=password
+ SERVICE_PASSWORD=password
+ SERVICE_TOKEN=password
+ RABBIT_PASSWORD=password
+ GIT_BASE=https://opendev.org
+ # Optional settings:
+ # OCTAVIA_AMP_BASE_OS=centos
+ # OCTAVIA_AMP_DISTRIBUTION_RELEASE_ID=9-stream
+ # OCTAVIA_AMP_IMAGE_SIZE=3
+ # OCTAVIA_LB_TOPOLOGY=ACTIVE_STANDBY
+ # OCTAVIA_ENABLE_AMPHORAV2_JOBBOARD=True
+ # LIBS_FROM_GIT+=octavia-lib,
+ # Enable Logging
+ LOGFILE=$DEST/logs/stack.sh.log
+ VERBOSE=True
+ LOG_COLOR=True
+ enable_service rabbit
+ enable_plugin neutron $GIT_BASE/openstack/neutron
+ # Octavia supports using QoS policies on the VIP port:
+ enable_service q-qos
+ enable_service placement-api placement-client
+ # Octavia services
+ enable_plugin octavia $GIT_BASE/openstack/octavia master
+ enable_plugin octavia-dashboard $GIT_BASE/openstack/octavia-dashboard
+ enable_plugin ovn-octavia-provider $GIT_BASE/openstack/ovn-octavia-provider
+ enable_plugin octavia-tempest-plugin $GIT_BASE/openstack/octavia-tempest-plugin
+ enable_service octavia o-api o-cw o-hm o-hk o-da
+ # If you are enabling barbican for TLS offload in Octavia, include it here.
+ # enable_plugin barbican $GIT_BASE/openstack/barbican
+ # enable_service barbican
+ # Cinder (optional)
+ disable_service c-api c-vol c-sch
+ # Tempest
+ enable_service tempest
+ # ===== END localrc =====
+
+.. note::
+ For best performance it is highly recommended to use KVM
+ virtualization instead of QEMU.
+ Also make sure nested virtualization is enabled as documented in
+ :ref:`the respective guide <kvm_nested_virt>`.
+ By adding ``LIBVIRT_CPU_MODE="host-passthrough"`` to your
+ ``local.conf`` you enable the guest VMs to make use of all features your
+ host's CPU provides.
+
+Run stack.sh and do some sanity checks::
+
+ sudo su - stack
+ cd /opt/stack/devstack
+ ./stack.sh
+ . ./openrc
+
+ openstack network list # should show public and private networks
+
+Create two nova instances that we can use as test http servers::
+
+ # create nova instances on private network
+ openstack server create --image $(openstack image list | awk '/ cirros-.*-x86_64-.* / {print $2}') --flavor 1 --nic net-id=$(openstack network list | awk '/ private / {print $2}') node1
+ openstack server create --image $(openstack image list | awk '/ cirros-.*-x86_64-.* / {print $2}') --flavor 1 --nic net-id=$(openstack network list | awk '/ private / {print $2}') node2
+ openstack server list # should show the nova instances just created
+
+ # add secgroup rules to allow ssh etc..
+ openstack security group rule create default --protocol icmp
+ openstack security group rule create default --protocol tcp --dst-port 22:22
+ openstack security group rule create default --protocol tcp --dst-port 80:80
+
+Set up a simple web server on each of these instances. One possibility is to
+use the `Golang test server`_ that the Octavia project also uses for CI
+testing.
+Copy the binary to your instances and start it as shown below
+(username 'cirros', password 'gocubsgo')::
+
+ INST_IP=<instance IP>
+ scp -O test_server.bin cirros@${INST_IP}:
+ ssh -f cirros@${INST_IP} ./test_server.bin -id ${INST_IP}
+
+When started this way, the test server will respond to HTTP requests with
+its own IP.
+
+Phase 2: Create your load balancer
+----------------------------------
+
+Create your load balancer::
+
+ openstack loadbalancer create --wait --name lb1 --vip-subnet-id private-subnet
+ openstack loadbalancer listener create --wait --protocol HTTP --protocol-port 80 --name listener1 lb1
+ openstack loadbalancer pool create --wait --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --name pool1
+ openstack loadbalancer healthmonitor create --wait --delay 5 --timeout 2 --max-retries 1 --type HTTP pool1
+ openstack loadbalancer member create --wait --subnet-id private-subnet --address <web server 1 address> --protocol-port 80 pool1
+ openstack loadbalancer member create --wait --subnet-id private-subnet --address <web server 2 address> --protocol-port 80 pool1
+
+Please note: The <web server # address> fields are the IP addresses of the nova
+servers created in Phase 1.
+Also note that, when using the API directly, all of the above commands can be
+done in a single API call.
+
+Phase 3: Test your load balancer
+--------------------------------
+
+::
+
+ openstack loadbalancer show lb1 # Note the vip_address
+ curl http://<vip_address>
+ curl http://<vip_address>
+
+Since the test servers respond with their own IP address, the requests should
+return the addresses of the two member servers in turn.
+
+
+.. _Golang test server: https://opendev.org/openstack/octavia-tempest-plugin/src/branch/master/octavia_tempest_plugin/contrib/test_server
diff --git a/doc/source/guides/single-machine.rst b/doc/source/guides/single-machine.rst
index 0529e30..a4385b5 100644
--- a/doc/source/guides/single-machine.rst
+++ b/doc/source/guides/single-machine.rst
@@ -106,6 +106,9 @@
- Set the service password. This is used by the OpenStack services
(Nova, Glance, etc) to authenticate with Keystone.
+.. warning:: Only use alphanumeric characters in your passwords, as some
+ services fail to work when using special characters.
+
``local.conf`` should look something like this:
.. code-block:: ini
diff --git a/doc/source/index.rst b/doc/source/index.rst
index 0434d68..ba53c6d 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -38,7 +38,7 @@
Start with a clean and minimal install of a Linux system. DevStack
attempts to support the two latest LTS releases of Ubuntu, the
-latest/current Fedora version, CentOS/RHEL 8 and OpenSUSE.
+latest/current Fedora version, CentOS/RHEL/Rocky Linux 9 and OpenSUSE.
If you do not have a preference, Ubuntu 20.04 (Focal Fossa) is the
most tested, and will probably go the smoothest.
@@ -101,7 +101,10 @@
This is the minimum required config to get started with DevStack.
.. note:: There is a sample :download:`local.conf </assets/local.conf>` file
- under the *samples* directory in the devstack repository.
+ under the *samples* directory in the devstack repository.
+
+.. warning:: Only use alphanumeric characters in your passwords, as some
+ services fail to work when using special characters.
Start the install
-----------------
diff --git a/files/rpms/swift b/files/rpms/swift
index 7d906aa..49a1833 100644
--- a/files/rpms/swift
+++ b/files/rpms/swift
@@ -4,4 +4,4 @@
rsync-daemon
sqlite
xfsprogs
-xinetd # not:f35,rhel9
+xinetd # not:f36,rhel9
diff --git a/functions-common b/functions-common
index 92a6678..0aee5d1 100644
--- a/functions-common
+++ b/functions-common
@@ -418,6 +418,9 @@
os_RELEASE=${VERSION_ID}
os_CODENAME="n/a"
os_VENDOR=$(echo $NAME | tr -d '[:space:]')
+ elif [[ "${ID}${VERSION}" =~ "rocky9" ]]; then
+ os_VENDOR="Rocky"
+ os_RELEASE=${VERSION_ID}
else
_ensure_lsb_release
@@ -466,6 +469,7 @@
"$os_VENDOR" =~ (AlmaLinux) || \
"$os_VENDOR" =~ (Scientific) || \
"$os_VENDOR" =~ (OracleServer) || \
+ "$os_VENDOR" =~ (Rocky) || \
"$os_VENDOR" =~ (Virtuozzo) ]]; then
# Drop the . release as we assume it's compatible
# XXX re-evaluate when we get RHEL10
@@ -513,7 +517,7 @@
# Determine if current distribution is a Fedora-based distribution
-# (Fedora, RHEL, CentOS, etc).
+# (Fedora, RHEL, CentOS, Rocky, etc).
# is_fedora
function is_fedora {
if [[ -z "$os_VENDOR" ]]; then
@@ -523,6 +527,7 @@
[ "$os_VENDOR" = "Fedora" ] || [ "$os_VENDOR" = "Red Hat" ] || \
[ "$os_VENDOR" = "RedHatEnterpriseServer" ] || \
[ "$os_VENDOR" = "RedHatEnterprise" ] || \
+ [ "$os_VENDOR" = "Rocky" ] || \
[ "$os_VENDOR" = "CentOS" ] || [ "$os_VENDOR" = "CentOSStream" ] || \
[ "$os_VENDOR" = "AlmaLinux" ] || \
[ "$os_VENDOR" = "OracleServer" ] || [ "$os_VENDOR" = "Virtuozzo" ]
@@ -875,14 +880,9 @@
# Usage: get_or_create_domain <name> <description>
function get_or_create_domain {
local domain_id
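+    # Note: --or-show makes the create idempotent, returning the existing
+    # domain's id if the domain already exists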
- # Gets domain id
domain_id=$(
- # Gets domain id
- openstack --os-cloud devstack-system-admin domain show $1 \
- -f value -c id 2>/dev/null ||
- # Creates new domain
openstack --os-cloud devstack-system-admin domain create $1 \
- --description "$2" \
+ --description "$2" --or-show \
-f value -c id
)
echo $domain_id
@@ -971,29 +971,22 @@
# Usage: get_or_add_user_project_role <role> <user> <project> [<user_domain> <project_domain>]
function get_or_add_user_project_role {
local user_role_id
+ local domain_args
domain_args=$(_get_domain_args $4 $5)
- # Gets user role id
+ # Note this is idempotent so we are safe across multiple
+ # duplicate calls.
+ openstack --os-cloud devstack-system-admin role add $1 \
+ --user $2 \
+ --project $3 \
+ $domain_args
user_role_id=$(openstack --os-cloud devstack-system-admin role assignment list \
--role $1 \
--user $2 \
--project $3 \
$domain_args \
- | grep '^|\s[a-f0-9]\+' | get_field 1)
- if [[ -z "$user_role_id" ]]; then
- # Adds role to user and get it
- openstack --os-cloud devstack-system-admin role add $1 \
- --user $2 \
- --project $3 \
- $domain_args
- user_role_id=$(openstack --os-cloud devstack-system-admin role assignment list \
- --role $1 \
- --user $2 \
- --project $3 \
- $domain_args \
- | grep '^|\s[a-f0-9]\+' | get_field 1)
- fi
+ -c Role -f value)
echo $user_role_id
}
@@ -1001,23 +994,18 @@
# Usage: get_or_add_user_domain_role <role> <user> <domain>
function get_or_add_user_domain_role {
local user_role_id
- # Gets user role id
+
+ # Note this is idempotent so we are safe across multiple
+ # duplicate calls.
+ openstack --os-cloud devstack-system-admin role add $1 \
+ --user $2 \
+ --domain $3
user_role_id=$(openstack --os-cloud devstack-system-admin role assignment list \
--role $1 \
--user $2 \
--domain $3 \
- | grep '^|\s[a-f0-9]\+' | get_field 1)
- if [[ -z "$user_role_id" ]]; then
- # Adds role to user and get it
- openstack --os-cloud devstack-system-admin role add $1 \
- --user $2 \
- --domain $3
- user_role_id=$(openstack --os-cloud devstack-system-admin role assignment list \
- --role $1 \
- --user $2 \
- --domain $3 \
- | grep '^|\s[a-f0-9]\+' | get_field 1)
- fi
+ -c Role -f value)
+
echo $user_role_id
}
@@ -1056,23 +1044,18 @@
# Usage: get_or_add_group_project_role <role> <group> <project>
function get_or_add_group_project_role {
local group_role_id
- # Gets group role id
+
+ # Note this is idempotent so we are safe across multiple
+ # duplicate calls.
+    openstack --os-cloud devstack-system-admin role add $1 \
+ --group $2 \
+ --project $3
group_role_id=$(openstack --os-cloud devstack-system-admin role assignment list \
--role $1 \
--group $2 \
--project $3 \
- -f value)
- if [[ -z "$group_role_id" ]]; then
- # Adds role to group and get it
- openstack --os-cloud devstack-system-admin role add $1 \
- --group $2 \
- --project $3
- group_role_id=$(openstack --os-cloud devstack-system-admin role assignment list \
- --role $1 \
- --group $2 \
- --project $3 \
- -f value)
- fi
+ -f value -c Role)
+
echo $group_role_id
}
diff --git a/lib/apache b/lib/apache
index 94f3cfc..705776c 100644
--- a/lib/apache
+++ b/lib/apache
@@ -95,7 +95,7 @@
# didn't fix Python 3.10 compatibility before release. Should be
# fixed in uwsgi 4.9.0; can remove this when packages available
# or we drop this release
- elif is_fedora && ! [[ $DISTRO =~ f35 ]]; then
+ elif is_fedora && ! [[ $DISTRO =~ f36 ]]; then
# Note httpd comes with mod_proxy_uwsgi and it is loaded by
# default; the mod_proxy_uwsgi package actually conflicts now.
# See:
diff --git a/lib/cinder b/lib/cinder
index 7dd7539..bf2fe50 100644
--- a/lib/cinder
+++ b/lib/cinder
@@ -43,6 +43,13 @@
GITDIR["python-brick-cinderclient-ext"]=$DEST/python-brick-cinderclient-ext
CINDER_DIR=$DEST/cinder
+if [[ $SERVICE_IP_VERSION == 6 ]]; then
+ CINDER_MY_IP="$HOST_IPV6"
+else
+ CINDER_MY_IP="$HOST_IP"
+fi
+
+
# Cinder virtual environment
if [[ ${USE_VENV} = True ]]; then
PROJECT_VENV["cinder"]=${CINDER_DIR}.venv
@@ -88,13 +95,32 @@
CINDER_VOLUME_CLEAR=${CINDER_VOLUME_CLEAR:-${CINDER_VOLUME_CLEAR_DEFAULT:-zero}}
CINDER_VOLUME_CLEAR=$(echo ${CINDER_VOLUME_CLEAR} | tr '[:upper:]' '[:lower:]')
-# Default to lioadm
-CINDER_ISCSI_HELPER=${CINDER_ISCSI_HELPER:-lioadm}
+
+if [[ -n "$CINDER_ISCSI_HELPER" ]]; then
+ if [[ -z "$CINDER_TARGET_HELPER" ]]; then
+ deprecated 'Using CINDER_ISCSI_HELPER is deprecated, use CINDER_TARGET_HELPER instead'
+ CINDER_TARGET_HELPER="$CINDER_ISCSI_HELPER"
+ else
+ deprecated 'Deprecated CINDER_ISCSI_HELPER is set, but is being overwritten by CINDER_TARGET_HELPER'
+ fi
+fi
+CINDER_TARGET_HELPER=${CINDER_TARGET_HELPER:-lioadm}
+
+if [[ $CINDER_TARGET_HELPER == 'nvmet' ]]; then
+ CINDER_TARGET_PROTOCOL=${CINDER_TARGET_PROTOCOL:-'nvmet_rdma'}
+ CINDER_TARGET_PREFIX=${CINDER_TARGET_PREFIX:-'nvme-subsystem-1'}
+ CINDER_TARGET_PORT=${CINDER_TARGET_PORT:-4420}
+else
+ CINDER_TARGET_PROTOCOL=${CINDER_TARGET_PROTOCOL:-'iscsi'}
+ CINDER_TARGET_PREFIX=${CINDER_TARGET_PREFIX:-'iqn.2010-10.org.openstack:'}
+ CINDER_TARGET_PORT=${CINDER_TARGET_PORT:-3260}
+fi
+
# EL and SUSE should only use lioadm
if is_fedora || is_suse; then
- if [[ ${CINDER_ISCSI_HELPER} != "lioadm" ]]; then
- die "lioadm is the only valid Cinder target_helper config on this platform"
+ if [[ ${CINDER_TARGET_HELPER} != "lioadm" && ${CINDER_TARGET_HELPER} != 'nvmet' ]]; then
+ die "lioadm and nvmet are the only valid Cinder target_helper config on this platform"
fi
fi
@@ -187,7 +213,7 @@
function cleanup_cinder {
# ensure the volume group is cleared up because fails might
# leave dead volumes in the group
- if [ "$CINDER_ISCSI_HELPER" = "tgtadm" ]; then
+ if [ "$CINDER_TARGET_HELPER" = "tgtadm" ]; then
local targets
targets=$(sudo tgtadm --op show --mode target)
if [ $? -ne 0 ]; then
@@ -215,8 +241,14 @@
else
stop_service tgtd
fi
- else
+ elif [ "$CINDER_TARGET_HELPER" = "lioadm" ]; then
sudo cinder-rtstool get-targets | sudo xargs -rn 1 cinder-rtstool delete
+ elif [ "$CINDER_TARGET_HELPER" = "nvmet" ]; then
+ # If we don't disconnect everything vgremove will block
+ sudo nvme disconnect-all
+ sudo nvmetcli clear
+ else
+ die $LINENO "Unknown value \"$CINDER_TARGET_HELPER\" for CINDER_TARGET_HELPER"
fi
if is_service_enabled c-vol && [[ -n "$CINDER_ENABLED_BACKENDS" ]]; then
@@ -267,7 +299,7 @@
iniset $CINDER_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
- iniset $CINDER_CONF DEFAULT target_helper "$CINDER_ISCSI_HELPER"
+ iniset $CINDER_CONF DEFAULT target_helper "$CINDER_TARGET_HELPER"
iniset $CINDER_CONF database connection `database_connection_url cinder`
iniset $CINDER_CONF DEFAULT api_paste_config $CINDER_API_PASTE_INI
iniset $CINDER_CONF DEFAULT rootwrap_config "$CINDER_CONF_DIR/rootwrap.conf"
@@ -275,11 +307,7 @@
iniset $CINDER_CONF DEFAULT osapi_volume_listen $CINDER_SERVICE_LISTEN_ADDRESS
iniset $CINDER_CONF DEFAULT state_path $CINDER_STATE_PATH
iniset $CINDER_CONF oslo_concurrency lock_path $CINDER_STATE_PATH
- if [[ $SERVICE_IP_VERSION == 6 ]]; then
- iniset $CINDER_CONF DEFAULT my_ip "$HOST_IPV6"
- else
- iniset $CINDER_CONF DEFAULT my_ip "$HOST_IP"
- fi
+ iniset $CINDER_CONF DEFAULT my_ip "$CINDER_MY_IP"
iniset $CINDER_CONF key_manager backend cinder.keymgr.conf_key_mgr.ConfKeyManager
iniset $CINDER_CONF key_manager fixed_key $(openssl rand -hex 16)
if [[ -n "$CINDER_ALLOWED_DIRECT_URL_SCHEMES" ]]; then
@@ -473,9 +501,9 @@
function install_cinder {
git_clone $CINDER_REPO $CINDER_DIR $CINDER_BRANCH
setup_develop $CINDER_DIR
- if [[ "$CINDER_ISCSI_HELPER" == "tgtadm" ]]; then
+ if [[ "$CINDER_TARGET_HELPER" == "tgtadm" ]]; then
install_package tgt
- elif [[ "$CINDER_ISCSI_HELPER" == "lioadm" ]]; then
+ elif [[ "$CINDER_TARGET_HELPER" == "lioadm" ]]; then
if is_ubuntu; then
# TODO(frickler): Workaround for https://launchpad.net/bugs/1819819
sudo mkdir -p /etc/target
@@ -484,6 +512,43 @@
else
install_package targetcli
fi
+ elif [[ "$CINDER_TARGET_HELPER" == "nvmet" ]]; then
+ install_package nvme-cli
+
+ # TODO: Remove manual installation of the dependency when the
+ # requirement is added to nvmetcli:
+ # http://lists.infradead.org/pipermail/linux-nvme/2022-July/033576.html
+ if is_ubuntu; then
+ install_package python3-configshell-fb
+ else
+ install_package python3-configshell
+ fi
+ # Install from source because Ubuntu doesn't have the package and some packaged versions didn't work on Python 3
+ pip_install git+git://git.infradead.org/users/hch/nvmetcli.git
+
+ sudo modprobe nvmet
+ sudo modprobe nvme-fabrics
+
+ if [[ $CINDER_TARGET_PROTOCOL == 'nvmet_rdma' ]]; then
+ install_package rdma-core
+ sudo modprobe nvme-rdma
+
+ # Create the Soft-RoCE device over the networking interface
+ local iface=${HOST_IP_IFACE:-`ip -br -$SERVICE_IP_VERSION a | grep $CINDER_MY_IP | awk '{print $1}'`}
+ if [[ -z "$iface" ]]; then
+ die $LINENO "Cannot find interface to bind Soft-RoCE"
+ fi
+
+ if ! sudo rdma link | grep $iface ; then
+ sudo rdma link add rxe_$iface type rxe netdev $iface
+ fi
+
+ elif [[ $CINDER_TARGET_PROTOCOL == 'nvmet_tcp' ]]; then
+ sudo modprobe nvme-tcp
+
+ else # 'nvmet_fc'
+ sudo modprobe nvme-fc
+ fi
fi
}
@@ -520,7 +585,7 @@
service_port=$CINDER_SERVICE_PORT_INT
service_protocol="http"
fi
- if [ "$CINDER_ISCSI_HELPER" = "tgtadm" ]; then
+ if [ "$CINDER_TARGET_HELPER" = "tgtadm" ]; then
if is_service_enabled c-vol; then
# Delete any old stack.conf
sudo rm -f /etc/tgt/conf.d/stack.conf
diff --git a/lib/cinder_backends/fake_gate b/lib/cinder_backends/fake_gate
index 3ffd9a6..3b9f1d1 100644
--- a/lib/cinder_backends/fake_gate
+++ b/lib/cinder_backends/fake_gate
@@ -50,7 +50,7 @@
iniset $CINDER_CONF $be_name volume_backend_name $be_name
iniset $CINDER_CONF $be_name volume_driver "cinder.tests.fake_driver.FakeGateDriver"
iniset $CINDER_CONF $be_name volume_group $VOLUME_GROUP_NAME-$be_name
- iniset $CINDER_CONF $be_name target_helper "$CINDER_ISCSI_HELPER"
+ iniset $CINDER_CONF $be_name target_helper "$CINDER_TARGET_HELPER"
iniset $CINDER_CONF $be_name lvm_type "$CINDER_LVM_TYPE"
if [[ "$CINDER_VOLUME_CLEAR" == "non" ]]; then
diff --git a/lib/cinder_backends/lvm b/lib/cinder_backends/lvm
index e03ef14..4286511 100644
--- a/lib/cinder_backends/lvm
+++ b/lib/cinder_backends/lvm
@@ -50,7 +50,10 @@
iniset $CINDER_CONF $be_name volume_backend_name $be_name
iniset $CINDER_CONF $be_name volume_driver "cinder.volume.drivers.lvm.LVMVolumeDriver"
iniset $CINDER_CONF $be_name volume_group $VOLUME_GROUP_NAME-$be_name
- iniset $CINDER_CONF $be_name target_helper "$CINDER_ISCSI_HELPER"
+ iniset $CINDER_CONF $be_name target_helper "$CINDER_TARGET_HELPER"
+ iniset $CINDER_CONF $be_name target_protocol "$CINDER_TARGET_PROTOCOL"
+ iniset $CINDER_CONF $be_name target_port "$CINDER_TARGET_PORT"
+ iniset $CINDER_CONF $be_name target_prefix "$CINDER_TARGET_PREFIX"
iniset $CINDER_CONF $be_name lvm_type "$CINDER_LVM_TYPE"
iniset $CINDER_CONF $be_name volume_clear "$CINDER_VOLUME_CLEAR"
}
diff --git a/lib/dstat b/lib/dstat
index eb03ae0..870c901 100644
--- a/lib/dstat
+++ b/lib/dstat
@@ -40,12 +40,18 @@
if is_service_enabled peakmem_tracker; then
die $LINENO "The peakmem_tracker service has been removed, use memory_tracker instead"
fi
+
+ # To enable file_tracker add:
+ # enable_service file_tracker
+ # to your localrc
+ run_process file_tracker "$TOP_DIR/tools/file_tracker.sh"
}
# stop_dstat() stop dstat process
function stop_dstat {
stop_process dstat
stop_process memory_tracker
+ stop_process file_tracker
}
# Restore xtrace
diff --git a/lib/lvm b/lib/lvm
index d3f6bf1..57ffb96 100644
--- a/lib/lvm
+++ b/lib/lvm
@@ -130,7 +130,7 @@
local size=$2
# Start the tgtd service on Fedora and SUSE if tgtadm is used
- if is_fedora || is_suse && [[ "$CINDER_ISCSI_HELPER" = "tgtadm" ]]; then
+ if is_fedora || is_suse && [[ "$CINDER_TARGET_HELPER" = "tgtadm" ]]; then
start_service tgtd
fi
@@ -138,10 +138,14 @@
_create_lvm_volume_group $vg $size
# Remove iscsi targets
- if [ "$CINDER_ISCSI_HELPER" = "lioadm" ]; then
+ if [ "$CINDER_TARGET_HELPER" = "lioadm" ]; then
sudo cinder-rtstool get-targets | sudo xargs -rn 1 cinder-rtstool delete
- else
+ elif [ "$CINDER_TARGET_HELPER" = "tgtadm" ]; then
sudo tgtadm --op show --mode target | awk '/Target/ {print $3}' | sudo xargs -r -n1 tgt-admin --delete
+ elif [ "$CINDER_TARGET_HELPER" = "nvmet" ]; then
+ # If we don't disconnect everything vgremove will block
+ sudo nvme disconnect-all
+ sudo nvmetcli clear
fi
_clean_lvm_volume_group $vg
}
diff --git a/lib/neutron_plugins/ovn_agent b/lib/neutron_plugins/ovn_agent
index 8eb2993..e64224c 100644
--- a/lib/neutron_plugins/ovn_agent
+++ b/lib/neutron_plugins/ovn_agent
@@ -244,11 +244,12 @@
local cmd="$2"
local stop_cmd="$3"
local group=$4
- local user=${5:-$STACK_USER}
+ local user=$5
+ local rundir=${6:-$OVS_RUNDIR}
local systemd_service="devstack@$service.service"
local unit_file="$SYSTEMD_DIR/$systemd_service"
- local environment="OVN_RUNDIR=$OVS_RUNDIR OVN_DBDIR=$OVN_DATADIR OVN_LOGDIR=$LOGDIR OVS_RUNDIR=$OVS_RUNDIR OVS_DBDIR=$OVS_DATADIR OVS_LOGDIR=$LOGDIR"
+ local environment="OVN_RUNDIR=$OVN_RUNDIR OVN_DBDIR=$OVN_DATADIR OVN_LOGDIR=$LOGDIR OVS_RUNDIR=$OVS_RUNDIR OVS_DBDIR=$OVS_DATADIR OVS_LOGDIR=$LOGDIR"
echo "Starting $service executed command": $cmd
@@ -264,14 +265,14 @@
_start_process $systemd_service
- local testcmd="test -e $OVS_RUNDIR/$service.pid"
+ local testcmd="test -e $rundir/$service.pid"
test_with_retry "$testcmd" "$service did not start" $SERVICE_TIMEOUT 1
local service_ctl_file
- service_ctl_file=$(ls $OVS_RUNDIR | grep $service | grep ctl)
+ service_ctl_file=$(ls $rundir | grep $service | grep ctl)
if [ -z "$service_ctl_file" ]; then
die $LINENO "ctl file for service $service is not present."
fi
- sudo ovs-appctl -t $OVS_RUNDIR/$service_ctl_file vlog/set console:off syslog:info file:info
+ sudo ovs-appctl -t $rundir/$service_ctl_file vlog/set console:off syslog:info file:info
}
function clone_repository {
@@ -370,10 +371,6 @@
sudo mkdir -p $OVS_RUNDIR
sudo chown $(whoami) $OVS_RUNDIR
- # NOTE(lucasagomes): To keep things simpler, let's reuse the same
- # RUNDIR for both OVS and OVN. This way we avoid having to specify the
- # --db option in the ovn-{n,s}bctl commands while playing with DevStack
- sudo ln -s $OVS_RUNDIR $OVN_RUNDIR
if [[ "$OVN_BUILD_FROM_SOURCE" == "True" ]]; then
# If OVS is already installed, remove it, because we're about to
@@ -616,12 +613,12 @@
dbcmd+=" --remote=db:hardware_vtep,Global,managers $OVS_DATADIR/vtep.db"
fi
dbcmd+=" $OVS_DATADIR/conf.db"
- _run_process ovsdb-server "$dbcmd" "" "$STACK_GROUP" "root"
+ _run_process ovsdb-server "$dbcmd" "" "$STACK_GROUP" "root" "$OVS_RUNDIR"
# Note: ovn-controller will create and configure br-int once it is started.
# So, no need to create it now because nothing depends on that bridge here.
local ovscmd="$OVS_SBINDIR/ovs-vswitchd --log-file --pidfile --detach"
- _run_process ovs-vswitchd "$ovscmd" "" "$STACK_GROUP" "root"
+ _run_process ovs-vswitchd "$ovscmd" "" "$STACK_GROUP" "root" "$OVS_RUNDIR"
else
_start_process "$OVSDB_SERVER_SERVICE"
_start_process "$OVS_VSWITCHD_SERVICE"
@@ -660,7 +657,7 @@
enable_service ovs-vtep
local vtepcmd="$OVS_SCRIPTDIR/ovs-vtep --log-file --pidfile --detach br-v"
- _run_process ovs-vtep "$vtepcmd" "" "$STACK_GROUP" "root"
+ _run_process ovs-vtep "$vtepcmd" "" "$STACK_GROUP" "root" "$OVS_RUNDIR"
vtep-ctl set-manager tcp:$HOST_IP:6640
fi
@@ -704,26 +701,26 @@
local cmd="/bin/bash $SCRIPTDIR/ovn-ctl --no-monitor start_northd"
local stop_cmd="/bin/bash $SCRIPTDIR/ovn-ctl stop_northd"
- _run_process ovn-northd "$cmd" "$stop_cmd" "$STACK_GROUP" "root"
+ _run_process ovn-northd "$cmd" "$stop_cmd" "$STACK_GROUP" "root" "$OVN_RUNDIR"
else
_start_process "$OVN_NORTHD_SERVICE"
fi
# Wait for the service to be ready
# Check for socket and db files for both OVN NB and SB
- wait_for_sock_file $OVS_RUNDIR/ovnnb_db.sock
- wait_for_sock_file $OVS_RUNDIR/ovnsb_db.sock
+ wait_for_sock_file $OVN_RUNDIR/ovnnb_db.sock
+ wait_for_sock_file $OVN_RUNDIR/ovnsb_db.sock
wait_for_db_file $OVN_DATADIR/ovnnb_db.db
wait_for_db_file $OVN_DATADIR/ovnsb_db.db
if is_service_enabled tls-proxy; then
- sudo ovn-nbctl --db=unix:$OVS_RUNDIR/ovnnb_db.sock set-ssl $INT_CA_DIR/private/$DEVSTACK_CERT_NAME.key $INT_CA_DIR/$DEVSTACK_CERT_NAME.crt $INT_CA_DIR/ca-chain.pem
- sudo ovn-sbctl --db=unix:$OVS_RUNDIR/ovnsb_db.sock set-ssl $INT_CA_DIR/private/$DEVSTACK_CERT_NAME.key $INT_CA_DIR/$DEVSTACK_CERT_NAME.crt $INT_CA_DIR/ca-chain.pem
+ sudo ovn-nbctl --db=unix:$OVN_RUNDIR/ovnnb_db.sock set-ssl $INT_CA_DIR/private/$DEVSTACK_CERT_NAME.key $INT_CA_DIR/$DEVSTACK_CERT_NAME.crt $INT_CA_DIR/ca-chain.pem
+ sudo ovn-sbctl --db=unix:$OVN_RUNDIR/ovnsb_db.sock set-ssl $INT_CA_DIR/private/$DEVSTACK_CERT_NAME.key $INT_CA_DIR/$DEVSTACK_CERT_NAME.crt $INT_CA_DIR/ca-chain.pem
fi
- sudo ovn-nbctl --db=unix:$OVS_RUNDIR/ovnnb_db.sock set-connection p${OVN_PROTO}:6641:$SERVICE_LISTEN_ADDRESS -- set connection . inactivity_probe=60000
- sudo ovn-sbctl --db=unix:$OVS_RUNDIR/ovnsb_db.sock set-connection p${OVN_PROTO}:6642:$SERVICE_LISTEN_ADDRESS -- set connection . inactivity_probe=60000
- sudo ovs-appctl -t $OVS_RUNDIR/ovnnb_db.ctl vlog/set console:off syslog:$OVN_DBS_LOG_LEVEL file:$OVN_DBS_LOG_LEVEL
- sudo ovs-appctl -t $OVS_RUNDIR/ovnsb_db.ctl vlog/set console:off syslog:$OVN_DBS_LOG_LEVEL file:$OVN_DBS_LOG_LEVEL
+ sudo ovn-nbctl --db=unix:$OVN_RUNDIR/ovnnb_db.sock set-connection p${OVN_PROTO}:6641:$SERVICE_LISTEN_ADDRESS -- set connection . inactivity_probe=60000
+ sudo ovn-sbctl --db=unix:$OVN_RUNDIR/ovnsb_db.sock set-connection p${OVN_PROTO}:6642:$SERVICE_LISTEN_ADDRESS -- set connection . inactivity_probe=60000
+ sudo ovs-appctl -t $OVN_RUNDIR/ovnnb_db.ctl vlog/set console:off syslog:$OVN_DBS_LOG_LEVEL file:$OVN_DBS_LOG_LEVEL
+ sudo ovs-appctl -t $OVN_RUNDIR/ovnsb_db.ctl vlog/set console:off syslog:$OVN_DBS_LOG_LEVEL file:$OVN_DBS_LOG_LEVEL
fi
if is_service_enabled ovn-controller ; then
@@ -731,7 +728,7 @@
local cmd="/bin/bash $SCRIPTDIR/ovn-ctl --no-monitor start_controller"
local stop_cmd="/bin/bash $SCRIPTDIR/ovn-ctl stop_controller"
- _run_process ovn-controller "$cmd" "$stop_cmd" "$STACK_GROUP" "root"
+ _run_process ovn-controller "$cmd" "$stop_cmd" "$STACK_GROUP" "root" "$OVN_RUNDIR"
else
_start_process "$OVN_CONTROLLER_SERVICE"
fi
@@ -740,7 +737,7 @@
if is_service_enabled ovn-controller-vtep ; then
if [[ "$OVN_BUILD_FROM_SOURCE" == "True" ]]; then
local cmd="$OVS_BINDIR/ovn-controller-vtep --log-file --pidfile --detach --ovnsb-db=$OVN_SB_REMOTE"
- _run_process ovn-controller-vtep "$cmd" "" "$STACK_GROUP" "root"
+ _run_process ovn-controller-vtep "$cmd" "" "$STACK_GROUP" "root" "$OVN_RUNDIR"
else
_start_process "$OVN_CONTROLLER_VTEP_SERVICE"
fi
diff --git a/lib/neutron_plugins/ovs_source b/lib/neutron_plugins/ovs_source
index 164d574..ea71e60 100644
--- a/lib/neutron_plugins/ovs_source
+++ b/lib/neutron_plugins/ovs_source
@@ -33,9 +33,9 @@
local fatal=$2
if [ "$(trueorfalse True fatal)" == "True" ]; then
- sudo modprobe $module || (dmesg && die $LINENO "FAILED TO LOAD $module")
+ sudo modprobe $module || (sudo dmesg && die $LINENO "FAILED TO LOAD $module")
else
- sudo modprobe $module || (echo "FAILED TO LOAD $module" && dmesg)
+ sudo modprobe $module || (echo "FAILED TO LOAD $module" && sudo dmesg)
fi
}
@@ -103,7 +103,7 @@
function load_ovs_kernel_modules {
load_module openvswitch
load_module vport-geneve False
- dmesg | tail
+ sudo dmesg | tail
}
# reload_ovs_kernel_modules() - reload openvswitch kernel module
diff --git a/lib/nova b/lib/nova
index 6de1d33..e223e0f 100644
--- a/lib/nova
+++ b/lib/nova
@@ -97,6 +97,12 @@
METADATA_SERVICE_PORT=${METADATA_SERVICE_PORT:-8775}
NOVA_ENABLE_CACHE=${NOVA_ENABLE_CACHE:-True}
+if [[ $SERVICE_IP_VERSION == 6 ]]; then
+ NOVA_MY_IP="$HOST_IPV6"
+else
+ NOVA_MY_IP="$HOST_IP"
+fi
+
# Option to enable/disable config drive
# NOTE: Set ``FORCE_CONFIG_DRIVE="False"`` to turn OFF config drive
FORCE_CONFIG_DRIVE=${FORCE_CONFIG_DRIVE:-"False"}
@@ -205,6 +211,9 @@
done
sudo iscsiadm --mode node --op delete || true
+ # Disconnect all nvmeof connections
+ sudo nvme disconnect-all || true
+
# Clean out the instances directory.
sudo rm -rf $NOVA_INSTANCES_PATH/*
fi
@@ -292,6 +301,7 @@
fi
fi
+ # Due to cinder bug #1966513 we ALWAYS need an initiator name for LVM
# Ensure each compute host uses a unique iSCSI initiator
echo InitiatorName=$(iscsi-iname) | sudo tee /etc/iscsi/initiatorname.iscsi
@@ -312,8 +322,28 @@
# not work under FIPS.
iniset -sudo /etc/iscsi/iscsid.conf DEFAULT "node.session.auth.chap_algs" "SHA3-256,SHA256"
- # ensure that iscsid is started, even when disabled by default
- restart_service iscsid
+ if [[ $CINDER_TARGET_HELPER != 'nvmet' ]]; then
+ # ensure that iscsid is started, even when disabled by default
+ restart_service iscsid
+
+    # For NVMe-oF we need different packages that may not be present
+ else
+ install_package nvme-cli
+ sudo modprobe nvme-fabrics
+
+ # Ensure NVMe is ready and create the Soft-RoCE device over the networking interface
+ if [[ $CINDER_TARGET_PROTOCOL == 'nvmet_rdma' ]]; then
+ sudo modprobe nvme-rdma
+ iface=${HOST_IP_IFACE:-`ip -br -$SERVICE_IP_VERSION a | grep $NOVA_MY_IP | awk '{print $1}'`}
+ if ! sudo rdma link | grep $iface ; then
+ sudo rdma link add rxe_$iface type rxe netdev $iface
+ fi
+ elif [[ $CINDER_TARGET_PROTOCOL == 'nvmet_tcp' ]]; then
+ sudo modprobe nvme-tcp
+ else # 'nvmet_fc'
+ sudo modprobe nvme-fc
+ fi
+ fi
fi
# Rebuild the config file from scratch
@@ -404,11 +434,7 @@
iniset $NOVA_CONF filter_scheduler enabled_filters "$NOVA_FILTERS"
iniset $NOVA_CONF scheduler workers "$API_WORKERS"
iniset $NOVA_CONF neutron default_floating_pool "$PUBLIC_NETWORK_NAME"
- if [[ $SERVICE_IP_VERSION == 6 ]]; then
- iniset $NOVA_CONF DEFAULT my_ip "$HOST_IPV6"
- else
- iniset $NOVA_CONF DEFAULT my_ip "$HOST_IP"
- fi
+ iniset $NOVA_CONF DEFAULT my_ip "$NOVA_MY_IP"
iniset $NOVA_CONF DEFAULT instance_name_template "${INSTANCE_NAME_PREFIX}%08x"
iniset $NOVA_CONF DEFAULT osapi_compute_listen "$NOVA_SERVICE_LISTEN_ADDRESS"
iniset $NOVA_CONF DEFAULT metadata_listen "$NOVA_SERVICE_LISTEN_ADDRESS"
@@ -885,8 +911,23 @@
# a websockets/html5 or flash powered VNC console for vm instances
NOVNC_FROM_PACKAGE=$(trueorfalse False NOVNC_FROM_PACKAGE)
if [ "$NOVNC_FROM_PACKAGE" = "True" ]; then
+ # Installing novnc on Debian bullseye breaks the global pip
+ # install. This happens because novnc pulls in distro cryptography
+    # which will be preferred by distro pip, but if anything has
+ # installed pyOpenSSL from pypi (keystone) that is not compatible
+ # with distro cryptography. Fix this by installing
+ # python3-openssl (pyOpenSSL) from the distro which pip will prefer
+ # on Debian. Ubuntu has inverse problems so we only do this for
+ # Debian.
+ local novnc_packages
+ novnc_packages="novnc"
+ GetOSVersion
+ if [[ "$os_VENDOR" = "Debian" ]] ; then
+ novnc_packages="$novnc_packages python3-openssl"
+ fi
+
NOVNC_WEB_DIR=/usr/share/novnc
- install_package novnc
+ install_package $novnc_packages
else
NOVNC_WEB_DIR=$DEST/novnc
git_clone $NOVNC_REPO $NOVNC_WEB_DIR $NOVNC_BRANCH
diff --git a/stack.sh b/stack.sh
index c99189e..cc90fca 100755
--- a/stack.sh
+++ b/stack.sh
@@ -12,7 +12,7 @@
# a multi-node developer install.
# To keep this script simple we assume you are running on a recent **Ubuntu**
-# (Bionic or newer), **Fedora** (F24 or newer), or **CentOS/RHEL**
+# (Bionic or newer), **Fedora** (F36 or newer), or **CentOS/RHEL**
# (7 or newer) machine. (It may work on other platforms but support for those
# platforms is left to those who added them to DevStack.) It should work in
# a VM or physical server. Additionally, we maintain a list of ``deb`` and
@@ -229,7 +229,7 @@
# Warn users who aren't on an explicitly supported distro, but allow them to
# override check and attempt installation with ``FORCE=yes ./stack``
-SUPPORTED_DISTROS="bullseye|focal|jammy|f35|opensuse-15.2|opensuse-tumbleweed|rhel8|rhel9"
+SUPPORTED_DISTROS="bullseye|focal|jammy|f36|opensuse-15.2|opensuse-tumbleweed|rhel8|rhel9"
if [[ ! ${DISTRO} =~ $SUPPORTED_DISTROS ]]; then
echo "WARNING: this script has not been tested on $DISTRO"
diff --git a/stackrc b/stackrc
index b3130e5..a05d1e5 100644
--- a/stackrc
+++ b/stackrc
@@ -243,7 +243,7 @@
# Setting the variable to 'ALL' will activate the download for all
# libraries.
-DEVSTACK_SERIES="zed"
+DEVSTACK_SERIES="2023.1"
##############
#
diff --git a/tools/file_tracker.sh b/tools/file_tracker.sh
new file mode 100755
index 0000000..9c31b30
--- /dev/null
+++ b/tools/file_tracker.sh
@@ -0,0 +1,47 @@
+#!/bin/bash
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+set -o errexit
+
+# time to sleep between checks
+SLEEP_TIME=20
+
+function tracker {
+ echo "Number of open files | Number of open files not in use | Maximum number of files allowed to be opened"
+ while true; do
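+        # /proc/sys/fs/file-nr reports: allocated file handles,
+        # allocated-but-unused handles, and the system-wide maximum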
+ cat /proc/sys/fs/file-nr
+ sleep $SLEEP_TIME
+ done
+}
+
+function usage {
+ echo "Usage: $0 [-x] [-s N]" 1>&2
+ exit 1
+}
+
+while getopts ":s:x" opt; do
+ case $opt in
+ s)
+ SLEEP_TIME=$OPTARG
+ ;;
+ x)
+ set -o xtrace
+ ;;
+ *)
+ usage
+ ;;
+ esac
+done
+
+tracker