======================================
Using DevStack with neutron Networking
======================================

This guide will walk you through using OpenStack neutron with the ML2
plugin and the Open vSwitch mechanism driver.


.. _single-interface-ovs:

Using Neutron with a Single Interface
=====================================

In some instances, like on a developer laptop, there is only one
network interface that is available. In this scenario, the physical
interface is added to the Open vSwitch bridge, and the IP address of
the laptop is migrated onto the bridge interface. That way, the
physical interface can be used to transmit self service project
network traffic, the OpenStack API traffic, and management traffic.
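
For illustration, after `stack.sh` completes on such a host, the IP
address that previously lived on the physical NIC appears on the OVS
bridge instead. The bridge name `br-ex` and the address below assume
the sample configuration shown later in this section; the output is
abbreviated and will vary on your system.

::

    stack@devstack-1:~$ ip -4 addr show dev br-ex
        inet 172.18.161.6/24 scope global br-ex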


.. warning::

   When using a single interface networking setup, there will be a
   temporary network outage as your IP address is moved from the
   physical NIC of your machine to the OVS bridge. If you are SSH'd
   into the machine from another computer, there is a risk of being
   disconnected from your ssh session (due to ARP cache
   invalidation), which would stop stack.sh or leave it in an
   unfinished state. In these cases, start stack.sh inside its own
   screen session so it can continue to run.
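
One way to do this is sketched below; it assumes the `screen` package
is installed (`tmux` would work just as well):

::

    stack@devstack-1:~/devstack$ screen -S stack
    stack@devstack-1:~/devstack$ ./stack.sh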


Physical Network Setup
----------------------

In most cases where DevStack is being deployed with a single
interface, there is a hardware router that is being used for external
connectivity and DHCP. The developer machine is connected to this
network and is on a shared subnet with other machines. The
`local.conf` exhibited here assumes that 1500 is a reasonable MTU to
use on that network.

.. nwdiag::

   nwdiag {
      inet [ shape = cloud ];
      router;
      inet -- router;

      network hardware_network {
         address = "172.18.161.0/24"
         router [ address = "172.18.161.1" ];
         devstack-1 [ address = "172.18.161.6" ];
      }
   }


DevStack Configuration
----------------------

The following is a complete `local.conf` for the host named
`devstack-1`. It will run all the API and services, as well as
serve as a hypervisor for guest instances.

::

    [[local|localrc]]
    HOST_IP=172.18.161.6
    SERVICE_HOST=172.18.161.6
    MYSQL_HOST=172.18.161.6
    RABBIT_HOST=172.18.161.6
    GLANCE_HOSTPORT=172.18.161.6:9292
    ADMIN_PASSWORD=secret
    DATABASE_PASSWORD=secret
    RABBIT_PASSWORD=secret
    SERVICE_PASSWORD=secret

    ## Neutron options
    Q_USE_SECGROUP=True
    FLOATING_RANGE="172.18.161.0/24"
    IPV4_ADDRS_SAFE_TO_USE="10.0.0.0/22"
    Q_FLOATING_ALLOCATION_POOL=start=172.18.161.250,end=172.18.161.254
    PUBLIC_NETWORK_GATEWAY="172.18.161.1"
    PUBLIC_INTERFACE=eth0

    # Open vSwitch provider networking configuration
    Q_USE_PROVIDERNET_FOR_PUBLIC=True
    OVS_PHYSICAL_BRIDGE=br-ex
    PUBLIC_BRIDGE=br-ex
    OVS_BRIDGE_MAPPINGS=public:br-ex


Adding Additional Compute Nodes
-------------------------------

Let's suppose that after installing DevStack on the first host, you
also want to do multinode testing and networking.

Physical Network Setup
~~~~~~~~~~~~~~~~~~~~~~

.. nwdiag::

   nwdiag {
      inet [ shape = cloud ];
      router;
      inet -- router;

      network hardware_network {
         address = "172.18.161.0/24"
         router [ address = "172.18.161.1" ];
         devstack-1 [ address = "172.18.161.6" ];
         devstack-2 [ address = "172.18.161.7" ];
      }
   }


After DevStack installs and configures Neutron, traffic from guest VMs
flows out of `devstack-2` (the compute node) and is encapsulated in a
VXLAN tunnel back to `devstack-1` (the control node) where the L3
agent is running.

::

    stack@devstack-2:~/devstack$ sudo ovs-vsctl show
    8992d965-0ba0-42fd-90e9-20ecc528bc29
        Bridge br-int
            fail_mode: secure
            Port br-int
                Interface br-int
                    type: internal
            Port patch-tun
                Interface patch-tun
                    type: patch
                    options: {peer=patch-int}
        Bridge br-tun
            fail_mode: secure
            Port "vxlan-c0a801f6"
                Interface "vxlan-c0a801f6"
                    type: vxlan
                    options: {df_default="true", in_key=flow, local_ip="172.18.161.7", out_key=flow, remote_ip="172.18.161.6"}
            Port patch-int
                Interface patch-int
                    type: patch
                    options: {peer=patch-tun}
            Port br-tun
                Interface br-tun
                    type: internal
        ovs_version: "2.0.2"

Open vSwitch on the control node, where the L3 agent runs, is
configured to de-encapsulate traffic from compute nodes, then forward
it over the `br-ex` bridge, where `eth0` is attached.

::

    stack@devstack-1:~/devstack$ sudo ovs-vsctl show
    422adeea-48d1-4a1f-98b1-8e7239077964
        Bridge br-tun
            fail_mode: secure
            Port br-tun
                Interface br-tun
                    type: internal
            Port patch-int
                Interface patch-int
                    type: patch
                    options: {peer=patch-tun}
            Port "vxlan-c0a801d8"
                Interface "vxlan-c0a801d8"
                    type: vxlan
                    options: {df_default="true", in_key=flow, local_ip="172.18.161.6", out_key=flow, remote_ip="172.18.161.7"}
        Bridge br-ex
            Port phy-br-ex
                Interface phy-br-ex
                    type: patch
                    options: {peer=int-br-ex}
            Port "eth0"
                Interface "eth0"
            Port br-ex
                Interface br-ex
                    type: internal
        Bridge br-int
            fail_mode: secure
            Port "tapce66332d-ea"
                tag: 1
                Interface "tapce66332d-ea"
                    type: internal
            Port "qg-65e5a4b9-15"
                tag: 2
                Interface "qg-65e5a4b9-15"
                    type: internal
            Port "qr-33e5e471-88"
                tag: 1
                Interface "qr-33e5e471-88"
                    type: internal
            Port "qr-acbe9951-70"
                tag: 1
                Interface "qr-acbe9951-70"
                    type: internal
            Port br-int
                Interface br-int
                    type: internal
            Port patch-tun
                Interface patch-tun
                    type: patch
                    options: {peer=patch-int}
            Port int-br-ex
                Interface int-br-ex
                    type: patch
                    options: {peer=phy-br-ex}
        ovs_version: "2.0.2"

`br-int` is a bridge that the Open vSwitch mechanism driver creates,
which is used as the "integration bridge" where ports are created and
plugged into the virtual switching fabric. `br-ex` is an OVS bridge
that is used to connect physical ports (like `eth0`), so that floating
IP traffic for project networks can be received from the physical
network infrastructure (and the internet) and routed to self service
project network ports. `br-tun` is a tunnel bridge that is used to
connect OpenStack nodes (like `devstack-2`) together. This bridge is
used so that project network traffic, using the VXLAN tunneling
protocol, flows between each compute node where project instances run.
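
As a quick sanity check, these three bridges can be listed on the
control node; the output below is simply a condensed view of the
`ovs-vsctl show` listing above.

::

    stack@devstack-1:~/devstack$ sudo ovs-vsctl list-br
    br-ex
    br-int
    br-tun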



DevStack Compute Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The host `devstack-2` has a very minimal `local.conf`.

::

    [[local|localrc]]
    HOST_IP=172.18.161.7
    SERVICE_HOST=172.18.161.6
    MYSQL_HOST=172.18.161.6
    RABBIT_HOST=172.18.161.6
    GLANCE_HOSTPORT=172.18.161.6:9292
    ADMIN_PASSWORD=secret
    MYSQL_PASSWORD=secret
    RABBIT_PASSWORD=secret
    SERVICE_PASSWORD=secret

    ## Neutron options
    PUBLIC_INTERFACE=eth0
    ENABLED_SERVICES=n-cpu,rabbit,q-agt

Network traffic from `eth0` on the compute nodes is then NAT'd by the
controller node that runs Neutron's `neutron-l3-agent` and provides L3
connectivity.
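
One way to confirm that the controller node is providing this L3
service is to list the network namespaces that the Neutron agents
create there. This is only an illustrative sketch; the UUID portions
of the namespace names below are placeholders and will differ on your
system.

::

    stack@devstack-1:~/devstack$ sudo ip netns list
    qrouter-<router-uuid>
    qdhcp-<network-uuid>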


Neutron Networking with Open vSwitch and Provider Networks
============================================================

In some instances, it is desirable to use neutron's provider
networking extension, so that networks that are configured on an
external router can be utilized by neutron, and instances created via
Nova can attach to the network managed by the external router.

For example, in some lab environments, a hardware router has been
pre-configured by another party, and an OpenStack developer has been
given a VLAN tag and IP address range, so that instances created via
DevStack will use the external router for L3 connectivity, as opposed
to the neutron L3 service.

Physical Network Setup
----------------------

.. nwdiag::

   nwdiag {
      inet [ shape = cloud ];
      router;
      inet -- router;

      network provider_net {
         address = "203.0.113.0/24"
         router [ address = "203.0.113.1" ];
         controller;
         compute1;
         compute2;
      }

      network control_plane {
         router [ address = "10.0.0.1" ]
         address = "10.0.0.0/24"
         controller [ address = "10.0.0.2" ]
         compute1 [ address = "10.0.0.3" ]
         compute2 [ address = "10.0.0.4" ]
      }
   }


On a compute node, the first interface, eth0, is used for the
OpenStack management traffic (API, message bus, etc.) as well as for
ssh, so that an administrator can access the machine.

::

    stack@compute:~$ ifconfig eth0
    eth0      Link encap:Ethernet  HWaddr bc:16:65:20:af:fc
              inet addr:10.0.0.3

eth1 is manually configured at boot to not have an IP address.
Consult your operating system documentation for the appropriate
technique. For Ubuntu, the contents of `/etc/network/interfaces`
contain:

::

    auto eth1
    iface eth1 inet manual
            up ifconfig $IFACE 0.0.0.0 up
            down ifconfig $IFACE 0.0.0.0 down

The second physical interface, eth1, is added to a bridge (in this
case named br-ex), which is used to forward network traffic from
guest VMs.

::

    stack@compute:~$ sudo ovs-vsctl add-br br-ex
    stack@compute:~$ sudo ovs-vsctl add-port br-ex eth1
    stack@compute:~$ sudo ovs-vsctl show
    9a25c837-32ab-45f6-b9f2-1dd888abcf0f
        Bridge br-ex
            Port br-ex
                Interface br-ex
                    type: internal
            Port phy-br-ex
                Interface phy-br-ex
                    type: patch
                    options: {peer=int-br-ex}
            Port "eth1"
                Interface "eth1"


Service Configuration
---------------------

**Control Node**

In this example, the control node will run the majority of the
OpenStack API and management services (keystone, glance, nova,
neutron).


**Compute Nodes**

In this example, the nodes that will host guest instances will run
the ``neutron-openvswitch-agent`` for network connectivity, as well as
the compute service ``nova-compute``.

355DevStack Configuration
356----------------------
357
Andreas Scheuring28128e22016-04-14 14:23:53 +0200358.. _ovs-provider-network-controller:
359
Sean M. Collins34296012014-10-27 11:57:20 -0400360The following is a snippet of the DevStack configuration on the
361controller node.
362
363::

    HOST_IP=10.0.0.2
    SERVICE_HOST=10.0.0.2
    MYSQL_HOST=10.0.0.2
    RABBIT_HOST=10.0.0.2
    GLANCE_HOSTPORT=10.0.0.2:9292
    PUBLIC_INTERFACE=eth1

    ADMIN_PASSWORD=secret
    MYSQL_PASSWORD=secret
    RABBIT_PASSWORD=secret
    SERVICE_PASSWORD=secret

    ## Neutron options
    Q_USE_SECGROUP=True
    ENABLE_PROJECT_VLANS=True
    PROJECT_VLAN_RANGE=3001:4000
    PHYSICAL_NETWORK=default
    OVS_PHYSICAL_BRIDGE=br-ex

    Q_USE_PROVIDER_NETWORKING=True

    disable_service q-l3

    ## Neutron Networking options used to create Neutron Subnets

    IPV4_ADDRS_SAFE_TO_USE="203.0.113.0/24"
    NETWORK_GATEWAY=203.0.113.1
    PROVIDER_SUBNET_NAME="provider_net"
    PROVIDER_NETWORK_TYPE="vlan"
    SEGMENTATION_ID=2010
    USE_SUBNETPOOL=False

In this configuration we are defining ``IPV4_ADDRS_SAFE_TO_USE`` to be
a publicly routed IPv4 subnet. In this specific instance we are using
the special TEST-NET-3 subnet defined in `RFC 5737 <http://tools.ietf.org/html/rfc5737>`_,
which is used for documentation. In your DevStack setup,
``IPV4_ADDRS_SAFE_TO_USE`` would be a public IP address range that you
or your organization has allocated to you, so that you could access
your instances from the public internet.

The following is the DevStack configuration on
compute node 1.

::

    HOST_IP=10.0.0.3
    SERVICE_HOST=10.0.0.2
    MYSQL_HOST=10.0.0.2
    RABBIT_HOST=10.0.0.2
    GLANCE_HOSTPORT=10.0.0.2:9292
    ADMIN_PASSWORD=secret
    MYSQL_PASSWORD=secret
    RABBIT_PASSWORD=secret
    SERVICE_PASSWORD=secret

    # Services that a compute node runs
    ENABLED_SERVICES=n-cpu,rabbit,q-agt

    ## Open vSwitch provider networking options
    PHYSICAL_NETWORK=default
    OVS_PHYSICAL_BRIDGE=br-ex
    PUBLIC_INTERFACE=eth1
    Q_USE_PROVIDER_NETWORKING=True

Compute node 2's configuration will be exactly the same, except
``HOST_IP`` will be ``10.0.0.4``.

When DevStack is configured to use provider networking (by setting
``Q_USE_PROVIDER_NETWORKING`` to True), DevStack will automatically
add the network interface defined in ``PUBLIC_INTERFACE`` to the
``OVS_PHYSICAL_BRIDGE``.

For example, with the above configuration, a bridge named ``br-ex`` is
created and managed by Open vSwitch, and the second interface on the
compute node, ``eth1``, is attached to the bridge to forward traffic
sent by guest VMs.
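
You can verify this on a compute node once stacking has finished; the
sketch below assumes the configuration above, and the exact set of
ports may differ slightly on your system.

::

    stack@compute:~$ sudo ovs-vsctl list-ports br-ex
    eth1
    phy-br-ex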

Miscellaneous Tips
==================

Non-Standard MTU on the Physical Network
----------------------------------------

Neutron by default uses an MTU of 1500 bytes, which is
the standard MTU for Ethernet.

A different MTU can be specified by adding the following to
the Neutron section of `local.conf`. For example,
if you have network equipment that supports jumbo frames, you could
set the MTU to 9000 bytes by adding the following:

::

    [[post-config|/$Q_PLUGIN_CONF_FILE]]
    global_physnet_mtu = 9000

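
After stacking, you can read back the MTU that Neutron assigned to a
network with the ``openstack`` client. This is only a sketch: the
network name ``private`` is the DevStack default, and for tunnelled
project networks the reported value will typically be the physical MTU
minus the encapsulation overhead, so somewhat below 9000.

::

    stack@devstack-1:~/devstack$ openstack network show private -c mtu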

Disabling Next Generation Firewall Tools
----------------------------------------

DevStack does not properly operate with modern firewall tools. Specifically
it will appear as if the guest VM can access the external network via ICMP,
but UDP and TCP packets will not be delivered to the guest VM. The root cause
of the issue is that both ufw (Uncomplicated Firewall) and firewalld (Fedora's
firewall manager) apply firewall rules to all interfaces in the system, rather
than per device. One solution to this problem is to revert to iptables
functionality.

To get a functional firewall configuration for Fedora, do the following:

::

    sudo service iptables save
    sudo systemctl disable firewalld
    sudo systemctl enable iptables
    sudo systemctl stop firewalld
    sudo systemctl start iptables


To get a functional firewall configuration for distributions containing ufw,
disable ufw. Note ufw is generally not enabled by default in Ubuntu. To
disable ufw if it was enabled, do the following:

::

    sudo service iptables save
    sudo ufw disable

Configuring Extension Drivers for the ML2 Plugin
------------------------------------------------

Extension drivers for the ML2 plugin are set with the variable
``Q_ML2_PLUGIN_EXT_DRIVERS``, which includes the 'port_security'
extension by default. If you want to remove all the extension drivers
(even 'port_security'), set ``Q_ML2_PLUGIN_EXT_DRIVERS`` to blank.
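
For example, to load an additional extension driver alongside the
default, list both in `local.conf`. The ``dns`` driver named below is
only illustrative; check the ML2 documentation for the drivers
available in your release.

::

    [[local|localrc]]
    Q_ML2_PLUGIN_EXT_DRIVERS=port_security,dns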


Using Linux Bridge instead of Open vSwitch
------------------------------------------

The configuration for using the Linux Bridge ML2 driver is fairly
straightforward. The Linux Bridge configuration for DevStack is similar
to the :ref:`Open vSwitch based single interface <single-interface-ovs>`
setup, with small modifications for the interface mappings.


::

    [[local|localrc]]
    HOST_IP=172.18.161.6
    SERVICE_HOST=172.18.161.6
    MYSQL_HOST=172.18.161.6
    RABBIT_HOST=172.18.161.6
    GLANCE_HOSTPORT=172.18.161.6:9292
    ADMIN_PASSWORD=secret
    DATABASE_PASSWORD=secret
    RABBIT_PASSWORD=secret
    SERVICE_PASSWORD=secret

    ## Neutron options
    Q_USE_SECGROUP=True
    FLOATING_RANGE="172.18.161.0/24"
    IPV4_ADDRS_SAFE_TO_USE="10.0.0.0/24"
    Q_FLOATING_ALLOCATION_POOL=start=172.18.161.250,end=172.18.161.254
    PUBLIC_NETWORK_GATEWAY="172.18.161.1"
    PUBLIC_INTERFACE=eth0

    Q_USE_PROVIDERNET_FOR_PUBLIC=True

    # Linuxbridge Settings
    Q_AGENT=linuxbridge
    LB_PHYSICAL_INTERFACE=eth0
    PUBLIC_PHYSICAL_NETWORK=default
    LB_INTERFACE_MAPPINGS=default:eth0
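
After stacking with this configuration, the Linux Bridge agent creates
one Linux bridge per Neutron network, named after the beginning of the
network's UUID. The following is only a rough sketch of what to
expect, assuming the ``bridge-utils`` package is installed; the bridge
name, MAC address, and member interfaces shown are placeholders and
depend on the networks you create.

::

    stack@devstack-1:~/devstack$ brctl show
    bridge name        bridge id           STP enabled   interfaces
    brq<net-uuid>      8000.xxxxxxxxxxxx   no            eth0
                                                         tap<port-uuid>
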

Using MacVTap instead of Open vSwitch
------------------------------------------

Security groups are not supported by the MacVTap agent. Because of
this, DevStack configures the NoopFirewall driver on the compute node.

The MacVTap agent does not support the l3, dhcp and metadata agents.
Because of this, you can choose between the following deployment
scenarios:

Single node with provider networks using config drive and external l3, dhcp
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This scenario applies if l3 and dhcp services are provided externally,
or if you do not require them.


::

    [[local|localrc]]
    HOST_IP=10.0.0.2
    SERVICE_HOST=10.0.0.2
    MYSQL_HOST=10.0.0.2
    RABBIT_HOST=10.0.0.2
    ADMIN_PASSWORD=secret
    MYSQL_PASSWORD=secret
    RABBIT_PASSWORD=secret
    SERVICE_PASSWORD=secret

    Q_ML2_PLUGIN_MECHANISM_DRIVERS=macvtap
    Q_USE_PROVIDER_NETWORKING=True

    enable_plugin neutron git://git.openstack.org/openstack/neutron

    ## MacVTap agent options
    Q_AGENT=macvtap
    PHYSICAL_NETWORK=default

    IPV4_ADDRS_SAFE_TO_USE="203.0.113.0/24"
    NETWORK_GATEWAY=203.0.113.1
    PROVIDER_SUBNET_NAME="provider_net"
    PROVIDER_NETWORK_TYPE="vlan"
    SEGMENTATION_ID=2010
    USE_SUBNETPOOL=False

    [[post-config|/$Q_PLUGIN_CONF_FILE]]
    [macvtap]
    physical_interface_mappings = $PHYSICAL_NETWORK:eth1

    [[post-config|$NOVA_CONF]]
    force_config_drive = True


Multi node with MacVTap compute node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This scenario applies if you require OpenStack provided l3, dhcp or
metadata services. Those are hosted on a separate controller and
network node, running some other l2 agent technology (in this example
Open vSwitch). This node needs to be configured for VLAN tenant
networks.

For OVS, a configuration similar to the one described in the
:ref:`OVS Provider Network <ovs-provider-network-controller>` section
can be used. Just add the following line to that local.conf, which
also loads the MacVTap mechanism driver:

::

    [[local|localrc]]
    ...
    Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,linuxbridge,macvtap
    ...

For the MacVTap compute node, use this local.conf:

::

    HOST_IP=10.0.0.3
    SERVICE_HOST=10.0.0.2
    MYSQL_HOST=10.0.0.2
    RABBIT_HOST=10.0.0.2
    ADMIN_PASSWORD=secret
    MYSQL_PASSWORD=secret
    RABBIT_PASSWORD=secret
    SERVICE_PASSWORD=secret

    # Services that a compute node runs
    disable_all_services
    enable_plugin neutron git://git.openstack.org/openstack/neutron
    ENABLED_SERVICES+=n-cpu,q-agt

    ## MacVTap agent options
    Q_AGENT=macvtap
    PHYSICAL_NETWORK=default

    [[post-config|/$Q_PLUGIN_CONF_FILE]]
    [macvtap]
    physical_interface_mappings = $PHYSICAL_NETWORK:eth1