The networking-ovn repository includes integration with DevStack that enables creation of a simple Open Virtual Network (OVN) development and test environment. This document discusses what is required for manual installation, or for integration into a production OpenStack deployment tool, for conventional architectures that include the following types of nodes:
Controller - Runs OpenStack control plane services such as REST APIs and databases.
Network - Runs the layer-2, layer-3 (routing), DHCP, and metadata agents for the Networking service. Some agents are optional. Usually provides connectivity between provider (public) and project (private) networks via NAT and floating IP addresses.
Note
Some tools deploy these services on controller nodes.
Compute - Runs the hypervisor and layer-2 agent for the Networking service.
Open vSwitch (OVS) includes OVN beginning with version 2.5 and considers it experimental. The Networking service integration for OVN uses an independent package, typically networking-ovn.
Building OVS from source automatically installs OVN. For deployment tools using distribution packages, the openvswitch-ovn package for RHEL/CentOS and compatible distributions automatically installs openvswitch as a dependency. Ubuntu/Debian includes ovn-central, ovn-host, ovn-docker, and ovn-common packages that pull in the appropriate Open vSwitch dependencies as needed.
A python-networking-ovn RPM may be obtained for Fedora or CentOS from the RDO project. A package based on the master branch of networking-ovn can be found at https://trunk.rdoproject.org/.
Fedora and CentOS RPM builds of OVS and OVN from the master branch of ovs can be found in this COPR repository: https://copr.fedorainfracloud.org/coprs/leifmadsen/ovs-master/.
Each controller node runs the OVS service (including dependent services such as ovsdb-server) and the ovn-northd service. Only a single instance of the ovsdb-server and ovn-northd services can operate in a deployment. However, deployment tools can implement active/passive high availability using a management tool that monitors service health and automatically starts these services on another node after the primary node fails. See the FAQ for more information.
Install the openvswitch-ovn and networking-ovn packages.
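For example, on RHEL/CentOS with the RDO repositories enabled, this might look like the following (package names vary by distribution and release):
# yum install openvswitch-ovn python-networking-ovn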
Start the OVS service. The central OVS service starts the ovsdb-server
service that manages OVN databases.
Using the systemd unit:
# systemctl start openvswitch
Using the ovs-ctl
script:
# /usr/share/openvswitch/scripts/ovs-ctl start --system-id="random"
Configure the ovsdb-server
component. By default, the ovsdb-server
service only permits local access to databases via Unix socket. However,
OVN services on compute nodes require access to these databases.
Permit remote database access.
# ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6640:IP_ADDRESS
Replace IP_ADDRESS
with the IP address of the management network
interface on the controller node.
Note
Permit remote access to TCP port 6640 on any host firewall.
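As an illustration, on hosts that use firewalld the port could be opened, and the configured remotes then verified, with commands along these lines (adapt to your firewall tooling):
# firewall-cmd --permanent --add-port=6640/tcp
# firewall-cmd --reload
# ovs-appctl -t ovsdb-server ovsdb-server/list-remotes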
Start the ovn-northd
service.
Using the systemd unit:
# systemctl start ovn-northd
Using the ovn-ctl
script:
# /usr/share/openvswitch/scripts/ovn-ctl start_northd
Options for start_northd:
# /usr/share/openvswitch/scripts/ovn-ctl start_northd --help
# ...
# DB_NB_SOCK="/usr/local/etc/openvswitch/nb_db.sock"
# DB_NB_PID="/usr/local/etc/openvswitch/ovnnb_db.pid"
# DB_SB_SOCK="/usr/local/etc/openvswitch/sb_db.sock"
# DB_SB_PID="/usr/local/etc/openvswitch/ovnsb_db.pid"
# ...
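To confirm that ovn-northd is running and the OVN databases respond, a quick local check such as the following can be useful; on a fresh deployment both commands should return empty contents:
# ovn-nbctl show
# ovn-sbctl show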
Configure the Networking server component. The Networking service implements OVN as an ML2 mechanism driver. Edit the /etc/neutron/neutron.conf file:
Enable the ML2 core plug-in.
[DEFAULT]
...
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
Enable the OVN layer-3 service.
[DEFAULT]
...
service_plugins = networking_ovn.l3.l3_ovn.OVNL3RouterPlugin
Configure the ML2 plug-in. Edit the
/etc/neutron/plugins/ml2/ml2_conf.ini
file:
Configure the OVN mechanism driver, network type drivers, self-service (tenant) network types, and enable the port security extension.
[ml2]
...
mechanism_drivers = ovn
type_drivers = local,flat,vlan,geneve
tenant_network_types = geneve
extension_drivers = port_security
overlay_ip_version = 4
Note
To enable VLAN self-service networks, add vlan to the tenant_network_types option. The first network type in the list becomes the default self-service network type. To use IPv6 for all overlay (tunnel) network endpoints, set the overlay_ip_version option to 6.
Configure the Geneve ID range and maximum header size. The IP version overhead (20 bytes for IPv4, the default, or 40 bytes for IPv6) is added to the maximum header size based on the ML2 overlay_ip_version option.
[ml2_type_geneve]
...
vni_ranges = 1:65536
max_header_size = 38
Note
The Networking service uses the vni_ranges
option to allocate
network segments. However, OVN ignores the actual values. Thus, the ID
range only determines the quantity of Geneve networks in the
environment. For example, a range of 5001:6000
defines a maximum
of 1000 Geneve networks.
Optionally, enable support for VLAN provider and self-service networks on one or more physical networks. If you specify only the physical network, only administrative (privileged) users can manage VLAN networks. Additionally specifying a VLAN ID range for a physical network enables regular (non-privileged) users to manage VLAN networks. The Networking service allocates the VLAN ID for each self-service network using the VLAN ID range for the physical network.
[ml2_type_vlan]
...
network_vlan_ranges = PHYSICAL_NETWORK:MIN_VLAN_ID:MAX_VLAN_ID
Replace PHYSICAL_NETWORK
with the physical network name and
optionally define the minimum and maximum VLAN IDs. Use a comma
to separate each physical network.
For example, to enable support for administrative VLAN networks
on the physnet1
network and self-service VLAN networks on
the physnet2
network using VLAN IDs 1001 to 2000:
network_vlan_ranges = physnet1,physnet2:1001:2000
Enable security groups.
[securitygroup]
...
enable_security_group = true
Note
The firewall_driver
option under [securitygroup]
is ignored
since the OVN ML2 driver itself handles security groups.
Configure OVN database access and the L3 scheduler.
[ovn]
...
ovn_nb_connection = tcp:IP_ADDRESS:6641
ovn_sb_connection = tcp:IP_ADDRESS:6642
ovn_l3_scheduler = OVN_L3_SCHEDULER
Note
Replace IP_ADDRESS
with the IP address of the controller node that
runs the ovsdb-server
service. Replace OVN_L3_SCHEDULER
with
leastloaded
if you want the scheduler to select a compute node with
the least number of gateway ports or chance
if you want the
scheduler to randomly select a compute node from the available list of
compute nodes.
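As an illustration only, with a controller management address of 192.0.2.10 (a documentation address, not a value from your deployment) and the least-loaded scheduler, the section would read:
[ovn]
...
ovn_nb_connection = tcp:192.0.2.10:6641
ovn_sb_connection = tcp:192.0.2.10:6642
ovn_l3_scheduler = leastloaded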
Start the neutron-server
service.
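On systemd-based distributions this is typically done with the following command, although the unit name can differ between packagers:
# systemctl start neutron-server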
Configure the chassis to host gateway routers for external connectivity.
Set ovn-cms-options to enable-chassis-as-gw in the Open_vSwitch table's external_ids column. If the chassis also has the proper bridge mappings, it will be selected for scheduling gateway routers.
# ovs-vsctl set open . external-ids:ovn-cms-options=enable-chassis-as-gw
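For example, assuming a provider bridge named br-provider mapped to the physnet1 physical network (names chosen for illustration), the bridge mapping can be set and the resulting external_ids reviewed with:
# ovs-vsctl set open . external-ids:ovn-bridge-mappings=physnet1:br-provider
# ovs-vsctl get open . external-ids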
Deployments using OVN native layer-3 and DHCP services do not require conventional network nodes because connectivity to external networks (including VTEP gateways) and routing occurs on compute nodes.
Each compute node runs the OVS and ovn-controller
services. The
ovn-controller
service replaces the conventional OVS layer-2 agent.
Install the openvswitch-ovn and networking-ovn packages.
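On Ubuntu/Debian, the compute-node equivalents are the ovn-host and ovn-common packages; the networking-ovn package name varies by release:
# apt-get install ovn-host ovn-common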
Start the OVS service.
Using the systemd unit:
# systemctl start openvswitch
Using the ovs-ctl
script:
# /usr/share/openvswitch/scripts/ovs-ctl start --system-id="random"
Configure the OVS service.
Use OVS databases on the controller node.
# ovs-vsctl set open . external-ids:ovn-remote=tcp:IP_ADDRESS:6642
Replace IP_ADDRESS
with the IP address of the controller node
that runs the ovsdb-server
service.
Enable one or more overlay network protocols. At a minimum, OVN requires
enabling the geneve
protocol. Deployments using VTEP gateways should
also enable the vxlan
protocol.
# ovs-vsctl set open . external-ids:ovn-encap-type=geneve,vxlan
Note
Deployments without VTEP gateways can safely enable both protocols.
Configure the overlay network local endpoint IP address.
# ovs-vsctl set open . external-ids:ovn-encap-ip=IP_ADDRESS
Replace IP_ADDRESS
with the IP address of the overlay network
interface on the compute node.
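After setting ovn-remote, ovn-encap-type, and ovn-encap-ip, the values recorded in the local Open_vSwitch table can be reviewed to catch typos before starting ovn-controller:
# ovs-vsctl get open . external-ids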
Start the ovn-controller
service.
Using the systemd unit:
# systemctl start ovn-controller
Using the ovn-ctl
script:
# /usr/share/openvswitch/scripts/ovn-ctl start_controller
Each compute node should contain an ovn-controller instance:
# ovn-sbctl show
<output>
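For reference only, each registered compute node appears as a chassis entry; with hypothetical hostnames and addresses, the output resembles:
Chassis "6a9d0619-8e97-4b97-b6a6-0c4a23b0e7d5"
    hostname: "compute1"
    Encap geneve
        ip: "203.0.113.21"
Chassis "0d2f5a14-42c8-4f5e-9b0c-2a7f3d6e8b91"
    hostname: "compute2"
    Encap geneve
        ip: "203.0.113.22"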