Open Virtual Network (OVN) is an awesome open source project that adds virtual network abstractions to Open vSwitch, such as L2 and L3 overlays, and manages connectivity to physical networks.

OVN has been integrated with OpenStack through networking-ovn, which implements a Neutron ML2 driver to realize network resources such as networks, subnets, ports or routers. However, if you don't want to go through the process of deploying OpenStack, this post provides a quick tutorial to get you started with OVN. (If you feel like it, you can use Packstack to deploy OpenStack RDO, which by default will use OVN as its networking backend.)
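
For reference, such an all-in-one RDO deployment boils down to something like the following on a fresh CentOS box (the centos-release-openstack-train package name is just an example here; pick the repo matching the OpenStack release you want):

sudo yum install -y centos-release-openstack-train
sudo yum install -y openstack-packstack
sudo packstack --allinone

Back to the OpenStack-less setup: the script below compiles OVS/OVN from source, starts all the required services and registers the node in OVSDB.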

#!/bin/bash

# Disable SELinux
sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config

# Install pre-requisites to compile Open vSwitch
sudo yum group install "Development Tools" -y
sudo yum install python-devel python-six -y

GIT_REPO=${GIT_REPO:-https://github.com/openvswitch/ovs}
GIT_BRANCH=${GIT_BRANCH:-master}

# Clone the ovs repo
git clone "$GIT_REPO"
cd ovs

if [[ -n "$GIT_BRANCH" ]]; then
    git checkout "$GIT_BRANCH"
fi

# Compile the sources and install OVS
./boot.sh
CFLAGS="-O0 -g" ./configure --prefix=/usr
make -j5 V=0
sudo make install

# Start both OVS and OVN services
sudo /usr/share/openvswitch/scripts/ovs-ctl start --system-id="ovn"
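# Note: the --db-*-create-insecure-remote flags below also expose the NB/SB
# databases over TCP (ptcp:6641/6642), which becomes useful once more nodes
# join the deployment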
sudo /usr/share/openvswitch/scripts/ovn-ctl start_ovsdb --db-nb-create-insecure-remote=yes --db-sb-create-insecure-remote=yes
sudo /usr/share/openvswitch/scripts/ovn-ctl start_northd
sudo /usr/share/openvswitch/scripts/ovn-ctl start_controller

# Configure OVN in OVSDB: register this node as a chassis by pointing it at
# the integration bridge, the southbound DB, and the tunnel endpoint/type
sudo ovs-vsctl set open . external-ids:ovn-bridge=br-int
sudo ovs-vsctl set open . external-ids:ovn-remote=unix:/usr/var/run/openvswitch/ovnsb_db.sock
sudo ovs-vsctl set open . external-ids:ovn-encap-ip=127.0.0.1
sudo ovs-vsctl set open . external-ids:ovn-encap-type=geneve

After this, we have a functional OVN system that we can interact with using the ovn-nbctl tool.
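
Before defining any logical resources, a quick sanity check doesn't hurt; ovn-controller should have created the integration bridge (br-int) for us, and the logical topology should still be empty:

sudo ovs-vsctl show
sudo ovn-nbctl show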

As an example, let's create a very simple topology consisting of one Logical Switch with two Logical Ports attached to it:

# ovn-nbctl ls-add network1
# ovn-nbctl lsp-add network1 vm1
# ovn-nbctl lsp-add network1 vm2
# ovn-nbctl lsp-set-addresses vm1 "40:44:00:00:00:01 192.168.50.21"
# ovn-nbctl lsp-set-addresses vm2 "40:44:00:00:00:02 192.168.50.22"
# ovn-nbctl show
switch 6f2921aa-e679-462a-ae2b-b581cd958b82 (network1)
    port vm2
        addresses: ["40:44:00:00:00:02 192.168.50.22"]
    port vm1
        addresses: ["40:44:00:00:00:01 192.168.50.21"]

What now? Can vm1 and vm2 communicate with each other somehow?
At this point, we have only defined our topology from a logical point of view, but those ports are not bound to any hypervisor (a Chassis, in OVN terminology).

# ovn-sbctl show
Chassis ovn
    hostname: ovnhost
    Encap geneve
        ip: "127.0.0.1"
        options: {csum="true"}

# ovn-nbctl lsp-get-up vm1
down
# ovn-nbctl lsp-get-up vm2
down

For simplicity, let's bind both ports "vm1" and "vm2" to our chassis, simulating that we're booting two virtual machines. If we used libvirt or VirtualBox to spawn the VMs, their OVS integration would set external_ids:iface-id on the corresponding interfaces of the integration bridge to the VIF ID. Check this out for more information.


[root@ovnhost vagrant]# ovs-vsctl add-port br-int vm1 -- set Interface vm1 type=internal -- set Interface vm1 external_ids:iface-id=vm1
[root@ovnhost vagrant]# ovs-vsctl add-port br-int vm2 -- set Interface vm2 type=internal -- set Interface vm2 external_ids:iface-id=vm2
[root@ovnhost vagrant]# ovn-sbctl show
Chassis ovn
    hostname: ovnhost
    Encap geneve
        ip: "127.0.0.1"
        options: {csum="true"}
    Port_Binding "vm2"
    Port_Binding "vm1"
[root@ovnhost vagrant]# ovn-nbctl lsp-get-up vm1
up
[root@ovnhost vagrant]# ovn-nbctl lsp-get-up vm2
up
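
You can also verify the binding from the OVS side; ovn-controller matched each interface's external_ids:iface-id against the Logical Port names, which is what flipped the ports to "up". Given the commands above, this should print iface-id=vm1:

ovs-vsctl --columns=name,external_ids list Interface vm1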

At this point, we have both vm1 and vm2 bound to our chassis. This means that OVN has installed the necessary flows in the OVS integration bridge for them. In order to test the connectivity between them, let's create a namespace for each port and configure a network interface with the assigned IP and MAC addresses:


ip netns add vm1
ip link set vm1 netns vm1
ip netns exec vm1 ip link set vm1 address 40:44:00:00:00:01
ip netns exec vm1 ip addr add 192.168.50.21/24 dev vm1
ip netns exec vm1 ip link set vm1 up

ip netns add vm2
ip link set vm2 netns vm2
ip netns exec vm2 ip link set vm2 address 40:44:00:00:00:02
ip netns exec vm2 ip addr add 192.168.50.22/24 dev vm2
ip netns exec vm2 ip link set vm2 up

After this, vm1 and vm2 should be able to communicate with each other through the OVN Logical Switch:


[root@ovnhost vagrant]# ip netns list
vm2 (id: 1)
vm1 (id: 0)

[root@ovnhost vagrant]# ip netns exec vm1 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
16: vm1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 40:44:00:00:00:01 brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.21/24 scope global vm1
       valid_lft forever preferred_lft forever
    inet6 fe80::4244:ff:fe00:1/64 scope link
       valid_lft forever preferred_lft forever

[root@ovnhost vagrant]# ip netns exec vm2 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
17: vm2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 40:44:00:00:00:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.22/24 scope global vm2
       valid_lft forever preferred_lft forever
    inet6 fe80::4244:ff:fe00:2/64 scope link
       valid_lft forever preferred_lft forever

[root@ovnhost vagrant]# ip netns exec vm1 ping -c2 192.168.50.22
PING 192.168.50.22 (192.168.50.22) 56(84) bytes of data.
64 bytes from 192.168.50.22: icmp_seq=1 ttl=64 time=0.326 ms
64 bytes from 192.168.50.22: icmp_seq=2 ttl=64 time=0.022 ms

--- 192.168.50.22 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.022/0.174/0.326/0.152 ms

[root@ovnhost vagrant]# ip netns exec vm2 ping -c2 192.168.50.21
PING 192.168.50.21 (192.168.50.21) 56(84) bytes of data.
64 bytes from 192.168.50.21: icmp_seq=1 ttl=64 time=0.025 ms
64 bytes from 192.168.50.21: icmp_seq=2 ttl=64 time=0.021 ms

--- 192.168.50.21 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.021/0.023/0.025/0.002 ms
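
Under the hood, ovn-northd compiled our logical topology into Logical Flows in the southbound database, and ovn-controller translated those into OpenFlow flows on br-int. If you are curious, you can inspect both; note that OVN configures br-int for OpenFlow13, hence the -O flag, and that the exact flows will vary across versions:

ovn-sbctl lflow-list network1
sudo ovs-ofctl -O OpenFlow13 dump-flows br-int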

In essence, what this post describes is equivalent to the OVN sandbox that comes out of the box with the OVS source code, as presented by Russell Bryant. However, the idea behind this tutorial is to serve as a base for setting up a simple OVN system as a playground or debugging environment without the burden of deploying OpenStack or any other CMS. In coming articles, we'll extend the deployment to more than one node and do more advanced stuff.
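
Finally, if at some point you want to reset the playground and start over, a teardown along these lines should do:

ovs-vsctl del-port br-int vm1
ovs-vsctl del-port br-int vm2
ip netns del vm1
ip netns del vm2
ovn-nbctl ls-del network1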