Open Virtual Network (OVN) is an awesome open source project that adds virtual network abstractions to Open vSwitch, such as L2 and L3 overlays, and manages connectivity to physical networks.

OVN has been integrated with OpenStack through networking-ovn, which implements a Neutron ML2 driver to realize network resources such as networks, subnets, ports or routers. However, if you don’t want to go through the process of deploying OpenStack, this post provides a quick tutorial to get you started with OVN. (If you feel like it, you can use Packstack to deploy OpenStack RDO, which by default will use OVN as the networking backend.) The script below compiles Open vSwitch (which at this point ships OVN in its tree) from source on an RPM-based system, starts the OVS and OVN services, and registers the node as an OVN chassis:

#!/bin/bash

# Disable SELinux
sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config

# Install prerequisites to compile Open vSwitch
sudo yum group install "Development Tools" -y
sudo yum install python-devel python-six -y

GIT_REPO=${GIT_REPO:-https://github.com/openvswitch/ovs}
GIT_BRANCH=${GIT_BRANCH:-master}

# Clone the ovs repo
git clone $GIT_REPO
cd ovs

if [[ "z$GIT_BRANCH" != "z" ]]; then
    git checkout $GIT_BRANCH
fi

# Compile the sources and install OVS
./boot.sh
CFLAGS="-O0 -g" ./configure --prefix=/usr
make -j5 V=0
sudo make install

# Start both OVS and OVN services
sudo /usr/share/openvswitch/scripts/ovs-ctl start --system-id="ovn"
sudo /usr/share/openvswitch/scripts/ovn-ctl start_ovsdb --db-nb-create-insecure-remote=yes --db-sb-create-insecure-remote=yes
sudo /usr/share/openvswitch/scripts/ovn-ctl start_northd
sudo /usr/share/openvswitch/scripts/ovn-ctl start_controller

# Configure OVN in OVSDB
sudo ovs-vsctl set open . external-ids:ovn-bridge=br-int
sudo ovs-vsctl set open . external-ids:ovn-remote=unix:/usr/var/run/openvswitch/ovnsb_db.sock
sudo ovs-vsctl set open . external-ids:ovn-encap-ip=127.0.0.1
sudo ovs-vsctl set open . external-ids:ovn-encap-type=geneve

After this, we have a functional OVN system that we can interact with using the ovn-nbctl tool.
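
If you want a quick sanity check before going any further, a few read-only commands should already answer without errors. Note that br-int is created by ovn-controller once ovn-bridge is set, so it may take a moment to show up:

# The integration bridge br-int is created by ovn-controller
sudo ovs-vsctl show

# Both OVN databases are still empty, but should be reachable
sudo ovn-nbctl show
sudo ovn-sbctl show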

As an example, let’s create a very simple topology consisting of one Logical Switch and attach two Logical Ports to it:

# ovn-nbctl ls-add network1
# ovn-nbctl lsp-add network1 vm1
# ovn-nbctl lsp-add network1 vm2
# ovn-nbctl lsp-set-addresses vm1 "40:44:00:00:00:01 192.168.50.21"
# ovn-nbctl lsp-set-addresses vm2 "40:44:00:00:00:02 192.168.50.22"
# ovn-nbctl show
switch 6f2921aa-e679-462a-ae2b-b581cd958b82 (network1)
    port vm2
        addresses: ["40:44:00:00:00:02 192.168.50.22"]
    port vm1
        addresses: ["40:44:00:00:00:01 192.168.50.21"]
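
Under the hood, these commands simply manipulate records in the OVN northbound database; if you prefer the raw view, ovn-nbctl also exposes generic database commands (output omitted here):

# Show the raw northbound record behind the vm1 logical port
sudo ovn-nbctl list Logical_Switch_Port vm1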

What now? Can vm1 and vm2 talk to each other somehow?
Not yet: so far we have only defined our topology from a logical point of view, but those ports are not bound to any hypervisor (Chassis in OVN terminology).

# ovn-sbctl show
Chassis ovn
    hostname: ovnhost
    Encap geneve
        ip: "127.0.0.1"
        options: {csum="true"}

# ovn-nbctl lsp-get-up vm1
down
# ovn-nbctl lsp-get-up vm2
down
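
Another way to see this is to query the southbound database directly: ovn-northd has already created a Port_Binding row for each logical port, but the chassis column stays empty until an ovn-controller claims the port:

# List the Port_Binding table; the "chassis" column for vm1 and vm2 is still empty
sudo ovn-sbctl list Port_Binding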

For simplicity, let’s bind both ports “vm1” and “vm2” to our chassis, simulating that we’re booting two virtual machines. If we were using libvirt or VirtualBox to spawn the VMs, their OVS integration would set external_ids:iface-id on the VIF interface in the integration bridge to the name of the logical port. Check the OVN integration documentation for more information.

[root@ovnhost vagrant]# ovs-vsctl add-port br-int vm1 -- set Interface vm1 type=internal -- set Interface vm1 external_ids:iface-id=vm1
[root@ovnhost vagrant]# ovs-vsctl add-port br-int vm2 -- set Interface vm2 type=internal -- set Interface vm2 external_ids:iface-id=vm2
[root@ovnhost vagrant]# ovn-sbctl show
Chassis ovn
    hostname: ovnhost
    Encap geneve
        ip: "127.0.0.1"
        options: {csum="true"}
    Port_Binding "vm2"
    Port_Binding "vm1"
[root@ovnhost vagrant]# ovn-nbctl lsp-get-up vm1
up
[root@ovnhost vagrant]# ovn-nbctl lsp-get-up vm2
up

At this point we have both vm1 and vm2 bound to our chassis, which means that OVN has installed the necessary flows in the OVS integration bridge for them. To test the connectivity, let’s create a network namespace for each port and configure an interface with the assigned IP and MAC addresses:

ip netns add vm1
ip link set vm1 netns vm1
ip netns exec vm1 ip link set vm1 address 40:44:00:00:00:01
ip netns exec vm1 ip addr add 192.168.50.21/24 dev vm1
ip netns exec vm1 ip link set vm1 up

ip netns add vm2
ip link set vm2 netns vm2
ip netns exec vm2 ip link set vm2 address 40:44:00:00:00:02
ip netns exec vm2 ip addr add 192.168.50.22/24 dev vm2
ip netns exec vm2 ip link set vm2 up
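
If you’re curious about what “installing the necessary flows” actually means, you can look both at the logical flows computed by ovn-northd and at the OpenFlow rules that ovn-controller rendered into br-int. The exact contents depend on the OVS/OVN version, so treat this just as a way to poke around:

# Logical flows of the logical pipeline (southbound database)
sudo ovn-sbctl lflow-list

# OpenFlow rules programmed into the integration bridge (br-int speaks OpenFlow 1.3)
sudo ovs-ofctl -O OpenFlow13 dump-flows br-int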

With the namespaces in place, vm1 and vm2 should be able to communicate via the OVN Logical Switch:

[root@ovnhost vagrant]# ip netns list
vm2 (id: 1)
vm1 (id: 0)

[root@ovnhost vagrant]# ip netns exec vm1 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
16: vm1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 40:44:00:00:00:01 brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.21/24 scope global vm1
       valid_lft forever preferred_lft forever
    inet6 fe80::4244:ff:fe00:1/64 scope link
       valid_lft forever preferred_lft forever

[root@ovnhost vagrant]# ip netns exec vm2 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
17: vm2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 40:44:00:00:00:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.22/24 scope global vm2
       valid_lft forever preferred_lft forever
    inet6 fe80::4244:ff:fe00:2/64 scope link
       valid_lft forever preferred_lft forever

[root@ovnhost vagrant]# ip netns exec vm1 ping -c2 192.168.50.22
PING 192.168.50.22 (192.168.50.22) 56(84) bytes of data.
64 bytes from 192.168.50.22: icmp_seq=1 ttl=64 time=0.326 ms
64 bytes from 192.168.50.22: icmp_seq=2 ttl=64 time=0.022 ms

--- 192.168.50.22 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.022/0.174/0.326/0.152 ms

[root@ovnhost vagrant]# ip netns exec vm2 ping -c2 192.168.50.21
PING 192.168.50.21 (192.168.50.21) 56(84) bytes of data.
64 bytes from 192.168.50.21: icmp_seq=1 ttl=64 time=0.025 ms
64 bytes from 192.168.50.21: icmp_seq=2 ttl=64 time=0.021 ms

--- 192.168.50.21 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.021/0.023/0.025/0.002 ms
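
Besides pinging, OVN ships an ovn-trace utility that simulates how a packet traverses the logical pipeline, which comes in handy when things don’t work. A minimal sketch for an L2 packet from vm1 to vm2 (the output format differs between versions, and you may need --db to point it at the southbound socket configured above):

sudo ovn-trace --minimal network1 \
    'inport == "vm1" && eth.src == 40:44:00:00:00:01 && eth.dst == 40:44:00:00:00:02'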

In essence, what this post describes is the OVN sandbox that comes out of the box with the OVS source code, as presented by Russell Bryant. However, the idea behind this tutorial is to serve as a base to set up a simple OVN system as a playground or debugging environment without the burden of deploying OpenStack or any other CMS. In coming articles, we’ll extend the deployment to more than one node and do more advanced stuff.