Open Virtual Network (OVN) is an awesome open source project that adds virtual network abstractions to Open vSwitch, such as L2 and L3 overlays, as well as managing connectivity to physical networks.
OVN has been integrated with OpenStack through networking-ovn, which implements a Neutron ML2 driver to realize network resources such as networks, subnets, ports or routers. However, if you don't want to go through the process of deploying OpenStack, this post provides a quick tutorial to get you started with OVN. (If you feel like it, you can use Packstack to deploy OpenStack RDO, which by default will use OVN as the networking backend.)
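For reference, a Packstack all-in-one deployment is roughly as follows, assuming a CentOS host with the RDO repositories already enabled; the rest of this post does not depend on it at all:

sudo yum install -y openstack-packstack
sudo packstack --allinone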
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config

sudo yum group install "Development Tools" -y
sudo yum install python-devel python-six -y

GIT_BRANCH=${GIT_BRANCH:-master}

# Fetch the OVS source (OVN lives in the OVS tree) and check out the requested branch
git clone https://github.com/openvswitch/ovs.git
cd ovs
if [[ "z$GIT_BRANCH" != "z" ]]; then
    git checkout $GIT_BRANCH
fi

# Build and install OVS/OVN under /usr
./boot.sh
CFLAGS="-O0 -g" ./configure --prefix=/usr
make
sudo make install
# Start the OVS/OVN services
sudo /usr/share/openvswitch/scripts/ovs-ctl start --system-id="ovn"
sudo /usr/share/openvswitch/scripts/ovn-ctl start_ovsdb --db-nb-create-insecure-remote=yes --db-sb-create-insecure-remote=yes
sudo /usr/share/openvswitch/scripts/ovn-ctl start_northd
sudo /usr/share/openvswitch/scripts/ovn-ctl start_controller

# Configure this host as an OVN chassis
sudo ovs-vsctl set open . external-ids:ovn-bridge=br-int
sudo ovs-vsctl set open . external-ids:ovn-remote=unix:/usr/var/run/openvswitch/ovnsb_db.sock
sudo ovs-vsctl set open . external-ids:ovn-encap-ip=127.0.0.1
sudo ovs-vsctl set open . external-ids:ovn-encap-type=geneve
After this, we have a functional OVN system that we can interact with using the ovn-nbctl tool.
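Before going further, a couple of optional sanity checks confirm that the external-ids were applied and that ovn-controller registered the host as a chassis; the exact output depends on your machine:

sudo ovs-vsctl get open . external_ids
sudo ovn-sbctl show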
As an example, let's create a very simple topology consisting of one Logical Switch and attach two Logical Ports to it:
# ovn-nbctl ls-add network1
# ovn-nbctl lsp-add network1 vm1
# ovn-nbctl lsp-add network1 vm2
# ovn-nbctl lsp-set-addresses vm1 "40:44:00:00:00:01 192.168.50.21"
# ovn-nbctl lsp-set-addresses vm2 "40:44:00:00:00:02 192.168.50.22"
# ovn-nbctl show
switch 6f2921aa-e679-462a-ae2b-b581cd958b82 (network1)
    port vm2
        addresses: ["40:44:00:00:00:02 192.168.50.22"]
    port vm1
        addresses: ["40:44:00:00:00:01 192.168.50.21"]
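If you want to double-check what we just configured, ovn-nbctl can also query each port individually; this is just a convenience and is not required for the rest of the tutorial:

# ovn-nbctl lsp-get-addresses vm1
# ovn-nbctl lsp-get-addresses vm2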
What now? Can vm1 and vm2 communicate with each other somehow?
At this point, we have only defined our topology from a logical point of view, but those ports are not yet bound to any hypervisor (Chassis in OVN terminology):
# ovn-nbctl lsp-get-up vm1
down
# ovn-nbctl lsp-get-up vm2
down
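The same can be seen from the southbound database: the Port_Binding records for vm1 and vm2 exist but have no chassis assigned yet. The output is omitted here since it depends on your environment:

# ovn-sbctl list Port_Binding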
For simplicity, let's bind both ports "vm1" and "vm2" to our chassis, simulating that we're booting two virtual machines. If we were using libvirt or VirtualBox to spawn the VMs, their integration with OVS would add the VIF ID to the external_ids:iface-id field of the corresponding interface on the OVN bridge. Check this out for more information.
[root@ovnhost vagrant]# ovs-vsctl add-port br-int vm1 -- set Interface vm1 type=internal -- set Interface vm1 external_ids:iface-id=vm1
[root@ovnhost vagrant]# ovs-vsctl add-port br-int vm2 -- set Interface vm2 type=internal -- set Interface vm2 external_ids:iface-id=vm2
[root@ovnhost vagrant]# ovn-sbctl show
[root@ovnhost vagrant]# ovn-nbctl lsp-get-up vm1
up
[root@ovnhost vagrant]# ovn-nbctl lsp-get-up vm2
up
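Optionally, we can peek at what ovn-controller programmed now that the ports are claimed. The logical flows and the resulting OpenFlow rules on br-int vary with the OVN version, so treat these as inspection commands rather than expected output:

[root@ovnhost vagrant]# ovn-sbctl lflow-list
[root@ovnhost vagrant]# ovs-ofctl -O OpenFlow13 dump-flows br-int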
At this point, we have both vm1 and vm2 bound to our chassis, and OVN has installed the corresponding flows in the OVS bridge. To test connectivity between the two ports, let's create a namespace for each one and configure its network interface with the assigned IP and MAC addresses:
ip netns add vm1
ip link set vm1 netns vm1
ip netns exec vm1 ip link set vm1 address 40:44:00:00:00:01
ip netns exec vm1 ip addr add 192.168.50.21/24 dev vm1
ip netns exec vm1 ip link set vm1 up

ip netns add vm2
ip link set vm2 netns vm2
ip netns exec vm2 ip link set vm2 address 40:44:00:00:00:02
ip netns exec vm2 ip addr add 192.168.50.22/24 dev vm2
ip netns exec vm2 ip link set vm2 up
After this, vm1 and vm2 should be able to communicate with each other via the OVN Logical Switch:
[root@ovnhost vagrant]# ip netns list
vm2
vm1
[root@ovnhost vagrant]# ip netns exec vm1 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
16: vm1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 40:44:00:00:00:01 brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.21/24 scope global vm1
       valid_lft forever preferred_lft forever
    inet6 fe80::4244:ff:fe00:1/64 scope link
       valid_lft forever preferred_lft forever

[root@ovnhost vagrant]# ip netns exec vm2 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
17: vm2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 40:44:00:00:00:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.22/24 scope global vm2
       valid_lft forever preferred_lft forever
    inet6 fe80::4244:ff:fe00:2/64 scope link
       valid_lft forever preferred_lft forever

[root@ovnhost vagrant]# ip netns exec vm1 ping -c2 192.168.50.22
PING 192.168.50.22 (192.168.50.22) 56(84) bytes of data.
64 bytes from 192.168.50.22: icmp_seq=1 ttl=64 time=0.326 ms
64 bytes from 192.168.50.22: icmp_seq=2 ttl=64 time=0.022 ms

--- 192.168.50.22 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.022/0.174/0.326/0.152 ms

[root@ovnhost vagrant]# ip netns exec vm2 ping -c2 192.168.50.21
PING 192.168.50.21 (192.168.50.21) 56(84) bytes of data.
64 bytes from 192.168.50.21: icmp_seq=1 ttl=64 time=0.025 ms
64 bytes from 192.168.50.21: icmp_seq=2 ttl=64 time=0.021 ms

--- 192.168.50.21 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.021/0.023/0.025/0.002 ms
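If you want to tear the playground down (or start the exercise over), something along these lines should clean up what we created above:

ip netns del vm1
ip netns del vm2
ovs-vsctl del-port br-int vm1
ovs-vsctl del-port br-int vm2
ovn-nbctl ls-del network1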
In essence, what this post describes is similar to the OVN sandbox that comes out of the box with the OVS source code, as presented by Russell Bryant. However, the idea behind this tutorial is to serve as a base for setting up a simple OVN system as a playground or debugging environment without the burden of deploying OpenStack or any other CMS. In coming articles, we'll extend the deployment to more than one node and do more advanced stuff.