
OVN Cluster Interconnection

A new feature has recently been introduced in OVN that allows multiple clusters to be interconnected at the L3 level (here's a link to the series of patches). This can be useful for scenarios with multiple availability zones (or physical regions), or simply to achieve better scaling by having independent control planes while still allowing connectivity between workloads in separate zones.

Simplifying things, logical routers in each cluster can be connected via transit overlay networks. The interconnection layer is responsible for creating the transit switches in the IC database, which then become visible to the connected clusters. Each cluster can then connect its logical routers to the transit switches. More information can be found in the ovn-architecture manpage.
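For reference, the configuration boils down to a handful of commands. Below is a minimal sketch using the names from this setup (ts1, router_east, gw_east); the vagrant scripts automate all of this, so treat the exact sequence as illustrative:

# On the interconnection database node: create the transit switch.
ovn-ic-nbctl ts-add ts1

# On each cluster (east shown here): name the availability zone and
# connect the local logical router to the transit switch, which ovn-ic
# makes visible in the local NB database.
ovn-nbctl set NB_Global . name=east
ovn-nbctl lrp-add router_east lrp-router_east-ts1 aa:aa:aa:aa:aa:01 169.254.100.1/24
ovn-nbctl lsp-add ts1 lsp-ts1-router_east \
    -- lsp-set-type lsp-ts1-router_east router \
    -- lsp-set-addresses lsp-ts1-router_east router \
    -- lsp-set-options lsp-ts1-router_east router-port=lrp-router_east-ts1
ovn-nbctl lrp-set-gateway-chassis lrp-router_east-ts1 gw_east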

I created a vagrant setup to test it out and become a bit familiar with it. All you need to do to recreate it is clone the repository and run 'vagrant up' inside the ovn-interconnection folder:

https://github.com/danalsan/vagrants/tree/master/ovn-interconnection
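In other words (assuming Vagrant and its default provider are already installed):

git clone https://github.com/danalsan/vagrants
cd vagrants/ovn-interconnection
vagrant up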

This will deploy 7 CentOS machines (300MB of RAM each) with two separate OVN clusters (west & east) and the interconnection services. The layout is described in the image below:

Once the services are up and running, a few resources will be created on each cluster and the interconnection services will be configured with a transit switch between them:

Let's see, for example, the logical topology of the east availability zone, where the transit switch ts1 is listed along with the port in the remote west zone:

[root@central-east ~]# ovn-nbctl show
switch c850599c-263c-431b-b67f-13f4eab7a2d1 (ts1)
    port lsp-ts1-router_west
        type: remote
        addresses: ["aa:aa:aa:aa:aa:02 169.254.100.2/24"]
    port lsp-ts1-router_east
        type: router
        router-port: lrp-router_east-ts1
switch 8361d0e1-b23e-40a6-bd78-ea79b5717d7b (net_east)
    port net_east-router_east
        type: router
        router-port: router_east-net_east
    port vm1
        addresses: ["40:44:00:00:00:01 192.168.1.11"]
router b27d180d-669c-4ca8-ac95-82a822da2730 (router_east)
    port lrp-router_east-ts1
        mac: "aa:aa:aa:aa:aa:01"
        networks: ["169.254.100.1/24"]
        gateway chassis: [gw_east]
    port router_east-net_east
        mac: "40:44:00:00:00:04"
        networks: ["192.168.1.1/24"]

As for the Southbound database, we can see the gateway port for each router. In this setup I only have one gateway node but, like any other distributed gateway port in OVN, it could be scheduled on multiple nodes, providing HA:

[root@central-east ~]# ovn-sbctl show
Chassis worker_east
    hostname: worker-east
    Encap geneve
        ip: "192.168.50.100"
        options: {csum="true"}
    Port_Binding vm1
Chassis gw_east
    hostname: gw-east
    Encap geneve
        ip: "192.168.50.102"
        options: {csum="true"}
    Port_Binding cr-lrp-router_east-ts1
Chassis gw_west
    hostname: gw-west
    Encap geneve
        ip: "192.168.50.103"
        options: {csum="true"}
    Port_Binding lsp-ts1-router_west
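
Scheduling that gateway port on additional chassis would just mean adding them with different priorities. A quick sketch (gw_east2 here is a hypothetical second gateway node, not part of this setup):

# Higher priority wins; the port fails over to gw_east2 if gw_east goes down.
ovn-nbctl lrp-set-gateway-chassis lrp-router_east-ts1 gw_east 20
ovn-nbctl lrp-set-gateway-chassis lrp-router_east-ts1 gw_east2 10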

If we query the interconnection databases, we will see the transit switch in the IC NB database and the gateway ports of each zone in the IC SB database:

[root@central-ic ~]# ovn-ic-nbctl show
Transit_Switch ts1

[root@central-ic ~]# ovn-ic-sbctl show
availability-zone east
    gateway gw_east
        hostname: gw-east
        type: geneve
            ip: 192.168.50.102
        port lsp-ts1-router_east
            transit switch: ts1
            address: ["aa:aa:aa:aa:aa:01 169.254.100.1/24"]
availability-zone west
    gateway gw_west
        hostname: gw-west
        type: geneve
            ip: 192.168.50.103
        port lsp-ts1-router_west
            transit switch: ts1
            address: ["aa:aa:aa:aa:aa:02 169.254.100.2/24"]

With this topology, traffic flowing from vm1 to vm2 will flow from gw-east to gw-west through a Geneve tunnel. If we list the ports on each gateway, we should be able to see the tunnel ports. Needless to say, the gateways have to be mutually reachable so that the transit overlay network can be established:

[root@gw-west ~]# ovs-vsctl show
6386b867-a3c2-4888-8709-dacd6e2a7ea5
    Bridge br-int
        fail_mode: secure
        Port ovn-gw_eas-0
            Interface ovn-gw_eas-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="192.168.50.102"}

Now, when vm1 pings vm2, the traffic should flow like this:

(vm1) worker_east ==== gw_east ==== gw_west ==== worker_west (vm2).

Let's see it via the ovn-trace tool:

[root@central-east vagrant]# ovn-trace  --ovs --friendly-names --ct=new net_east  'inport == "vm1" && eth.src == 40:44:00:00:00:01 && eth.dst == 40:44:00:00:00:04 && ip4.src == 192.168.1.11 && ip4.dst == 192.168.2.12 && ip.ttl == 64 && icmp4.type == 8'


ingress(dp="net_east", inport="vm1")
...
egress(dp="net_east", inport="vm1", outport="net_east-router_east")
...
ingress(dp="router_east", inport="router_east-net_east")
...
egress(dp="router_east", inport="router_east-net_east", outport="lrp-router_east-ts1")
...
ingress(dp="ts1", inport="lsp-ts1-router_east")
...
egress(dp="ts1", inport="lsp-ts1-router_east", outport="lsp-ts1-router_west")
 9. ls_out_port_sec_l2 (ovn-northd.c:4543): outport == "lsp-ts1-router_west", priority 50, uuid c354da11
    output;
    /* output to "lsp-ts1-router_west", type "remote" */

Now let's capture Geneve traffic on both gateways while a ping between both VMs is running:

[root@gw-east ~]# tcpdump -i genev_sys_6081 -vvnee icmp
tcpdump: listening on genev_sys_6081, link-type EN10MB (Ethernet), capture size 262144 bytes
10:43:35.355772 aa:aa:aa:aa:aa:01 > aa:aa:aa:aa:aa:02, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 11379, offset 0, flags [DF], proto ICMP (1), length 84)
    192.168.1.11 > 192.168.2.12: ICMP echo request, id 5494, seq 40, length 64
10:43:35.356077 aa:aa:aa:aa:aa:01 > aa:aa:aa:aa:aa:02, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 11379, offset 0, flags [DF], proto ICMP (1), length 84)
    192.168.1.11 > 192.168.2.12: ICMP echo request, id 5494, seq 40, length 64
10:43:35.356442 aa:aa:aa:aa:aa:02 > aa:aa:aa:aa:aa:01, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 42610, offset 0, flags [none], proto ICMP (1), length 84)
    192.168.2.12 > 192.168.1.11: ICMP echo reply, id 5494, seq 40, length 64
10:43:35.356734 40:44:00:00:00:04 > 40:44:00:00:00:01, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 62, id 42610, offset 0, flags [none], proto ICMP (1), length 84)
    192.168.2.12 > 192.168.1.11: ICMP echo reply, id 5494, seq 40, length 64


[root@gw-west ~]# tcpdump -i genev_sys_6081 -vvnee icmp
tcpdump: listening on genev_sys_6081, link-type EN10MB (Ethernet), capture size 262144 bytes
10:43:29.169532 aa:aa:aa:aa:aa:01 > aa:aa:aa:aa:aa:02, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 8875, offset 0, flags [DF], proto ICMP (1), length 84)
    192.168.1.11 > 192.168.2.12: ICMP echo request, id 5494, seq 34, length 64
10:43:29.170058 40:44:00:00:00:10 > 40:44:00:00:00:02, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 62, id 8875, offset 0, flags [DF], proto ICMP (1), length 84)
    192.168.1.11 > 192.168.2.12: ICMP echo request, id 5494, seq 34, length 64
10:43:29.170308 aa:aa:aa:aa:aa:02 > aa:aa:aa:aa:aa:01, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 38667, offset 0, flags [none], proto ICMP (1), length 84)
    192.168.2.12 > 192.168.1.11: ICMP echo reply, id 5494, seq 34, length 64
10:43:29.170476 aa:aa:aa:aa:aa:02 > aa:aa:aa:aa:aa:01, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 38667, offset 0, flags [none], proto ICMP (1), length 84)
    192.168.2.12 > 192.168.1.11: ICMP echo reply, id 5494, seq 34, length 64

You can observe that the ICMP traffic flows between the transit switch ports (aa:aa:aa:aa:aa:02 <> aa:aa:aa:aa:aa:01), traversing both zones.

Also, as the packet has gone through two routers (router_east and router_west), the TTL at the destination has been decremented twice (from 64 to 62):

[root@worker-west ~]# ip net e vm2 tcpdump -i any icmp -vvne
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
10:49:32.491674  In 40:44:00:00:00:10 ethertype IPv4 (0x0800), length 100: (tos 0x0, ttl 62, id 57504, offset 0, flags [DF], proto ICMP (1), length 84)

This is a really great feature that opens up a lot of possibilities for cluster interconnection and scaling. However, bear in mind that it requires another layer of management to handle isolation (multitenancy) and avoid overlapping IP addresses across the connected availability zones.

OVN - Geneve Encapsulation

In the last post we created a Logical Switch with two ports residing on different hypervisors. Communication between those two ports took place over the tunnel interface using Geneve encapsulation. Let's now take a closer look at this overlay traffic.
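
As a refresher, the topology amounts to something like the following (a sketch using the names and addresses that show up in the captures below):

# One Logical Switch with two ports, bound to VMs on different hypervisors.
ovn-nbctl ls-add network1
ovn-nbctl lsp-add network1 vm1 -- lsp-set-addresses vm1 "40:44:00:00:00:01 192.168.0.11"
ovn-nbctl lsp-add network1 vm2 -- lsp-set-addresses vm2 "40:44:00:00:00:02 192.168.0.12"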

Without diving too deep into packet processing in OVN, we need to know that each Logical Datapath (Logical Switch / Logical Router) has an ingress and an egress pipeline. Whenever a packet comes in, the ingress pipeline is executed and, after the output action, the egress pipeline runs to deliver the packet to its destination. More info here: http://docs.openvswitch.org/en/latest/faq/ovn/#ovn

In our scenario, when we ping from VM1 to VM2, the ingress pipeline for each ICMP packet runs on Worker1 (where VM1 is bound) and the packet is pushed through the tunnel interface to Worker2 (where VM2 resides). When Worker2 receives the packet on its physical interface, the egress pipeline of the Logical Switch (network1) is executed to deliver the packet to VM2. But how does OVN know where the packet comes from and which Logical Datapath should process it? This is where the metadata in the Geneve headers comes in.
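
You can inspect both pipelines of a logical datapath directly from the Southbound database; for instance, for network1 (output elided; the flows are printed grouped by ingress or egress pipeline, table by table):

[root@central ~]# ovn-sbctl lflow-list network1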

Let's get back to our setup and ping from VM1 to VM2 and capture traffic on the physical interface (eth1) of Worker2:

[root@worker2 ~]# sudo tcpdump -i eth1 -vvvnnexx

17:02:13.403229 52:54:00:13:e0:a2 > 52:54:00:ac:67:5b, ethertype IPv4 (0x0800), length 156: (tos 0x0, ttl 64, id 63920, offset 0, flags [DF], proto UDP (17), length 142)
    192.168.50.100.7549 > 192.168.50.101.6081: [bad udp cksum 0xe6a5 -> 0x7177!] Geneve, Flags [C], vni 0x1, proto TEB (0x6558), options [class Open Virtual Networking (OVN) (0x102) type 0x80(C) len 8 data 00010002]
        40:44:00:00:00:01 > 40:44:00:00:00:02, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 41968, offset 0, flags [DF], proto ICMP (1), length 84)
    192.168.0.11 > 192.168.0.12: ICMP echo request, id 1251, seq 6897, length 64
        0x0000:  5254 00ac 675b 5254 0013 e0a2 0800 4500
        0x0010:  008e f9b0 4000 4011 5a94 c0a8 3264 c0a8
        0x0020:  3265 1d7d 17c1 007a e6a5 0240 6558 0000
        0x0030:  0100 0102 8001 0001 0002 4044 0000 0002
        0x0040:  4044 0000 0001 0800 4500 0054 a3f0 4000
        0x0050:  4001 1551 c0a8 000b c0a8 000c 0800 c67b
        0x0060:  04e3 1af1 94d9 6e5c 0000 0000 41a7 0e00
        0x0070:  0000 0000 1011 1213 1415 1617 1819 1a1b
        0x0080:  1c1d 1e1f 2021 2223 2425 2627 2829 2a2b
        0x0090:  2c2d 2e2f 3031 3233 3435 3637

17:02:13.403268 52:54:00:ac:67:5b > 52:54:00:13:e0:a2, ethertype IPv4 (0x0800), length 156: (tos 0x0, ttl 64, id 46181, offset 0, flags [DF], proto UDP (17), length 142)
    192.168.50.101.9683 > 192.168.50.100.6081: [bad udp cksum 0xe6a5 -> 0x6921!] Geneve, Flags [C], vni 0x1, proto TEB (0x6558), options [class Open Virtual Networking (OVN) (0x102) type 0x80(C) len 8 data 00020001]
        40:44:00:00:00:02 > 40:44:00:00:00:01, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 16422, offset 0, flags [none], proto ICMP (1), length 84)
    192.168.0.12 > 192.168.0.11: ICMP echo reply, id 1251, seq 6897, length 64
        0x0000:  5254 0013 e0a2 5254 00ac 675b 0800 4500
        0x0010:  008e b465 4000 4011 9fdf c0a8 3265 c0a8
        0x0020:  3264 25d3 17c1 007a e6a5 0240 6558 0000
        0x0030:  0100 0102 8001 0002 0001 4044 0000 0001
        0x0040:  4044 0000 0002 0800 4500 0054 4026 0000
        0x0050:  4001 b91b c0a8 000c c0a8 000b 0000 ce7b
        0x0060:  04e3 1af1 94d9 6e5c 0000 0000 41a7 0e00
        0x0070:  0000 0000 1011 1213 1415 1617 1819 1a1b
        0x0080:  1c1d 1e1f 2021 2223 2425 2627 2829 2a2b
        0x0090:  2c2d 2e2f 3031 3233 3435 3637

Let's now decode the ICMP request packet (I'm using this tool):

ICMP request inside the Geneve tunnel

Metadata

In the ovn-architecture(7) document, you can check how the Metadata is used in OVN in the Tunnel Encapsulations section. In short, OVN encodes the following information in the Geneve packets:

  • Logical Datapath (switch/router) identifier (24 bits) - Geneve VNI
  • Ingress and Egress port identifiers - Option with class 0x0102 and type 0x80 with 32 bits of data:
         1       15          16
       +---+------------+-----------+
       |rsv|ingress port|egress port|
       +---+------------+-----------+
         0

Back to our example: VNI = 0x000001 and Option Data = 00010002, so from the above:

Logical Datapath = 1   Ingress Port = 1   Egress Port = 2
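
As a quick sanity check, the same decoding can be reproduced with shell arithmetic (a throwaway sketch, not an OVN tool):

data=0x00010002                       # OVN Geneve option data from the capture
echo $(( (data >> 16) & 0x7fff ))     # ingress port tunnel key -> 1
echo $(( data & 0xffff ))             # egress port tunnel key  -> 2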

Let's take a look at the SB database contents to see if they match what we expect:

[root@central ~]# ovn-sbctl get Datapath_Binding network1 tunnel-key
1

[root@central ~]# ovn-sbctl get Port_Binding vm1 tunnel-key
1

[root@central ~]# ovn-sbctl get Port_Binding vm2 tunnel-key
2

We can see that the Logical Datapath belongs to network1, that the ingress port is vm1 and that the output port is vm2, which makes sense as we're analyzing the ICMP request from VM1 to VM2.

By the time this packet hits the Worker2 hypervisor, OVN has all the information needed to process it in the right pipeline and deliver the packet to VM2 without having to run the ingress pipeline again.

What if we don't use any encapsulation?

This is technically possible in OVN, and there are scenarios for it, such as managing a physical network directly without any kind of overlay technology. In that case, our ICMP request packet would have been pushed directly onto the network and, when Worker2 received it, OVN would need to figure out (based on the IP/MAC addresses) which ingress pipeline to execute (for the second time, as it was already executed by Worker1) before the egress pipeline could run and deliver the packet to VM2.
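
For completeness, the encapsulation each chassis uses is part of the local Open vSwitch configuration; a minimal sketch, assuming the standard external-ids keys read by ovn-controller (the IP is Worker2's tunnel endpoint from this setup):

# On each hypervisor: select the tunnel encapsulation (geneve here)
# and the local tunnel endpoint IP.
ovs-vsctl set Open_vSwitch . \
    external-ids:ovn-encap-type=geneve \
    external-ids:ovn-encap-ip=192.168.50.101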