Tag: geneve

Learning OVN and BGP with Claude AI

Recently, OVN introduced BGP support, and since I did not have the time to follow the development, I decided to enlist the help of AI (the Claude Code CLI) to learn about the new feature. The path I chose here is:

  1. Run one of the existing BGP tests (ovn multinode bgp unnumbered)
  2. Use AI (claude-sonnet-4.5 model) to help me understand what the test does
  3. Identify the parts of the codebase that implement some of the bits in the test
  4. Do a small modification to the test to do something new

1. Run the BGP test from the OVN sources

To run this test, I first need to deploy an ovn-fake-multinode setup with 4 chassis and 4 gateways. I did this simply by following the instructions in the README.md file.

$ sudo -E CHASSIS_COUNT=4 GW_COUNT=4 ./ovn_cluster.sh start

After deploying the cluster, you’ll have multiple containers running, but only ovn-central-az1, ovn-gw-1, and ovn-gw-2 are relevant to this test. The cluster comes with some OVN resources already defined, but the test cleans them up on start.

I did not want to spend much time investigating how these containers are wired, so I asked Claude to read the sources and produce a diagram for me. It also helped me render it into a PNG file using ImageMagick :p

Essentially, it creates two OVS bridges on my host: one connecting eth1 of each container for the underlay and one connecting eth2 for the dataplane. The eth0 interface is connected to the podman network for management.
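If you want to see this wiring from the host, OVS can show it directly. A couple of quick checks (bridge names come from the ovn-fake-multinode defaults, so they may differ in your setup):

# On the host: the OVS bridges and the per-container ports attached to them
$ sudo ovs-vsctl show

# Inside a container: eth0 (management), eth1 (underlay) and eth2 (dataplane)
$ podman exec ovn-gw-1 ip -br link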

Now we run the test and stop it before the cleanup. You can do this by adding a check false right before the cleanup and then executing the test like this:

OVS_PAUSE_TEST=1 make check-multinode TESTSUITEFLAGS="-k 'ovn multinode bgp unnumbered' -v"
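For reference, the pause point is literally just a forced failure added near the end of the test case in tests/multinode.at (a sketch; place it right before the cleanup, i.e. just after the checks at multinode.at:2959-2961 shown in the log below):

# tests/multinode.at, "ovn multinode bgp unnumbered" test, right before the cleanup
check false

Combined with OVS_PAUSE_TEST=1, this forced failure is what makes the testsuite pause (the "Press ENTER to continue" prompt below) instead of tearing the environment down.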

Once the test stops, you can see that the relevant checks completed successfully (it was able to ping the Floating IP, 172.16.10.2), and the containers are left running with the configuration and resources created by the test.

multinode.at:2959: waiting until m_as ovn-gw-1 ip netns exec frr-ns ping -W 1 -c 1 172.16.10.2...
PING 172.16.10.2 (172.16.10.2) 56(84) bytes of data.
64 bytes from 172.16.10.2: icmp_seq=1 ttl=62 time=2.32 ms

--- 172.16.10.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 2.322/2.322/2.322/0.000 ms
multinode.at:2959: wait succeeded immediately
multinode.at:2960: waiting until m_as ovn-gw-2 ip netns exec frr-ns ip route | grep -q 'ext1'...
multinode.at:2960: wait succeeded immediately
multinode.at:2961: waiting until m_as ovn-gw-2 ip netns exec frr-ns ping -W 1 -c 1 172.16.10.2...
PING 172.16.10.2 (172.16.10.2) 56(84) bytes of data.
64 bytes from 172.16.10.2: icmp_seq=1 ttl=62 time=1.77 ms

--- 172.16.10.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.773/1.773/1.773/0.000 ms
multinode.at:2961: wait succeeded immediately
false
./ovn-macros.at:856: "$@"
./ovn-macros.at:856: exit code was 1, expected 0
=====================================================
Set following environment variable to use various ovs utilities
export OVS_RUNDIR=/root/ovn/tests/multinode-testsuite.dir/17
Press ENTER to continue:


$ podman exec ovn-central-az1 ovn-nbctl show | grep -E '^(router|switch)'
switch 05e453e9-f5b2-47b9-9eb6-10ef6ea8a08c (ls-ovn-gw-2-ext0)
switch 8a12c075-6367-46b0-ac5a-88346450fc60 (ls-ovn-gw-1-ext0)
switch 3cc96979-1aab-4cd3-a84b-6f5f36bd4a42 (ls-guest-ovn-gw-1)
switch e24ad9c9-ef80-4686-96eb-88ef47d9bc01 (ls-guest-ovn-gw-2)
switch 9a279d1b-0929-4386-a704-fb9faaf6dfa6 (ls-join)
router e0a37965-c854-4c17-a4bb-4401d61c48a6 (lr-guest)
router 1f81175b-1168-4956-b7a8-64c85c07af4f (lr-ovn-gw-1-ext0)
router ce2161ff-ea11-4fb0-954b-75f0cf26f3d7 (lr-ovn-gw-2-ext0)

2. Use Claude AI to understand what the test does

At this point I read the test and can see that there’s a fake VM with a Floating IP (172.16.10.2) that’s reachable through the simulated ToR switches. The ToRs learn the route via BGP, but I struggled to understand the exact topology and configuration because the test uses several layers of macros that call other macros.

Claude to the rescue!

I used Claude to help me:

  • Trace through the macro definitions to understand what topology gets created
  • Identify which components run where (containers, namespaces, VRFs)
  • Map out the BGP peering relationships and AS numbers
  • Understand how FRR is configured on both the ToRs and OVN gateways
  • Figure out why there are two separate FRR instances running on the same container

The key insight Claude helped me understand was that the test simulates both the ToR switches and the OVN gateways within the same container nodes, using:
  • Network namespaces (frr-ns) to isolate the ToR’s network stack
  • Two separate FRR instances – one for the ToR (in frr-ns) and one for the OVN gateway (in default namespace, using VRFs)
  • BGP unnumbered peering using IPv6 link-local addresses between them

As a bonus, I asked Claude to draw a couple of diagrams for me to better understand the topology that this test creates:

 

Since I stopped the test before the cleanup, I told Claude how to access the remote lab and asked it to inspect the live environment via SSH commands.

NOTE: Before I got the answer below, I had been asking Claude things about the macros that the test uses, and it had been unpacking and reading a lot of the OVN code that I had checked out locally. I re-ran this on a fresh session and it failed to figure out things like who creates the VRFs where the blackhole routes are installed, so it required a few more follow-up questions before I arrived at the right answers.

> I would like you to inspect the live deployment and show how the routes get advertised (eg. listing the different OVN tables, FRR configuration, etcetera). You can access the live environment by running 'ssh ovnlab' directly and using 'sudo'.


⏺ Perfect! Let me trace through exactly how the Floating IP (172.16.10.2) gets advertised from OVN through to the external ToR switches via BGP.

  ---  Step 1: NAT Configuration in OVN Northbound

  First, the NAT rule is configured on the guest router:

  $ podman exec ovn-central-az1 ovn-nbctl list NAT
  _uuid               : 4305c6db-b431-400d-b3e8-248d920394fc
  external_ip         : "172.16.10.2"
  logical_ip          : "192.168.10.10"
  type                : dnat_and_snat
  gateway_port        : lrp-guest-join-dgp
  options             : {add_route="true", ...}

  Key points:
  - external_ip: 172.16.10.2 (the Floating IP)
  - logical_ip: 192.168.10.10 (the internal VM IP)
  - gateway_port: Associated with the Distributed Gateway Port (DGP)
  - add_route=true: Tells OVN to create a route for this NAT entry

  Step 2: Dynamic Routing and VRF Configuration

  The gateway routers are configured with dynamic routing and VRF settings:

  $ podman exec ovn-central-az1 ovn-nbctl get logical_router lr-ovn-gw-1-ext0 options
  {chassis=ovn-gw-1, dynamic-routing="true", dynamic-routing-vrf-id="10"}

  $ podman exec ovn-central-az1 ovn-nbctl get logical_router_port lrp-ovn-gw-1-ext0 options
  {dynamic-routing-maintain-vrf="true", dynamic-routing-redistribute=nat,
   routing-protocol-redirect=lsp-ovn-gw-1-ext0-bgp, routing-protocols="BGP,BFD"}

  This configuration tells OVN:
  - Enable dynamic routing on lr-ovn-gw-1-ext0
  - Use VRF ID 10 (will create VRF named ovnvrf10)
  - dynamic-routing-maintain-vrf=true on the router port - This is the key setting that tells ovn-controller to create and maintain the VRF in the kernel
  - Redistribute NAT routes via the router port
  - Redirect BGP protocol traffic to the lsp-ovn-gw-1-ext0-bgp port

  When ovn-controller on the ovn-gw-1 chassis sees the dynamic-routing-maintain-vrf=true option, it automatically creates the VRF in the kernel:

  $ podman exec ovn-gw-1 ip link show type vrf
  11: ovnvrf10: <NOARP,MASTER,UP,LOWER_UP> mtu 65575 qdisc noqueue state UP mode DEFAULT group default qlen 1000
      link/ether 7a:7c:6f:2c:0d:22 brd ff:ff:ff:ff:ff:ff

  The VRF name is constructed as ovnvrf + dynamic-routing-vrf-id (10 in this case).

  Similarly, on ovn-gw-2, the VRF ID 20 creates ovnvrf20.

  Step 3: Advertised Routes in OVN Southbound

  OVN northd processes the NAT configuration and creates advertised route entries:

  $ podman exec ovn-central-az1 ovn-sbctl list Advertised_Route
  _uuid               : 360a360a-4ff4-4e87-888f-80e7c3b55e0a
  ip_prefix           : "172.16.10.2"
  logical_port        : lrp-ovn-gw-1-to-join
  ...

  _uuid               : d3a74c47-2ad9-471e-8b5e-e186ab774f21
  ip_prefix           : "172.16.10.2"
  logical_port        : lrp-ovn-gw-2-to-join
  ...

  Both gateway routers have advertised route entries for 172.16.10.2 because:
  - Each gateway router (lr-ovn-gw-1-ext0, lr-ovn-gw-2-ext0) has dynamic-routing-redistribute=nat configured on its port connecting to the join switch
  - The NAT entry from lr-guest gets redistributed to both gateway routers
  - Each gateway then advertises the FIP to its respective ToR (ovn-ext-1 and ovn-ext-2)
  - This provides connectivity from both external ASNs (ASN 4200000100 and ASN 4200000200) to reach the FIP

  The DGP location only matters for actual traffic forwarding, not for route advertisement. Both ToRs can advertise reachability to 172.16.10.2, and when traffic arrives at either gateway, OVN's overlay handles forwarding it to whichever chassis actually hosts the DGP.

  Step 4: Blackhole Routes in the Kernel

  OVN-controller on each gateway node installs blackhole routes in the VRF:

  $ podman exec ovn-gw-1 ip route show vrf ovnvrf10
  blackhole 172.16.10.2 proto 84 metric 1000

  $ podman exec ovn-gw-2 ip route show vrf ovnvrf20
  blackhole 172.16.10.2 proto 84 metric 1000

  These blackhole routes serve as markers for FRR to detect and advertise via BGP. They don't actually route traffic (that's handled by OVN flows), but FRR's redistribute kernel configuration picks them up.

  Step 5: FRR BGP Advertisement (OVN Gateway Side)

  FRR on the OVN gateway sees the kernel route and advertises it to its BGP neighbor:

  $ podman exec ovn-gw-1 vtysh --vty_socket /run/frr/ -c "show bgp vrf ovnvrf10 ipv4"
  BGP table version is 1, local router ID is 14.14.14.14, vrf id 11
  Default local pref 100, local AS 4210000000

       Network          Next Hop            Metric LocPrf Weight Path
   *>  172.16.10.2/32   0.0.0.0               1000         32768 ?

  The route shows:
  - Next Hop: 0.0.0.0 - locally originated (from kernel redistribution)
  - Weight: 32768 - default weight for local routes
  - Path: ? - incomplete origin (redistributed from kernel)

  Step 6: BGP Peering (Unnumbered)

  The OVN gateway and ToR are peered using BGP unnumbered (IPv6 link-local addresses):

  $ podman exec ovn-gw-1 vtysh --vty_socket /run/frr/ -c "show bgp vrf ovnvrf10 neighbors ext0-bgp"
  BGP neighbor on ext0-bgp: fe80::a8ed:baff:feb6:585e, remote AS 4200000100, local AS 4210000000
  BGP state = Established, up for 1d06h12m
  ...
  Prefix statistics:
    0 accepted, 1 sent prefixes

  Key details:
  - Neighbor: fe80::a8ed:baff:feb6:585e (IPv6 link-local - unnumbered)
  - Remote AS: 4200000100 (the ToR's ASN)
  - Local AS: 4210000000 (OVN gateway's ASN)
  - Sent prefixes: 1 (the 172.16.10.2/32 route)

  Step 7: FRR BGP Learning (ToR Side)

  The ToR's FRR instance receives and accepts the route:

  $ podman exec ovn-gw-1 vtysh --vty_socket /run/frr/frr-ns/ -c "show bgp ipv4"
  BGP table version is 1, local router ID is 41.41.41.41, vrf id 0
  Default local pref 100, local AS 4200000100

       Network          Next Hop            Metric LocPrf Weight Path
   *>  172.16.10.2/32   ext1                  1000             0 4210000000 ?

  The ToR now knows:
  - Network: 172.16.10.2/32
  - Next Hop: ext1 (the interface toward the OVN gateway)
  - Metric: 1000 (inherited from the advertised route)
  - ASN Path: 4210000000 (learned from the OVN gateway's ASN)

  Summary: Complete Advertisement Chain

  1. NAT Config (NB)
     ↓
  2. Dynamic Routing + VRF Config (NB)
     ↓
  3. VRF Creation (Kernel) ← ovn-controller sees dynamic-routing-maintain-vrf=true
     ↓
  4. Advertised Routes (SB) ← northd processing
     ↓
  5. Blackhole Routes (Kernel VRF) ← ovn-controller
     ↓
  6. BGP Table (FRR Gateway) ← redistribute kernel
     ↓
  7. BGP Peering (IPv6 LLA)
     ↓
  8. BGP Table (FRR ToR) ← received from peer

  This shows the complete journey of how a NAT entry in OVN becomes a BGP route advertised to external infrastructure, enabling seamless integration between OVN's virtual networking and physical network routing.
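One piece the explanation above leaves implicit is the FRR configuration on the OVN gateway side. Judging from the vtysh outputs above and the configuration shown later in this post, it boils down to a per-VRF BGP instance that redistributes the kernel (blackhole) routes to the unnumbered peer, roughly like this (a sketch; the exact frr.conf generated by the test may differ):

router bgp 4210000000 vrf ovnvrf10
  no bgp ebgp-requires-policy
  neighbor ext0-bgp interface remote-as external
  address-family ipv4 unicast
    redistribute kernel
  exit-address-family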

3. Identify the parts of the codebase that implement some of the bits in the test

At this point I believe I understand what the test does and how the BGP implementation works on the surface. To get a deeper understanding of certain parts of the code, I want to see how and when the VRF is created by ovn-controller. Since Claude has access to the OVN sources and my live environment, it should be able to give me good pointers 🙂

> I want to get some details (source code level) about how and when ovn-controller creates the VRF where it installs the blackhole routes.


⏺ Let me trace through the source code to show you exactly how and when ovn-controller creates the VRF and installs the blackhole routes.

⏺ Read(ovn/controller/route-exchange.c)
  ⎿  Read 100 lines

⏺ Search(pattern: "vrf.*create|create.*vrf", path: "ovn/controller", output_mode: "content")
  ⎿  Found 5 lines (ctrl+o to expand)

⏺ Read(ovn/controller/route-exchange.c)
  ⎿  Read 150 lines

⏺ Read(ovn/controller/route-exchange-netlink.c)
  ⎿  Read 80 lines

⏺ Now let me check where the advertise_datapath_entry gets populated with the maintain_vrf flag:

⏺ Search(pattern: "maintain_vrf.*=|->maintain_vrf", path: "ovn/controller", output_mode: "content")
  ⎿  Found 2 lines (ctrl+o to expand)

⏺ Read(ovn/controller/route.c)
  ⎿  Read 50 lines

⏺ Perfect! Now let me check how advertised routes get added:

⏺ Search(pattern: "Advertised_Route|advertised.*route", path: "ovn/controller/route.c", output_mode: "content")
  ⎿  Found 3 lines (ctrl+o to expand)

⏺ Read(ovn/controller/route.c)
  ⎿  Read 80 lines

⏺ Perfect! Now I have the complete picture. Let me create a detailed source code-level explanation:

  ---
  Source Code Deep Dive: VRF Creation and Blackhole Route Installation

  Overview

  The VRF creation and blackhole route installation happens in ovn-controller through a multi-step process coordinated between several source files. Let me trace through the exact code path.

  Step 1: Checking for dynamic-routing-maintain-vrf Option

  File: controller/route.c (lines 206-209)

  ad->maintain_vrf |=
      smap_get_bool(&repb->options,
                    "dynamic-routing-maintain-vrf",
                    false);

  What happens here:
  - ovn-controller iterates through all router port bindings (repb) that have dynamic routing enabled
  - For each port, it checks if the dynamic-routing-maintain-vrf option is set to true
  - If found, it sets the maintain_vrf flag in the advertise_datapath_entry structure
  - This flag tells ovn-controller that it is responsible for creating and managing the VRF

  Step 2: Determining VRF Name

  File: controller/route.c (lines 211-225)

  const char *vrf_name = smap_get(&repb->options,
                                  "dynamic-routing-vrf-name");
  if (vrf_name && strlen(vrf_name) >= IFNAMSIZ) {
      // Warn and ignore if name is too long
      vrf_name = NULL;
  }
  if (vrf_name) {
      memcpy(ad->vrf_name, vrf_name, strlen(vrf_name) + 1);
  } else {
      snprintf(ad->vrf_name, sizeof ad->vrf_name, "ovnvrf%"PRIu32,
               route_get_table_id(ad->db));
  }

  What happens here:
  - First checks if a custom VRF name is specified via dynamic-routing-vrf-name option
  - If no custom name is provided, constructs the default name as ovnvrf + VRF ID
  - For example, with dynamic-routing-vrf-id=10, it creates ovnvrf10

  Step 3: Creating the VRF

  File: controller/route-exchange.c (lines 265-277)

  if (ad->maintain_vrf) {
      if (!sset_contains(&old_maintained_vrfs, ad->vrf_name)) {
          error = re_nl_create_vrf(ad->vrf_name, table_id);
          if (error && error != EEXIST) {
              VLOG_WARN_RL(&rl,
                           "Unable to create VRF %s for datapath "
                           UUID_FMT": %s.", ad->vrf_name,
                           UUID_ARGS(&ad->db->header_.uuid),
                           ovs_strerror(error));
              SET_ROUTE_EXCHANGE_NL_STATUS(error);
              continue;
          }
      }
      sset_add(&_maintained_vrfs, ad->vrf_name);
  }

  What happens here:
  - During the route_exchange_run() function execution (called on every ovn-controller iteration)
  - Checks if the VRF was already created in a previous iteration (by checking old_maintained_vrfs)
  - If not, calls re_nl_create_vrf() to create it via netlink
  - Adds the VRF name to _maintained_vrfs set to track it
  - If VRF already exists (EEXIST error), silently continues (this is normal)

  Step 4: Netlink VRF Creation

  File: controller/route-exchange-netlink.c (lines 42-77)

  int
  re_nl_create_vrf(const char *ifname, uint32_t table_id)
  {
      if (!TABLE_ID_VALID(table_id)) {
          // ... error handling
          return EINVAL;
      }

      // Prepare netlink message
      struct ofpbuf request;
      uint8_t request_stub[NETNL_REQ_BUFFER_SIZE];
      ofpbuf_use_stub(&request, request_stub, sizeof(request_stub));

      // Create RTM_NEWLINK message with VRF attributes
      nl_msg_put_nlmsghdr(&request, 0, RTM_NEWLINK,
                          NLM_F_REQUEST | NLM_F_ACK | NLM_F_CREATE | NLM_F_EXCL);
      ifinfo = ofpbuf_put_zeros(&request, sizeof *ifinfo);
      nl_msg_put_string(&request, IFLA_IFNAME, ifname);  // VRF name

      ifinfo->ifi_change = ifinfo->ifi_flags = IFF_UP;   // Bring it UP
      linkinfo_off = nl_msg_start_nested(&request, IFLA_LINKINFO);
      nl_msg_put_string(&request, IFLA_INFO_KIND, "vrf");  // Type: VRF
      infodata_off = nl_msg_start_nested(&request, IFLA_INFO_DATA);
      nl_msg_put_u32(&request, IFLA_VRF_TABLE, table_id);  // VRF table ID
      nl_msg_end_nested(&request, infodata_off);
      nl_msg_end_nested(&request, linkinfo_off);

      // Send to kernel via netlink
      err = nl_transact(NETLINK_ROUTE, &request, NULL);

      ofpbuf_uninit(&request);
      return err;
  }

  What happens here:
  - Constructs a netlink RTM_NEWLINK message to create a new network interface
  - Sets the interface type to "vrf"
  - Associates it with the routing table ID (from dynamic-routing-vrf-id)
  - Sets the interface to UP state immediately
  - Sends the netlink message to the kernel's routing subsystem
  - The kernel creates the VRF device (e.g., ovnvrf10)

  Step 5: Reading Advertised Routes from Southbound DB

  File: controller/route.c (lines 267-295)

  const struct sbrec_advertised_route *route;
  SBREC_ADVERTISED_ROUTE_TABLE_FOR_EACH (route,
                                         r_ctx_in->advertised_route_table) {
      struct advertise_datapath_entry *ad =
          advertise_datapath_find(r_ctx_out->announce_routes,
                                  route->datapath);
      if (!ad) {
          continue;
      }

      struct in6_addr prefix;
      unsigned int plen;
      if (!ip46_parse_cidr(route->ip_prefix, &prefix, &plen)) {
          // ... error handling
          continue;
      }

      if (!lport_is_local(r_ctx_in->sbrec_port_binding_by_name,
                          r_ctx_in->chassis,
                          route->logical_port->logical_port)) {
          // Skip routes for ports not on this chassis
          continue;
      }

      // Add route to the advertise_datapath_entry
      struct advertise_route_entry *ar = xmalloc(sizeof(*ar));
      ar->addr = prefix;
      ar->plen = plen;
      ar->priority = priority;
      hmap_insert(&ad->routes, &ar->node,
                  advertise_route_hash(&prefix, plen));
  }

  What happens here:
  - Reads all Advertised_Route entries from the Southbound database
  - These are created by northd when it processes NAT rules with redistribution enabled
  - Filters to only routes whose logical port is bound to this chassis
  - Builds a hash map of routes to be installed in the kernel

  Step 6: Installing Blackhole Routes

  File: controller/route-exchange-netlink.c (lines 98-121)

  static int
  modify_route(uint32_t type, uint32_t flags_arg, uint32_t table_id,
               const struct in6_addr *dst, unsigned int plen,
               unsigned int priority)
  {
      uint32_t flags = NLM_F_REQUEST | NLM_F_ACK;
      bool is_ipv4 = IN6_IS_ADDR_V4MAPPED(dst);
      struct rtmsg *rt;

      flags |= flags_arg;

      struct ofpbuf request;
      // ... buffer initialization

      nl_msg_put_nlmsghdr(&request, 0, type, flags);
      rt = ofpbuf_put_zeros(&request, sizeof *rt);
      rt->rtm_family = is_ipv4 ? AF_INET : AF_INET6;
      rt->rtm_table = RT_TABLE_UNSPEC;
      rt->rtm_protocol = RTPROT_OVN;      // Mark as OVN-managed (proto 84)
      rt->rtm_type = RTN_BLACKHOLE;       // ← BLACKHOLE route type
      rt->rtm_scope = RT_SCOPE_UNIVERSE;

  What happens here:
  - For each advertised route, constructs a netlink RTM_NEWROUTE message
  - Sets route type to RTN_BLACKHOLE - this is the key!
  - Sets protocol to RTPROT_OVN (protocol 84) so they can be identified later
  - Sends to kernel, which installs the route in the specified VRF table

  Step 7: FRR Picks Up the Routes

  Once the blackhole routes are installed in the kernel VRF, FRR's redistribute kernel configuration (in the VRF BGP context) automatically detects them and advertises via BGP.

  Key Timing and Lifecycle

  1. When VRF is created: On first ovn-controller iteration after seeing dynamic-routing-maintain-vrf=true
  2. When routes are installed: Every time ovn-controller runs and processes Advertised_Route entries
  3. Route updates: Synchronized continuously - stale routes are removed, new ones added
  4. VRF cleanup: When dynamic-routing-maintain-vrf is removed or chassis loses binding, VRF is deleted

  Why Blackhole Routes?

  The routes are marked as RTN_BLACKHOLE because:
  - They are markers for FRR to advertise, not for actual packet forwarding
  - OVN handles the actual packet forwarding through OpenFlow rules
  - If a packet somehow reaches the kernel and matches one of these routes, it will be dropped (blackholed)
  - This prevents routing loops and ensures all traffic goes through OVN's datapath

Impressive, right?
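As a quick sanity check against the live environment, you can list just the routes that ovn-controller installed in a VRF by filtering on the OVN routing protocol number (84, i.e. RTPROT_OVN, as seen in route-exchange-netlink.c above):

$ podman exec ovn-gw-1 ip route show vrf ovnvrf10 proto 84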

Let’s now try to do something with the environment!

4. Modify the environment to advertise internal IPs via BGP

One potential modification to this test is to advertise the internal IP address of the guest VM (192.168.10.10) and ping it from the ToR. Instead of changing the code of the test, I’ll modify the live environment to advertise the route to the VM, and let Claude hold my hand through it.

First, verify that the VM is not reachable.

# podman exec ovn-gw-2 ip netns exec frr-ns ping -c 1 192.168.10.10
ping: connect: Network is unreachable

Now, since the ToRs are on different ASNs and are not connected to each other, we will only be able to access the VM from the ToR where the DGP is bound. In our environment, the active gateway is on the gw-2 node. In a typical datacenter with a spine-and-leaf topology, the ToRs would be in the same AS and connected through the spine, so we could reach the VM from anywhere in the network.
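To double-check which chassis currently hosts the DGP, you can look for its chassisredirect port in the Southbound database (assuming the usual cr-<port-name> naming, so cr-lrp-guest-join-dgp here):

$ podman exec ovn-central-az1 ovn-sbctl show | grep -E 'Chassis|cr-lrp-guest-join-dgp'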

> I would like the lr-guest to advertise the route to 192.168.10.10, how can i enable that option?

⏺ To make lr-guest advertise its connected routes, you need to configure dynamic routing on lr-guest itself and on its router ports. Here's how:

  Step 1: Enable dynamic routing on lr-guest

  podman exec ovn-central-az1 ovn-nbctl set logical_router lr-guest \
      options:dynamic-routing=true \
      options:dynamic-routing-vrf-id=100

  Step 2: Configure the router ports to redistribute connected routes

  You need to set dynamic-routing-redistribute on the ports where you want to advertise routes from:

  # Advertise from the port connected to ls-guest-ovn-gw-1
  podman exec ovn-central-az1 ovn-nbctl set logical_router_port lrp-guest-ls-guest-ovn-gw-1 \
      options:dynamic-routing-redistribute=connected

  # Advertise from the port connected to ls-guest-ovn-gw-2
  podman exec ovn-central-az1 ovn-nbctl set logical_router_port lrp-guest-ls-guest-ovn-gw-2 \
      options:dynamic-routing-redistribute=connected

  However, there's a critical limitation here:

  - lr-guest is only instantiated on the chassis where the DGP is active (ovn-gw-2)
  - The VRF (ovnvrf100) will only be created on ovn-gw-2
  - Routes from lr-guest won't propagate to the gateway routers (lr-ovn-gw-1-ext0, lr-ovn-gw-2-ext0) automatically

  To actually make this work end-to-end, you would still need the static routes approach on the gateway routers, because OVN doesn't automatically propagate routes between different logical routers.

  The dynamic-routing-redistribute=connected on lr-guest would be useful if you had FRR running directly in the lr-guest VRF (ovnvrf100), but in this test topology, the BGP peering is happening in the gateway router VRFs (ovnvrf10, ovnvrf20), not in lr-guest's VRF.

  So the answer is: you can enable it, but it won't achieve what you want without also adding the static routes on the gateway routers.

This looks sane, but it still does not seem like we’re telling OVN to create VRF 100, and FRR won’t be reading any routes from that VRF either. Claude missed that part, so let’s talk about it 🙂

> if i understood you correctly earlier, we still need to add the maintain-vrf option to the LRP so that the vrf100 gets created

⏺ You're absolutely right! We need to add the maintain-vrf option to trigger vrf100 creation. Let me provide the complete configuration:

  # 1. Set maintain-vrf=true on the DGP to create vrf100
  # This is the critical step that triggers VRF creation in ovn-controller
  podman exec -it ovn-central-az1 \
    ovn-nbctl set logical_router_port lrp-guest-join-dgp \
    options:dynamic-routing-maintain-vrf=true

  # 2. Add static routes on gateway routers pointing to the DGP
  podman exec ovn-central-az1 ovn-nbctl lr-route-add lr-ovn-gw-2-ext0 \
      192.168.10.0/24 fe80::216:6ff:fe12:f00d lrp-ovn-gw-2-to-join


> FRR on ovn-gw-2 does not know anything about the vrf100. Can you help me adding the config to import this VRF?

⏺ You are right. Here's the configuration to add the import:

  podman exec -i ovn-gw-2 vtysh <<'EOF'
  configure
    router bgp 4210000000 vrf ovnvrf100
      bgp router-id 100.100.100.100
      no bgp ebgp-requires-policy
      address-family ipv4 unicast
        redistribute kernel
      exit-address-family
    exit
    router bgp 4210000000 vrf ovnvrf20
      address-family ipv4 unicast
        import vrf ovnvrf100
      exit-address-family
    exit
    write memory
  EOF

  This will:
  1. Create the BGP instance in ovnvrf100 that redistributes kernel routes (the blackhole route for 192.168.10.0/24)
  2. Add import vrf ovnvrf100 to the existing ovnvrf20 BGP configuration, so routes from vrf100 get leaked into vrf20 and advertised to the ToR


> Can you explain in detail what we have just done to reach 192.168.10.10 from the ToR?


⏺ Let me explain in detail how this works by tracing through the entire flow:

  1. OVN Configuration Creates the Foundation

  You've configured lr-guest with:
  - options:dynamic-routing=true
  - options:dynamic-routing-vrf-id=100
  - lrp-guest-join-dgp has options:dynamic-routing-maintain-vrf=true

  2. VRF Creation in Kernel

  On ovn-gw-2 (where the DGP is bound), ovn-controller sees the dynamic-routing-maintain-vrf=true option and calls re_nl_create_vrf() to create ovnvrf100 in the kernel with routing table ID 100.

  You can verify this:
  ssh ovnlab sudo podman exec ovn-gw-2 ip link show ovnvrf100
  ssh ovnlab sudo podman exec ovn-gw-2 ip route show vrf ovnvrf100

  3. Blackhole Route Installation

  When lr-guest has routes to advertise (like the connected network 192.168.10.0/24), ovn-controller queries the Advertised_Route table in the Southbound DB and installs blackhole routes in vrf100:

  ssh ovnlab sudo podman exec ovn-gw-2 ip route show vrf ovnvrf100 table 100
  # You should see something like:
  # blackhole 192.168.10.0/24 proto 84

  The route is marked as:
  - RTN_BLACKHOLE - doesn't actually route traffic, just a marker
  - RTPROT_OVN (protocol 84) - identifies it as an OVN-managed route

  This happens in route-exchange-netlink.c:98-121.

  4. FRR Sees the Kernel Route in vrf100

  When you configure:
  router bgp 4210000000 vrf ovnvrf100
    address-family ipv4 unicast
      redistribute kernel

  FRR monitors the kernel routing table for vrf100 (table 100) and sees the blackhole route for 192.168.10.0/24. Because of redistribute kernel, FRR adds this route to BGP's Routing Information Base (RIB) for vrf100.

  You can verify:
  ssh ovnlab "sudo podman exec ovn-gw-2 vtysh -c 'show ip bgp vrf ovnvrf100'"

  5. VRF Route Leaking (The Magic Part)

  When you configure in ovnvrf20:
  router bgp 4210000000 vrf ovnvrf20
    address-family ipv4 unicast
      import vrf ovnvrf100

  This tells FRR to import routes from vrf100's BGP RIB into vrf20's BGP RIB. This is called "VRF route leaking" or "route target import."

  Internally, FRR copies the 192.168.10.0/24 route from vrf100's BGP table into vrf20's BGP table.

  You can verify:
  ssh ovnlab "sudo podman exec ovn-gw-2 vtysh -c 'show ip bgp vrf ovnvrf20'"
  # Should now show 192.168.10.0/24 in addition to the NAT routes

  6. BGP Advertisement to ToR

  Once the route is in ovnvrf20's BGP RIB, the existing BGP session advertises it:
  router bgp 4210000000 vrf ovnvrf20
    neighbor ext0-bgp interface remote-as external
    address-family ipv4 unicast
      redistribute kernel
      neighbor ext0-bgp prefix-list no-default out

  The route gets advertised to the ToR peer via the ext0-bgp unnumbered BGP session.


  Key Points

  1. vrf100 has no external BGP peers - it's purely for internal route collection from lr-guest
  2. vrf20 has the actual ToR peering - it's where routes get advertised externally
  3. VRF import bridges the two - routes from vrf100 appear in vrf20's BGP table
  4. Blackhole routes are just markers - they don't route actual traffic, they just signal to FRR "advertise this prefix"
  5. This only works on ovn-gw-2 - because that's where the DGP (lrp-guest-join-dgp) is currently bound

And, of course, we can now ping the VM IP address from the ToR connected to ovn-gw-2:

$ podman exec -it ovn-central-az1 ovn-sbctl list advertised_route | grep 192.168.10 -C3
_uuid               : 1c417fc1-6af6-4945-9aaf-5443237a120a
datapath            : c1a7942f-bf59-4835-8c56-4df45a5c5855
external_ids        : {}
ip_prefix           : "192.168.10.0/24"
logical_port        : 782b2357-2c37-4e6f-bec6-ff204fcb4733
tracked_port        : []

$ podman exec ovn-gw-2 ip netns exec frr-ns ip route
172.16.10.2 nhid 6 via inet6 fe80::20fb:d6ff:fe66:992c dev ext1 proto bgp metric 20
192.168.10.0/24 nhid 6 via inet6 fe80::20fb:d6ff:fe66:992c dev ext1 proto bgp metric 20
192.168.20.0/24 nhid 6 via inet6 fe80::20fb:d6ff:fe66:992c dev ext1 proto bgp metric 20

$ podman exec ovn-gw-2 ip netns exec frr-ns ping -c 2 192.168.10.10
PING 192.168.10.10 (192.168.10.10) 56(84) bytes of data.
64 bytes from 192.168.10.10: icmp_seq=1 ttl=62 time=1.48 ms
64 bytes from 192.168.10.10: icmp_seq=2 ttl=62 time=0.575 ms

--- 192.168.10.10 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.575/1.027/1.479/0.452 ms


Conclusions

This combination of code analysis + live system inspection + domain knowledge made understanding the feature much faster than traditional code reading alone. Even for someone familiar with OVN, having AI as a “knowledgeable assistant” that can quickly locate and explain specific implementation details is awesome! It’s been very handy for:

  • Rapid ramp-up on the new BGP integration feature as a user
  • Tracing execution paths across multiple source files without manually grepping through thousands of lines
  • Understanding the “how” and “why” of implementation details like the VRF creation
  • Connecting the dots between configuration options and actual kernel-level operations

Claude is definitely getting a permanent route in my learning topology. Next hop: more complex features! 😉

OpenStack TripleO networking layout

The goal of this post is to describe how network isolation is typically achieved for both the control and data planes in OpenStack using TripleO. In particular, we’ll look at how all this happens in a virtual setup, using one baremetal node (the hypervisor, from now on) to deploy the OpenStack nodes with libvirt. For the purpose of this post, we’ll work with a 3 controllers + 1 compute virtual setup.

(undercloud) [stack@undercloud-0 ~]$ openstack server list
+--------------------------------------+--------------+--------+------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+--------------+--------+------------------------+
| b3bd5157-b3ea-4331-91af-3820c4e12252 | controller-0 | ACTIVE | ctlplane=192.168.24.15 |
| 6f228b08-49a0-4b68-925a-17d06224d5f9 | controller-1 | ACTIVE | ctlplane=192.168.24.37 |
| e5c649b5-c968-4293-a994-04293cb16da1 | controller-2 | ACTIVE | ctlplane=192.168.24.10 |
| 9f15ed23-efb1-4972-b578-7b0da3500053 | compute-0 | ACTIVE | ctlplane=192.168.24.14 |
+--------------------------------------+--------------+--------+------------------------+

The tool used to deploy this setup is Infrared (documentation), an easy-to-use wrapper around TripleO. Don’t be scared by the many layers involved here; the main point is to understand that a physical – and somewhat powerful – server is running an OpenStack cluster formed by:

  • 3 virtual controllers that run the OpenStack control plane services (Neutron, Nova, Glance, …)
  • 1 virtual compute node that will serve to host the workloads (virtual machines) of the OpenStack cluster 

From a Networking perspective (I’ll omit the undercloud for simplicity), things are wired like this:

Let’s take a look at the bridges in the hypervisor node:

[root@hypervisor]# brctl show

bridge name     bridge id               STP enabled     interfaces
management      8000.525400cc1d8b       yes             management-nic
                                                        vnet0
                                                        vnet12
                                                        vnet3
                                                        vnet6
                                                        vnet9

external        8000.5254000ceb7c       yes             external-nic
                                                        vnet11
                                                        vnet14
                                                        vnet2
                                                        vnet5
                                                        vnet8

data            8000.5254007bc90a       yes             data-nic
                                                        vnet1
                                                        vnet10
                                                        vnet13
                                                        vnet4
                                                        vnet7

Each bridge has 6 ports (3 controllers, 1 compute, 1 undercloud, and the local port in the hypervisor). Now, each virtual machine running in this node can be mapped to the right interface:

[root@hypervisor]# for i in controller-0 controller-1 controller-2 compute-0; do virsh domiflist $i; done


 Interface   Type      Source       Model    MAC
----------------------------------------------------------------
 vnet9       network   management   virtio   52:54:00:74:29:4f
 vnet10      network   data         virtio   52:54:00:1c:44:26
 vnet11      network   external     virtio   52:54:00:20:3c:4e

 Interface   Type      Source       Model    MAC
----------------------------------------------------------------
 vnet3       network   management   virtio   52:54:00:0b:ad:3b
 vnet4       network   data         virtio   52:54:00:2f:9f:3e
 vnet5       network   external     virtio   52:54:00:75:a5:ed

 Interface   Type      Source       Model    MAC
----------------------------------------------------------------
 vnet6       network   management   virtio   52:54:00:da:a3:1e
 vnet7       network   data         virtio   52:54:00:57:26:67
 vnet8       network   external     virtio   52:54:00:2c:21:d5

 Interface   Type      Source       Model    MAC
----------------------------------------------------------------
 vnet0       network   management   virtio   52:54:00:de:4a:38
 vnet1       network   data         virtio   52:54:00:c7:74:4b
 vnet2       network   external     virtio   52:54:00:22:de:5c

Network configuration templates

This section will go through the Infrared/TripleO configuration to understand how this layout was defined. This will also help the reader to change the CIDRs, VLANs, number of virtual NICs, etc.

First, the deployment script:

$ cat overcloud_deploy.sh
#!/bin/bash

openstack overcloud deploy \
--timeout 100 \
--templates /usr/share/openstack-tripleo-heat-templates \
--stack overcloud \
--libvirt-type kvm \
-e /home/stack/virt/config_lvm.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /home/stack/virt/network/network-environment.yaml \
-e /home/stack/virt/inject-trust-anchor.yaml \
-e /home/stack/virt/hostnames.yml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml \
-e /home/stack/virt/debug.yaml \
-e /home/stack/virt/nodes_data.yaml \
-e ~/containers-prepare-parameter.yaml \
-e /home/stack/virt/docker-images.yaml \
--log-file overcloud_deployment_99.log

Now, let’s take a look at the network related templates to understand the different networks and how they map to the physical NICs inside the controllers/compute nodes:

$ grep -i -e cidr -e vlan /home/stack/virt/network/network-environment.yaml
ControlPlaneSubnetCidr: '192.168.24.0/24'

ExternalNetCidr: 10.0.0.0/24
ExternalNetworkVlanID: 10

InternalApiNetCidr: 172.17.1.0/24
InternalApiNetworkVlanID: 20

StorageMgmtNetCidr: 172.17.4.0/24
StorageMgmtNetworkVlanID: 40

StorageNetCidr: 172.17.3.0/24
StorageNetworkVlanID: 30

TenantNetCidr: 172.17.2.0/24
TenantNetworkVlanID: 50

NeutronNetworkVLANRanges: tenant:1000:2000

OS::TripleO::Compute::Net::SoftwareConfig: three-nics-vlans/compute.yaml
OS::TripleO::Controller::Net::SoftwareConfig: three-nics-vlans/controller.yaml

In the output above you can see 6 different networks:

  • ControlPlane (flat): used mainly for provisioning (PXE) and remote access to the nodes via SSH.
  • External (VLAN 10): external network used for dataplane floating IP traffic and access to the OpenStack API services via their external endpoints.
  • InternalApi (VLAN 20): network where the OpenStack control plane services will listen for internal communication (eg. Neutron <-> Nova).
  • StorageMgmt (VLAN 40): network used to manage the storage (in this deployment, swift-object-server, swift-container-server, and swift-account-server will listen to requests on this network).
  • Storage (VLAN 30): network used for access to the Object storage (in this deployment, swift-proxy will listen to requests on this network).
  • Tenant: this network carries the overlay tunnelled traffic (Geneve for OVN, VXLAN in the case of ML2/OVS) on VLAN 50, but it will also carry dataplane traffic if VLAN tenant networks are used in Neutron. The VLAN range allowed for such traffic is also specified in the template (in the example, VLAN IDs ranging from 1000 to 2000 are reserved for Neutron tenant networks).

The way each NIC is mapped to each network is defined in the yaml files below. For this deployment, I used a customized layout via this patch (controller.yaml and compute.yaml). Essentially, the mapping looks like this (see the simplified os-net-config sketch right after the list):

  • Controllers:
    • nic1: ControlPlaneIp (flat); VLAN devices for InternalApi (20), Storage (30), and StorageMgmt (40)
    • nic2: br-tenant OVS bridge with the VLAN 50 device for the tunnelled traffic
    • nic3: br-ex OVS bridge for external traffic
  • Compute:
    • nic1: ControlPlaneIp (flat); VLAN devices for InternalApi (20) and Storage (30)
    • nic2: br-tenant OVS bridge with the VLAN 50 device for the tunnelled traffic
    • nic3: br-ex OVS bridge for external traffic
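
For reference, the controller mapping above boils down to something like the following in os-net-config terms. This is a heavily simplified sketch (addresses, routes, and the Heat wrapper are omitted), so check the linked controller.yaml/compute.yaml for the real templates:

network_config:
  - type: interface
    name: nic1                  # ControlPlane (flat)
    use_dhcp: false
  - type: vlan
    device: nic1
    vlan_id: 20                 # InternalApi
  - type: vlan
    device: nic1
    vlan_id: 30                 # Storage
  - type: vlan
    device: nic1
    vlan_id: 40                 # StorageMgmt (controllers only)
  - type: ovs_bridge
    name: br-tenant
    members:
      - type: interface
        name: nic2
      - type: vlan
        vlan_id: 50             # Tenant (overlay traffic)
  - type: ovs_bridge
    name: br-ex
    members:
      - type: interface
        name: nic3              # External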

The nodes map nic1, nic2, and nic3 to ens3, ens4, and ens5 respectively:

[root@controller-0 ~]# ip l | egrep "vlan[2-4]0"
9: vlan20@ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
10: vlan30@ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
11: vlan40@ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000

[root@controller-0 ~]# ovs-vsctl list-ports br-tenant
ens4
vlan50

[root@controller-0 ~]# ovs-vsctl list-ports br-ex
ens5

On the controller nodes we’ll find an HAProxy instance load balancing the requests across the different nodes, and we can see the network layout there as well:

[root@controller-1 ~]# podman exec -uroot -it haproxy-bundle-podman-1 cat /etc/haproxy/haproxy.cfg

listen neutron
  bind 10.0.0.122:9696 transparent      <--- External network
  bind 172.17.1.48:9696 transparent     <--- InternalApi network
  mode http
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  http-request set-header X-Forwarded-Port %[dst_port]
  option httpchk
  option httplog
# Now the backends in the InternalApi network
  server controller-0.internalapi.local 172.17.1.72:9696 check fall 5 inter 2000 rise 2
  server controller-1.internalapi.local 172.17.1.101:9696 check fall 5 inter 2000 rise 2
  server controller-2.internalapi.local 172.17.1.115:9696 check fall 5 inter 2000 rise 2

In the above output, the IP address 172.17.1.48 is a virtual IP managed by Pacemaker. It lives on the InternalApi (VLAN 20) network, on whichever controller currently hosts it:

[root@controller-1 ~]# pcs status | grep 172.17.1.48
  * ip-172.17.1.48      (ocf::heartbeat:IPaddr2):       Started controller-0

[root@controller-0 ~]# ip a |grep 172.17.1.48
    inet 172.17.1.48/32 brd 172.17.1.255 scope global vlan20

Traffic inspection

With a clear view of the networking layout, we can now use the hypervisor to hook tcpdump onto the right bridge and check for whatever traffic we’re interested in.

Let’s for example ping from the InternalApi (172.17.1.0/24) network on controller-0 to controller-1 and check the traffic in the hypervisor:

[heat-admin@controller-0 ~]$ ping controller-1.internalapi.local
PING controller-1.internalapi.redhat.local (172.17.1.101) 56(84) bytes of data.
64 bytes from controller-1.redhat.local (172.17.1.101): icmp_seq=1 ttl=64 time=0.213 ms
64 bytes from controller-1.redhat.local (172.17.1.101): icmp_seq=2 ttl=64 time=0.096 ms


[root@hypervisor]# tcpdump -i management -vvne icmp -c2
tcpdump: listening on management, link-type EN10MB (Ethernet), capture size 262144 bytes
15:19:08.418046 52:54:00:74:29:4f > 52:54:00:0b:ad:3b, ethertype 802.1Q (0x8100), length 102: vlan 20, p 0, ethertype IPv4, (tos 0x0, ttl 64, id 58494, offset 0, flags [DF], proto ICMP (1), length 84)
172.17.1.72 > 172.17.1.101: ICMP echo request, id 53086, seq 5, length 64
15:19:08.418155 52:54:00:0b:ad:3b > 52:54:00:74:29:4f, ethertype 802.1Q (0x8100), length 102: vlan 20, p 0, ethertype IPv4, (tos 0x0, ttl 64, id 39897, offset 0, flags [none], proto ICMP (1), length 84)
172.17.1.101 > 172.17.1.72: ICMP echo reply, id 53086, seq 5, length 64

[root@hypervisor]# brctl showmacs management | egrep "52:54:00:0b:ad:3b|52:54:00:74:29:4f"
port no  mac addr                is local?       ageing timer
  3      52:54:00:0b:ad:3b       no                 0.01
  5      52:54:00:74:29:4f       no                 0.01

When we ping the controller-1 IP address on the InternalApi network, the traffic is tagged (VLAN 20) and goes through the management bridge in the hypervisor. This matches our expectations, as that is how we defined this network in the template files.

Similarly, we could trace more complicated scenarios like an OpenStack instance in a tenant network pinging an external destination:

(overcloud) [stack@undercloud-0 ~]$ openstack server list
+--------------------------------------+---------+--------+-----------------------+--------+
| ID | Name | Status | Networks | Image |
+--------------------------------------+---------+--------+-----------------------+--------+
| 3d9f6957-5311-4590-8c62-097b576ffa04 | cirros1 | ACTIVE | private=192.168.0.166 | cirros |
+--------------------------------------+---------+--------+-----------------------+--------+
[root@compute-0 ~]# sudo ip net e ovnmeta-e49cc182-247c-4dc9-9589-4df6fcb09511 ssh cirros@192.168.0.166
cirros@192.168.0.166's password:
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=53 time=10.356 ms
64 bytes from 8.8.8.8: seq=1 ttl=53 time=8.591 ms

Now in the hypervisor, we’ll trace the Geneve traffic (VLAN50):

# tcpdump -i data -vvnne vlan 50 and "(udp port 6081) and (udp[10:2] = 0x6558) and (udp[(8 + (4 * (2 + (udp[8:1] & 0x3f))) + 12):2] = 0x0800) and (udp[8 + (4 * (2 + (udp[8:1] & 0x3f))) + 14 + 9:1] = 01)"  -c2

tcpdump: listening on data, link-type EN10MB (Ethernet), capture size 262144 bytes
16:21:28.642671 6a:9b:72:22:3f:68 > 0e:d0:eb:00:1b:e7, ethertype 802.1Q (0x8100), length 160: vlan 50, p 0, ethertype IPv4, (tos 0x0, ttl 64, id 15872, offset 0, flags [DF], proto UDP (17), length 142)
    172.17.2.119.27073 > 172.17.2.143.6081: [bad udp cksum 0x5db4 -> 0x1e8c!] Geneve, Flags [C], vni 0x5, proto TEB (0x6558), options [class Open Virtual Networking (OVN) (0x102) type 0x80(C) len 8 data 00010003]
        fa:16:3e:a7:95:87 > 52:54:00:0c:eb:7c, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 50335, offset 0, flags [DF], proto ICMP (1), length 84)
            192.168.0.166 > 8.8.8.8: ICMP echo request, id 2818, seq 2145, length 64
16:21:28.650412 0e:d0:eb:00:1b:e7 > 6a:9b:72:22:3f:68, ethertype 802.1Q (0x8100), length 160: vlan 50, p 0, ethertype IPv4, (tos 0x0, ttl 64, id 26871, offset 0, flags [DF], proto UDP (17), length 142)
    172.17.2.143.31003 > 172.17.2.119.6081: [bad udp cksum 0x5db4 -> 0x4a04!] Geneve, Flags [C], vni 0x3, proto TEB (0x6558), options [class Open Virtual Networking (OVN) (0x102) type 0x80(C) len 8 data 00040002]
        fa:16:3e:34:a2:0e > fa:16:3e:63:c0:7a, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 53, id 0, offset 0, flags [none], proto ICMP (1), length 84)
            8.8.8.8 > 192.168.0.166: ICMP echo reply, id 2818, seq 2145, length 64

(First, sorry for the complicated filter; I picked it up from here and adapted it to match on the inner protocol of the Geneve traffic against ICMP. If there’s an easier way please tell me :p)

We can see that the Geneve traffic goes between 6a:9b:72:22:3f:68 and 0e:d0:eb:00:1b:e7 and now we can determine the source/dest nodes:

[root@hypervisor]# brctl showmacs data
  2     6a:9b:72:22:3f:68       no                 0.32
  2     fe:54:00:c7:74:4b       yes                0.00
  2     fe:54:00:c7:74:4b       yes                0.00
  3     0e:d0:eb:00:1b:e7       no                 0.40
  3     fe:54:00:2f:9f:3e       yes                0.00
  3     fe:54:00:2f:9f:3e       yes                0.00

From the info above we can see that port 2 corresponds to the MAC ending in "74:4b" (the vnet attached to compute-0's data NIC, per the domiflist output earlier) and port 3 to the MAC ending in "9f:3e" (controller-1's data NIC). Therefore, this Geneve traffic is flowing from the compute-0 node to the controller-1 node, which is where Neutron is running the gateway that does the SNAT towards the external network. This last leg can now be examined on the external bridge:

[root@hypervisor]# tcpdump -i external icmp -vvnnee -c2
tcpdump: listening on external, link-type EN10MB (Ethernet), capture size 262144 bytes
16:33:35.016198 fa:16:3e:a7:95:87 > 52:54:00:0c:eb:7c, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 13537, offset 0, flags [DF], proto ICMP (1), length 84)
    10.0.0.225 > 8.8.8.8: ICMP echo request, id 4354, seq 556, length 64
16:33:35.023570 52:54:00:0c:eb:7c > fa:16:3e:a7:95:87, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 54, id 0, offset 0, flags [none], proto ICMP (1), length 84)
    8.8.8.8 > 10.0.0.225: ICMP echo reply, id 4354, seq 556, length 64

In case you're wondering what 10.0.0.225 is: that's the IP address of the Neutron router's gateway:

(overcloud) [stack@undercloud-0 ~]$ openstack router show router1 | grep gateway
| external_gateway_info   | {"network_id": "fe8330fe-540a-4acf-bda8-394398fb4272", "external_fixed_ips": [{"subnet_id": "e388a080-1953-4cdd-9e35-48d416fe2ae1", "ip_address": "10.0.0.225"}

Similarly, the MAC addresses can be matched to confirm that the traffic comes from the gateway node (controller-1), as the MAC ending in "a5:ed" – learned on the same port as the source MAC of the ICMP packet – corresponds to the NIC attached to the external network on controller-1.

[root@hypervisor]# brctl showmacs external
  3     fa:16:3e:a7:95:87       no                 0.47
  3     fe:54:00:75:a5:ed       yes                0.00
  3     fe:54:00:75:a5:ed       yes                0.00

Reflection

This is a virtual setup and everything is confined to the boundaries of a physical server. However, it is a great playground to get yourself familiar with the underlay networking of an OpenStack setup (and networking in general ;). Once you get your hands on a real production environment, all these Linux bridges will be replaced by ToR switches (or even routers on a pure L3 Spine & Leaf architecture) but the fundamentals are the same.

OVN Cluster Interconnection

A new feature has recently been introduced in OVN that allows multiple clusters to be interconnected at the L3 level (here’s a link to the series of patches). This can be useful for scenarios with multiple availability zones (or physical regions), or simply to allow better scaling by having independent control planes while still providing connectivity between workloads in separate zones.

Simplifying things, logical routers in each cluster can be connected via transit overlay networks. The interconnection layer is responsible for creating the transit switches in the IC database, which then become visible to the connected clusters. Each cluster can then connect its logical routers to the transit switches. More information can be found in the ovn-architecture manpage.
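For reference, wiring a zone to a transit switch boils down to a handful of commands. This is a rough sketch using the names from the east zone shown below (the vagrant setup mentioned next automates all of this, and each zone also needs its availability-zone name set in NB_Global):

# On the interconnection node: create the transit switch in the IC northbound DB
ovn-ic-nbctl ts-add ts1

# On the east zone: attach the local router to ts1 and pin the port to a gateway
ovn-nbctl lrp-add router_east lrp-router_east-ts1 aa:aa:aa:aa:aa:01 169.254.100.1/24
ovn-nbctl lsp-add ts1 lsp-ts1-router_east \
    -- lsp-set-type lsp-ts1-router_east router \
    -- lsp-set-addresses lsp-ts1-router_east router \
    -- lsp-set-options lsp-ts1-router_east router-port=lrp-router_east-ts1
ovn-nbctl lrp-set-gateway-chassis lrp-router_east-ts1 gw_east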

I created a vagrant setup to test it out and get a bit familiar with it. All you need to do to recreate it is clone the repo and run ‘vagrant up‘ inside the ovn-interconnection folder:

https://github.com/danalsan/vagrants/tree/master/ovn-interconnection

This will deploy 7 CentOS machines (300MB of RAM each) with two separate OVN clusters (west & east) and the interconnection services. The layout is described in the image below:

Once the services are up and running, a few resources will be created on each cluster and the interconnection services will be configured with a transit switch between them:

Let’s see, for example, the logical topology of the east availability zone, where the transit switch ts1 is listed along with the port of the remote west zone:

[root@central-east ~]# ovn-nbctl show
switch c850599c-263c-431b-b67f-13f4eab7a2d1 (ts1)
    port lsp-ts1-router_west
        type: remote
        addresses: ["aa:aa:aa:aa:aa:02 169.254.100.2/24"]
    port lsp-ts1-router_east
        type: router
        router-port: lrp-router_east-ts1
switch 8361d0e1-b23e-40a6-bd78-ea79b5717d7b (net_east)
    port net_east-router_east
        type: router
        router-port: router_east-net_east
    port vm1
        addresses: ["40:44:00:00:00:01 192.168.1.11"]
router b27d180d-669c-4ca8-ac95-82a822da2730 (router_east)
    port lrp-router_east-ts1
        mac: "aa:aa:aa:aa:aa:01"
        networks: ["169.254.100.1/24"]
        gateway chassis: [gw_east]
    port router_east-net_east
        mac: "40:44:00:00:00:04"
        networks: ["192.168.1.1/24"]

As for the Southbound database, we can see the gateway port for each router. In this setup I only have one gateway node but, like any other distributed gateway port in OVN, it could be scheduled on multiple nodes to provide HA:

[root@central-east ~]# ovn-sbctl show
Chassis worker_east
    hostname: worker-east
    Encap geneve
        ip: "192.168.50.100"
        options: {csum="true"}
    Port_Binding vm1
Chassis gw_east
    hostname: gw-east
    Encap geneve
        ip: "192.168.50.102"
        options: {csum="true"}
    Port_Binding cr-lrp-router_east-ts1
Chassis gw_west
    hostname: gw-west
    Encap geneve
        ip: "192.168.50.103"
        options: {csum="true"}
    Port_Binding lsp-ts1-router_west
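As noted above, this setup only has one gateway per zone, but the port towards the transit switch is a regular distributed gateway port, so it could be scheduled on several chassis with different priorities for HA. A sketch (gw_east2 is a hypothetical extra gateway node, not part of this vagrant setup):

ovn-nbctl lrp-set-gateway-chassis lrp-router_east-ts1 gw_east 20
ovn-nbctl lrp-set-gateway-chassis lrp-router_east-ts1 gw_east2 10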

If we query the interconnection databases, we will see the transit switch in the IC northbound database and the gateway ports of each zone in the IC southbound database:

[root@central-ic ~]# ovn-ic-nbctl show
Transit_Switch ts1

[root@central-ic ~]# ovn-ic-sbctl show
availability-zone east
    gateway gw_east
        hostname: gw-east
        type: geneve
            ip: 192.168.50.102
        port lsp-ts1-router_east
            transit switch: ts1
            address: ["aa:aa:aa:aa:aa:01 169.254.100.1/24"]
availability-zone west
    gateway gw_west
        hostname: gw-west
        type: geneve
            ip: 192.168.50.103
        port lsp-ts1-router_west
            transit switch: ts1
            address: ["aa:aa:aa:aa:aa:02 169.254.100.2/24"]

With this topology, traffic flowing from vm1 to vm2 will go from gw-east to gw-west through a Geneve tunnel. If we list the ports on each gateway, we should be able to see the tunnel ports. Needless to say, the gateways have to be mutually reachable so that the transit overlay network can be established:

[root@gw-west ~]# ovs-vsctl show
6386b867-a3c2-4888-8709-dacd6e2a7ea5
    Bridge br-int
        fail_mode: secure
        Port ovn-gw_eas-0
            Interface ovn-gw_eas-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="192.168.50.102"}

Now, when vm1 pings vm2, the traffic flow should be like:

(vm1) worker_east ==== gw_east ==== gw_west ==== worker_west (vm2).

Let’s see it via ovn-trace tool:

[root@central-east vagrant]# ovn-trace  --ovs --friendly-names --ct=new net_east  'inport == "vm1" && eth.src == 40:44:00:00:00:01 && eth.dst == 40:44:00:00:00:04 && ip4.src == 192.168.1.11 && ip4.dst == 192.168.2.12 && ip.ttl == 64 && icmp4.type == 8'


ingress(dp="net_east", inport="vm1")
...
egress(dp="net_east", inport="vm1", outport="net_east-router_east")
...
ingress(dp="router_east", inport="router_east-net_east")
...
egress(dp="router_east", inport="router_east-net_east", outport="lrp-router_east-ts1")
...
ingress(dp="ts1", inport="lsp-ts1-router_east")
...
egress(dp="ts1", inport="lsp-ts1-router_east", outport="lsp-ts1-router_west")
 9. ls_out_port_sec_l2 (ovn-northd.c:4543): outport == "lsp-ts1-router_west", priority 50, uuid c354da11
    output;
    /* output to "lsp-ts1-router_west", type "remote" */

Now let’s capture Geneve traffic on both gateways while a ping between both VMs is running:

[root@gw-east ~]# tcpdump -i genev_sys_6081 -vvnee icmp
tcpdump: listening on genev_sys_6081, link-type EN10MB (Ethernet), capture size 262144 bytes
10:43:35.355772 aa:aa:aa:aa:aa:01 > aa:aa:aa:aa:aa:02, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 11379, offset 0, flags [DF], proto ICMP (1), length 84)
    192.168.1.11 > 192.168.2.12: ICMP echo request, id 5494, seq 40, length 64
10:43:35.356077 aa:aa:aa:aa:aa:01 > aa:aa:aa:aa:aa:02, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 11379, offset 0, flags [DF], proto ICMP (1), length 84)
    192.168.1.11 > 192.168.2.12: ICMP echo request, id 5494, seq 40, length 64
10:43:35.356442 aa:aa:aa:aa:aa:02 > aa:aa:aa:aa:aa:01, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 42610, offset 0, flags [none], proto ICMP (1), length 84)
    192.168.2.12 > 192.168.1.11: ICMP echo reply, id 5494, seq 40, length 64
10:43:35.356734 40:44:00:00:00:04 > 40:44:00:00:00:01, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 62, id 42610, offset 0, flags [none], proto ICMP (1), length 84)
    192.168.2.12 > 192.168.1.11: ICMP echo reply, id 5494, seq 40, length 64


[root@gw-west ~]# tcpdump -i genev_sys_6081 -vvnee icmp
tcpdump: listening on genev_sys_6081, link-type EN10MB (Ethernet), capture size 262144 bytes
10:43:29.169532 aa:aa:aa:aa:aa:01 > aa:aa:aa:aa:aa:02, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 8875, offset 0, flags [DF], proto ICMP (1), length 84)
    192.168.1.11 > 192.168.2.12: ICMP echo request, id 5494, seq 34, length 64
10:43:29.170058 40:44:00:00:00:10 > 40:44:00:00:00:02, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 62, id 8875, offset 0, flags [DF], proto ICMP (1), length 84)
    192.168.1.11 > 192.168.2.12: ICMP echo request, id 5494, seq 34, length 64
10:43:29.170308 aa:aa:aa:aa:aa:02 > aa:aa:aa:aa:aa:01, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 38667, offset 0, flags [none], proto ICMP (1), length 84)
    192.168.2.12 > 192.168.1.11: ICMP echo reply, id 5494, seq 34, length 64
10:43:29.170476 aa:aa:aa:aa:aa:02 > aa:aa:aa:aa:aa:01, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 63, id 38667, offset 0, flags [none], proto ICMP (1), length 84)
    192.168.2.12 > 192.168.1.11: ICMP echo reply, id 5494, seq 34, length 64

You can observe that the ICMP traffic flows between the transit switch ports (aa:aa:aa:aa:aa:02 <> aa:aa:aa:aa:aa:01) traversing both zones.

Also, as the packet has gone through two routers (router_east and router_west), the TTL at the destination has been decremented twice (from 64 to 62):

[root@worker-west ~]# ip net e vm2 tcpdump -i any icmp -vvne
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
10:49:32.491674  In 40:44:00:00:00:10 ethertype IPv4 (0x0800), length 100: (tos 0x0, ttl 62, id 57504, offset 0, flags [DF], proto ICMP (1), length 84)

This is a really great feature that opens a lot of possibilities for cluster interconnection and scaling. However, keep in mind that it requires an additional management layer to handle isolation (multitenancy) and to avoid IP address overlaps across the connected availability zones.

OVN – Geneve Encapsulation

In the last post we created a Logical Switch with two ports residing on different hypervisors. Communication between those two ports took place over the tunnel interface using Geneve encapsulation. Let’s now take a closer look at this overlay traffic.

Without diving too deep into packet processing in OVN, we need to know that each Logical Datapath (Logical Switch / Logical Router) has an ingress and an egress pipeline. Whenever a packet comes in, the ingress pipeline is executed and, after the output action, the egress pipeline runs to deliver the packet to its destination. More info here: http://docs.openvswitch.org/en/latest/faq/ovn/#ovn
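
To see both pipelines for our Logical Switch, we can dump the logical flows from the southbound database; something like this (a sketch; the exact output format varies across OVN versions):

[root@central ~]# ovn-sbctl lflow-list network1 | grep Pipeline

This should print the ingress and egress pipeline headers for the network1 datapath; dropping the grep shows the numbered tables of logical flows in each pipeline.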

In our scenario, when we ping from VM1 to VM2, the ingress pipeline of each ICMP packet runs on Worker1 (where VM1 is bound) and the packet is pushed through the tunnel interface to Worker2 (where VM2 resides). When Worker2 receives the packet on its physical interface, the egress pipeline of the Logical Switch (network1) is executed to deliver the packet to VM2. But how does OVN know where the packet comes from and which Logical Datapath should process it? This is where the metadata in the Geneve headers comes in.

Let’s get back to our setup and ping from VM1 to VM2 and capture traffic on the physical interface (eth1) of Worker2:

[root@worker2 ~]# sudo tcpdump -i eth1 -vvvnnexx

17:02:13.403229 52:54:00:13:e0:a2 > 52:54:00:ac:67:5b, ethertype IPv4 (0x0800), length 156: (tos 0x0, ttl 64, id 63920, offset 0, flags [DF], proto UDP (17), length 142)
    192.168.50.100.7549 > 192.168.50.101.6081: [bad udp cksum 0xe6a5 -> 0x7177!] Geneve, Flags [C], vni 0x1, proto TEB (0x6558), options [class Open Virtual Networking (OVN) (0x102) type 0x80(C) len 8 data 00010002]
        40:44:00:00:00:01 > 40:44:00:00:00:02, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 41968, offset 0, flags [DF], proto ICMP (1), length 84)
    192.168.0.11 > 192.168.0.12: ICMP echo request, id 1251, seq 6897, length 64
        0x0000:  5254 00ac 675b 5254 0013 e0a2 0800 4500
        0x0010:  008e f9b0 4000 4011 5a94 c0a8 3264 c0a8
        0x0020:  3265 1d7d 17c1 007a e6a5 0240 6558 0000
        0x0030:  0100 0102 8001 0001 0002 4044 0000 0002
        0x0040:  4044 0000 0001 0800 4500 0054 a3f0 4000
        0x0050:  4001 1551 c0a8 000b c0a8 000c 0800 c67b
        0x0060:  04e3 1af1 94d9 6e5c 0000 0000 41a7 0e00
        0x0070:  0000 0000 1011 1213 1415 1617 1819 1a1b
        0x0080:  1c1d 1e1f 2021 2223 2425 2627 2829 2a2b
        0x0090:  2c2d 2e2f 3031 3233 3435 3637

17:02:13.403268 52:54:00:ac:67:5b > 52:54:00:13:e0:a2, ethertype IPv4 (0x0800), length 156: (tos 0x0, ttl 64, id 46181, offset 0, flags [DF], proto UDP (17), length 142)
    192.168.50.101.9683 > 192.168.50.100.6081: [bad udp cksum 0xe6a5 -> 0x6921!] Geneve, Flags [C], vni 0x1, proto TEB (0x6558), options [class Open Virtual Networking (OVN) (0x102) type 0x80(C) len 8 data 00020001]
        40:44:00:00:00:02 > 40:44:00:00:00:01, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 16422, offset 0, flags [none], proto ICMP (1), length 84)
    192.168.0.12 > 192.168.0.11: ICMP echo reply, id 1251, seq 6897, length 64
        0x0000:  5254 0013 e0a2 5254 00ac 675b 0800 4500
        0x0010:  008e b465 4000 4011 9fdf c0a8 3265 c0a8
        0x0020:  3264 25d3 17c1 007a e6a5 0240 6558 0000
        0x0030:  0100 0102 8001 0002 0001 4044 0000 0001
        0x0040:  4044 0000 0002 0800 4500 0054 4026 0000
        0x0050:  4001 b91b c0a8 000c c0a8 000b 0000 ce7b
        0x0060:  04e3 1af1 94d9 6e5c 0000 0000 41a7 0e00
        0x0070:  0000 0000 1011 1213 1415 1617 1819 1a1b
        0x0080:  1c1d 1e1f 2021 2223 2425 2627 2829 2a2b
        0x0090:  2c2d 2e2f 3031 3233 3435 3637

Let’s now decode the ICMP request packet (I’m using this tool):

[Image: ICMP request inside the Geneve tunnel]

Metadata

In the ovn-architecture(7) document, you can check how this metadata is used by OVN in the Tunnel Encapsulations section. In short, OVN encodes the following information in the Geneve packets:

  • Logical Datapath (switch/router) identifier (24 bits) – Geneve VNI
  • Ingress and Egress port identifiers – Option with class 0x0102 and type 0x80 with 32 bits of data:
         1       15          16
       +---+------------+-----------+
       |rsv|ingress port|egress port|
       +---+------------+-----------+
         0

Back to our example: VNI = 0x000001 and Option Data = 00010002, so from the above:

Logical Datapath = 1   Ingress Port = 1   Egress Port = 2
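
If you want to verify the bit layout by hand, a quick bash sketch (not part of the setup, just arithmetic on the values captured above) reproduces these numbers:

# rsv (1 bit) | ingress port (15 bits) | egress port (16 bits)
opt=0x00010002   # OVN Geneve option data (class 0x0102, type 0x80)
vni=0x000001     # Geneve VNI
echo "logical datapath (tunnel key): $(( vni ))"                   # -> 1
echo "ingress port:                  $(( (opt >> 16) & 0x7fff ))"  # -> 1
echo "egress port:                   $(( opt & 0xffff ))"          # -> 2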

Let’s take a look at SB database contents to see if they match what we expect:

[root@central ~]# ovn-sbctl get Datapath_Binding network1 tunnel-key
1

[root@central ~]# ovn-sbctl get Port_Binding vm1 tunnel-key
1

[root@central ~]# ovn-sbctl get Port_Binding vm2 tunnel-key
2

We can see that the Logical Datapath corresponds to network1, that the ingress port is vm1, and that the egress port is vm2, which makes sense as we're analyzing the ICMP request from VM1 to VM2.
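
You can also go the other way around and ask the southbound database which logical port a given tunnel key maps to; a sketch using the generic find command (in this small setup there is a single datapath, so filtering on the tunnel key alone is enough):

[root@central ~]# ovn-sbctl --columns=logical_port find Port_Binding tunnel_key=2

This should report vm2, matching the egress port we decoded from the Geneve option.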

By the time this packet hits the Worker2 hypervisor, OVN has all the information it needs to process the packet in the right pipeline and deliver it to VM2 without having to run the ingress pipeline again.
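
If you are curious how this metadata is consumed on the receiving hypervisor, the translation from tunnel headers to logical fields happens at the very beginning of the OpenFlow pipeline on br-int. A rough way to peek at it (a sketch; table numbers and register layout depend on the OVN version):

[root@worker2 ~]# ovs-ofctl dump-flows br-int table=0 | grep -i tun

The flows matching the tunnel input port move the VNI and the option data into OVN's logical datapath and port registers before the logical egress pipeline resumes.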

What if we don’t use any encapsulation?

This is technically possible in OVN, and there are such scenarios, for example when we're managing a physical network directly and don't use any kind of overlay technology. In this case, our ICMP request packet would have been pushed directly onto the network and, when Worker2 receives it, OVN needs to figure out (based on the IP/MAC addresses) which ingress pipeline to execute (a second time, as it was already executed on Worker1) before it can run the egress pipeline and deliver the packet to VM2.