[ovs-dev] [PATCH v1 ovn 0/1] Forwarding group to load balance l2 traffic with liveness detection

Numan Siddique numans at ovn.org
Thu Jan 9 07:13:52 UTC 2020


On Tue, Jan 7, 2020 at 3:55 AM Manoj Sharma <manoj.sharma at nutanix.com> wrote:
>
> A forwarding group is an aggregation of logical switch ports of a
> logical switch, used to load balance traffic across those ports. It
> can also detect liveness when the logical switch ports are realized
> as OVN tunnel ports in the physical topology.
>
> In the below logical topology diagram, the logical switch has two ports
> connected to chassis / external routers R1 and R2. The logical router needs
> to send traffic to an external network that is connected through R1 and R2.
>
>                                                     +----+
>                                          +----------+ R1 |    *****
>                                         /           +----+  **     **
>   +----------+        +--------------+ / lsp1              *         *
>   | Logical  |        |   Logical    |/                   * External  *
>   | Router   +--------+   switch     X                    *  Network  *
>   |          |        |              |\                   *           *
>   +----------+        +--------------+ \ lsp2              *         *
>                              ^          \           +----+  **     **
>                              |           +----------+ R2 |    *****
>                              |                      +----+
>                    fwd_group -> (lsp1, lsp2)
>
> In the absence of a forwarding group, the logical router will have a
> unicast route pointing to either R1 or R2. If R1 or R2 goes down,
> the control plane must intervene to update the route to point to the
> proper nexthop.
>
> With a forwarding group, a virtual IP (VIP) and virtual MAC (VMAC)
> address are configured on the forwarding group. The logical router
> points to the forwarding group's VIP as the nexthop for hosts behind
> R1 and R2.
>
> [root at fwd-group]# ovn-nbctl fwd-group-add fwd ls1 VIP_1 VMAC_1 lsp1 lsp2
>
> [root at fwd-group]# ovn-nbctl fwd-group-list
> UUID    FWD_GROUP      VIP        VMAC       CHILD_PORTS
> UUID_1    fwd         VIP_1      VMAC_1       lsp1 lsp2
>
> [root at fwd-group]# ovn-nbctl lr-route-list lr1
> IPv4 Routes
> external_host_prefix/prefix_len            VIP_1 dst-ip
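[Editorial note: for context, a route like the one listed above would be
added with ovn-nbctl's lr-route-add command. The prefix and VIP below are
the placeholders from the example, not real values.]

```shell
# Illustrative only: "external_host_prefix/prefix_len" and VIP_1 are
# placeholders from the example above; lr1 is the logical router.
ovn-nbctl lr-route-add lr1 external_host_prefix/prefix_len VIP_1
```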
>
> The logical switch will install an ARP responder rule to reply with
> VMAC as the MAC address for ARP requests for VIP. It will also
> install a MAC lookup rule for VMAC with an action that load balances
> across the logical switch ports of the forwarding group.
>
> Datapath: "ls1" Pipeline: ingress
> table=10(ls_in_arp_rsp      ), priority=50   , match=(arp.tpa == VIP_1 &&
>     arp.op == 1), action=(eth.dst = eth.src; eth.src = VMAC_1; arp.op = 2;
>     /* ARP reply */ arp.tha = arp.sha; arp.sha = VMAC_1; arp.tpa = arp.spa;
>     arp.spa = VIP_1; outport = inport; flags.loopback = 1; output;)
>
> table=13(ls_in_l2_lkup      ), priority=50   , match=(eth.dst == VMAC_1),
>     action=(fwd_group("lsp1","lsp2");)
>
> In the physical topology, OVN-managed hypervisors are connected to
> R1 and R2 through overlay tunnels. The logical flow's "fwd_group"
> action mentioned above gets translated to an OpenFlow group of type
> "select" with one bucket for each logical switch port.
>
> cookie=0x0, duration=16.869s, table=29, n_packets=4, n_bytes=392, idle_age=0,
> priority=111,metadata=0x9,dl_dst=VMAC_1 actions=group:1
>
> group_id=1,type=select,selection_method=dp_hash,
>     bucket=actions=load:0x2->NXM_NX_REG15[0..15], resubmit(,32),
>     bucket=actions=load:0x3->NXM_NX_REG15[0..15],resubmit(,32)
>
> where 0x2 and 0x3 are port tunnel keys of lsp1 and lsp2.
>
> The OpenFlow group type "select" with selection method "dp_hash"
> load balances traffic based on source and destination Ethernet
> address, VLAN ID, Ethernet type, IPv4/v6 source and destination
> address and protocol, and, for TCP and SCTP only, the source and
> destination ports.
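[Editorial note: a minimal sketch of that bucket selection, not OVS code.
dp_hash actually uses a datapath-computed hash; the field set and hash
function here are simplified assumptions based on the paragraph above.]

```python
import zlib

def select_bucket(buckets, flow):
    """Pick a group bucket by hashing flow fields, roughly as a
    hash-based "select" group would. L4 ports contribute to the hash
    only for TCP (proto 6) and SCTP (proto 132), per the text above."""
    l4 = ((flow.get("l4_src"), flow.get("l4_dst"))
          if flow["ip_proto"] in (6, 132) else (None, None))
    key = (flow["eth_src"], flow["eth_dst"], flow.get("vlan"),
           flow["eth_type"], flow["ip_src"], flow["ip_dst"],
           flow["ip_proto"]) + l4
    # Deterministic hash: equal flows always map to the same bucket.
    return buckets[zlib.crc32(repr(key).encode()) % len(buckets)]
```

Because the hash is computed over the flow's headers, all packets of
one connection keep landing in the same bucket, i.e. on the same path.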
>
> To detect path failure between OVN-managed hypervisors and (R1, R2),
> BFD is enabled on the tunnel interfaces. The OpenFlow group is
> modified to include a watch_port for liveness detection of each
> port. To enable liveness, the --liveness option must be specified
> when configuring the forwarding group.
>
> group_id=1,type=select,selection_method=dp_hash,
>   bucket=watch_port:31,actions=load:0x2->NXM_NX_REG15[0..15],resubmit(,32),
>   bucket=watch_port:32,actions=load:0x3->NXM_NX_REG15[0..15],resubmit(,32)
>
> where 31 and 32 are the OVS port numbers of the tunnel interfaces
> connecting to R1 and R2.
>
> If the BFD forwarding status is down for any of the tunnels, the
> corresponding bucket will not be selected for packet forwarding.
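[Editorial note: a minimal sketch of the effect of watch_port, not the
actual ofproto logic. The port numbers 31/32 mirror the example above:
buckets whose watched port is BFD-down are excluded before a bucket is
chosen, so traffic only uses live tunnels.]

```python
import zlib

def select_live_bucket(buckets, port_is_live, flow_key):
    """buckets: list of (watch_port, action) pairs.
    port_is_live: maps an OVS port number to its BFD forwarding
    status. Dead buckets are filtered out before hashing."""
    live = [b for b in buckets if port_is_live.get(b[0], False)]
    if not live:
        return None  # every watched tunnel is down; nothing to pick
    return live[zlib.crc32(repr(flow_key).encode()) % len(live)]
```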
>
> Signed-off-by: Manoj Sharma <manoj.sharma at nutanix.com>


Hi Manoj,

Thanks for the patch. I haven't reviewed the complete patch yet; I
have a few initial comments.

The patch fails to compile when --enable-Werror and --enable-sparse
are enabled during configuration:

****
 -I ../ovn -I ./include -I
/home/nusiddiq/workspace_cpp/openvswitch/ovs/include -I
/home/nusiddiq/workspace_cpp/openvswitch/ovs/_gcc/include -I
/home/nusiddiq/workspace_cpp/openvswitch/ovs/lib -I
/home/nusiddiq/workspace_cpp/openvswitch/ovs/_gcc/lib -I
/home/nusiddiq/workspace_cpp/openvswitch/ovs -I
/home/nusiddiq/workspace_cpp/openvswitch/ovs/_gcc -I ../lib -I ./lib
 -Wstrict-prototypes -Wall -Wextra -Wno-sign-compare -Wpointer-arith
-Wformat -Wformat-security -Wswitch-enum -Wunused-parameter
-Wbad-function-cast -Wcast-align -Wstrict-prototypes
-Wold-style-definition -Wmissing-prototypes
-Wmissing-field-initializers -fno-strict-aliasing -Wswitch-bool
-Wlogical-not-parentheses -Wsizeof-array-argument -Wbool-compare
-Wshift-negative-value -Wduplicated-cond -Wshadow
-Wmultistatement-macros -Wcast-align=strict -Werror -Werror  -g -O2
-MT lib/ovn-util.lo -MD -MP -MF $depbase.Tpo -c -o lib/ovn-util.lo
../lib/ovn-util.c &&\
mv -f $depbase.Tpo $depbase.Plo
../utilities/ovn-nbctl.c:752:55: error: string too long (8208 bytes,
8191 bytes max)

*****

Can you please include the description from patch 0 in the actual
patch's commit message? The current commit message doesn't have much
detail.

In your above description, does the logical switch (with logical
ports lsp1 and lsp2) have a localnet port?

I am confused about how the packet goes out of the logical switch to
the external network. Are lsp1 and lsp2 bound to any VM/VIF?

From the test case you have written, I see that lsp21 and lsp22 are
part of the forwarding group and have VIF ports as well. So a packet
destined to the VIP will be delivered to one of the VIFs, and the
VIF/VM takes care of sending the traffic to the external routers?

Thanks
Numan

>
>  controller/lflow.c    |  20 ++++
>  controller/physical.c |  13 +++
>  controller/physical.h |   4 +
>  include/ovn/actions.h |  19 +++-
>  lib/actions.c         | 122 ++++++++++++++++++++++++
>  northd/ovn-northd.c   |  63 +++++++++++++
>  ovn-nb.ovsschema      |  18 +++-
>  ovn-nb.xml            |  35 +++++++
>  tests/ovn-nbctl.at    |  37 ++++++++
>  tests/ovn.at          | 124 ++++++++++++++++++++++++
>  utilities/ovn-nbctl.c | 254 ++++++++++++++++++++++++++++++++++++++++++++++++++
>  utilities/ovn-trace.c |   3 +
>  12 files changed, 709 insertions(+), 3 deletions(-)
>
> --
> 1.8.3.1
>
> _______________________________________________
> dev mailing list
> dev at openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
>

