[ovs-dev] [PATCH ovn 3/3] northd: Refactor Logical Flows for Gateway Router with DNAT/Load Balancers

Han Zhou hzhou at ovn.org
Wed Jun 23 02:00:01 UTC 2021


On Sat, Jun 19, 2021 at 2:52 AM Mark Gray <mark.d.gray at redhat.com> wrote:
>
> This patch addresses a number of interconnected issues with Gateway Routers
> that have Load Balancing enabled:
>
> 1) In the router pipeline, we have the following stages to handle
> dnat and unsnat.
>
>  - Stage 4 : lr_in_defrag (dnat zone)
>  - Stage 5 : lr_in_unsnat (snat zone)
>  - Stage 6 : lr_in_dnat   (dnat zone)
>
> In the reply direction, the order of traversal of the tables
> "lr_in_defrag", "lr_in_unsnat" and "lr_in_dnat" adds incorrect
> datapath flows that check ct_state in the wrong conntrack zone.
> This is illustrated below where reply traffic enters the physical host
> port (6) and traverses DNAT zone (14), SNAT zone (default), back to the
> DNAT zone and then on to Logical Switch Port zone (22). The third
> flow is incorrectly checking the state from the SNAT zone instead
> of the DNAT zone.
>
> recirc_id(0),in_port(6),ct_state(-new-est-rel-rpl-trk) actions:ct_clear,ct(zone=14),recirc(0xf)
> recirc_id(0xf),in_port(6) actions:ct(nat),recirc(0x10)
> recirc_id(0x10),in_port(6),ct_state(-new+est+trk) actions:ct(zone=14,nat),recirc(0x11)
> recirc_id(0x11),in_port(6),ct_state(+new-est-rel-rpl+trk) actions:ct(zone=22,nat),recirc(0x12)
> recirc_id(0x12),in_port(6),ct_state(-new+est-rel+rpl+trk) actions:5
>
> Update the order of these tables to resolve this.
>
> 2) Efficiencies can be gained by using the ct_dnat action in the
> table "lr_in_defrag" instead of ct_next. This removes the need for the
> ct_dnat action for established Load Balancer flows, avoiding a
> recirculation.
>
> 3) On a Gateway router with DNAT flows configured, the router will translate
> the destination IP address from (A) to (B). Reply packets from (B) are
> correctly UNDNATed in the reverse direction.
>
> However, if a new connection is established from (B), this flow is never
> committed to conntrack and, as such, is never established. This will
> cause OVS datapath flows to be added that match on the ct.new flag.
>
> For software-only datapaths this is not a problem. However, for
> datapaths that offload these flows to hardware, this may be problematic
> as some devices are unable to offload flows that match on ct.new.
>
> This patch resolves this by committing these flows to the DNAT zone in
> the new "lr_out_post_undnat" stage. Although this could be done in the
> ingress DNAT stage, doing it in the new stage avoids a recirculation.
>
> Co-authored-by: Numan Siddique <numans at ovn.org>
> Signed-off-by: Mark Gray <mark.d.gray at redhat.com>
> Signed-off-by: Numan Siddique <numans at ovn.org>

Thanks Mark and Numan. Please see some non-critical comments inlined.

> ---
>  northd/ovn-northd.8.xml | 143 +++++----
>  northd/ovn-northd.c     | 111 +++++--
>  northd/ovn_northd.dl    | 113 +++++--
>  tests/ovn-northd.at     | 674 +++++++++++++++++++++++++++++++++++-----
>  tests/ovn.at            |   6 +-
>  tests/system-ovn.at     |  13 +-
>  6 files changed, 853 insertions(+), 207 deletions(-)
>
> diff --git a/northd/ovn-northd.8.xml b/northd/ovn-northd.8.xml
> index 4074646029b4..d56a121d4d2e 100644
> --- a/northd/ovn-northd.8.xml
> +++ b/northd/ovn-northd.8.xml
> @@ -2628,39 +2628,9 @@ icmp6 {
>        </li>
>      </ul>
>
> -    <h3>Ingress Table 4: DEFRAG</h3>
>
> -    <p>
> -      This is to send packets to connection tracker for tracking and
> -      defragmentation.  It contains a priority-0 flow that simply moves traffic
> -      to the next table.
> -    </p>
> -
> -    <p>
> -      If load balancing rules with virtual IP addresses (and ports) are
> -      configured in <code>OVN_Northbound</code> database for a Gateway router,
> -      a priority-100 flow is added for each configured virtual IP address
> -      <var>VIP</var>. For IPv4 <var>VIPs</var> the flow matches <code>ip
> -      && ip4.dst == <var>VIP</var></code>.  For IPv6 <var>VIPs</var>,
> -      the flow matches <code>ip && ip6.dst == <var>VIP</var></code>.
> -      The flow uses the action <code>ct_next;</code> to send IP packets to the
> -      connection tracker for packet de-fragmentation and tracking before
> -      sending it to the next table.
> -    </p>
> -
> -    <p>
> -      If ECMP routes with symmetric reply are configured in the
> -      <code>OVN_Northbound</code> database for a gateway router, a priority-300
> -      flow is added for each router port on which symmetric replies are
> -      configured. The matching logic for these ports essentially reverses the
> -      configured logic of the ECMP route. So for instance, a route with a
> -      destination routing policy will instead match if the source IP address
> -      matches the static route's prefix. The flow uses the action
> -      <code>ct_next</code> to send IP packets to the connection tracker for
> -      packet de-fragmentation and tracking before sending it to the next table.
> -    </p>
>
> -    <h3>Ingress Table 5: UNSNAT</h3>
> +    <h3>Ingress Table 4: UNSNAT</h3>
>
>      <p>
>        This is for already established connections' reverse traffic.
> @@ -2669,7 +2639,7 @@ icmp6 {
>        unSNATted here.
>      </p>
>
> -    <p>Ingress Table 5: UNSNAT on Gateway and Distributed Routers</p>
> +    <p>Ingress Table 4: UNSNAT on Gateway and Distributed Routers</p>
>      <ul>
>        <li>
>          <p>
> @@ -2696,7 +2666,7 @@ icmp6 {
>        </li>
>      </ul>
>
> -    <p>Ingress Table 5: UNSNAT on Gateway Routers</p>
> +    <p>Ingress Table 4: UNSNAT on Gateway Routers</p>
>
>      <ul>
>        <li>
> @@ -2713,9 +2683,10 @@ icmp6 {
>           <code>lb_force_snat_ip=router_ip</code> then for every logical router
>           port <var>P</var> attached to the Gateway router with the router ip
>           <var>B</var>, a priority-110 flow is added with the match
> -          <code>inport == <var>P</var> && ip4.dst == <var>B</var></code> or
> -          <code>inport == <var>P</var> && ip6.dst == <var>B</var></code>
> -          with an action <code>ct_snat; </code>.
> +          <code>inport == <var>P</var> &&
> +          ip4.dst == <var>B</var></code> or <code>inport == <var>P</var>
> +          && ip6.dst == <var>B</var></code> with an action
> +          <code>ct_snat; </code>.
>          </p>
>
>          <p>
> @@ -2745,7 +2716,7 @@ icmp6 {
>        </li>
>      </ul>
>
> -    <p>Ingress Table 5: UNSNAT on Distributed Routers</p>
> +    <p>Ingress Table 4: UNSNAT on Distributed Routers</p>
>
>      <ul>
>        <li>
> @@ -2776,6 +2747,40 @@ icmp6 {
>        </li>
>      </ul>
>
> +    <h3>Ingress Table 5: DEFRAG</h3>
> +
> +    <p>
> +      This is to send packets to connection tracker for tracking and
> +      defragmentation.  It contains a priority-0 flow that simply moves traffic
> +      to the next table.
> +    </p>
> +
> +    <p>
> +      If load balancing rules with virtual IP addresses (and ports) are
> +      configured in <code>OVN_Northbound</code> database for a Gateway router,
> +      a priority-100 flow is added for each configured virtual IP address
> +      <var>VIP</var>. For IPv4 <var>VIPs</var> the flow matches <code>ip
> +      && ip4.dst == <var>VIP</var></code>.  For IPv6 <var>VIPs</var>,
> +      the flow matches <code>ip && ip6.dst == <var>VIP</var></code>.
> +      The flow applies the action <code>reg0 = <var>VIP</var>

nit: for IPv6 it is xxreg0.

> +      && ct_dnat;</code> to send IP packets to the
> +      connection tracker for packet de-fragmentation and to dnat the
> +      destination IP for the committed connection before sending it to the
> +      next table.
> +    </p>
> +
> +    <p>
> +      If ECMP routes with symmetric reply are configured in the
> +      <code>OVN_Northbound</code> database for a gateway router, a priority-300
> +      flow is added for each router port on which symmetric replies are
> +      configured. The matching logic for these ports essentially reverses the
> +      configured logic of the ECMP route. So for instance, a route with a
> +      destination routing policy will instead match if the source IP address
> +      matches the static route's prefix. The flow uses the action
> +      <code>ct_next</code> to send IP packets to the connection tracker for
> +      packet de-fragmentation and tracking before sending it to the next table.
> +    </p>
> +
>      <h3>Ingress Table 6: DNAT</h3>
>
>      <p>
> @@ -2828,19 +2833,28 @@ icmp6 {
>        </li>
>
>        <li>
> -        For all the configured load balancing rules for a router in
> -        <code>OVN_Northbound</code> database that includes a L4 port
> -        <var>PORT</var> of protocol <var>P</var> and IPv4 or IPv6 address
> -        <var>VIP</var>, a priority-120 flow that matches on
> -        <code>ct.est && ip && ip4.dst == <var>VIP</var>
> -        && <var>P</var> && <var>P</var>.dst == <var>PORT
> -        </var></code> (<code>ip6.dst == <var>VIP</var></code> in the IPv6 case)
> -        with an action of <code>ct_dnat;</code>. If the router is
> -        configured to force SNAT any load-balanced packets, the above action
> -        will be replaced by <code>flags.force_snat_for_lb = 1; ct_dnat;</code>.
> -        If the load balancing rule is configured with <code>skip_snat</code>
> -        set to true, the above action will be replaced by
> -        <code>flags.skip_snat_for_lb = 1; ct_dnat;</code>.
> +        <p>
> +          For all the configured load balancing rules for a router in
> +          <code>OVN_Northbound</code> database that includes a L4 port
> +          <var>PORT</var> of protocol <var>P</var> and IPv4 or IPv6 address
> +          <var>VIP</var>, a priority-120 flow that matches on
> +          <code>ct.est && ip && reg0 == <var>VIP</var>
> +          && <var>P</var> && <var>P</var>.dst == <var>PORT
> +          </var></code> (<code>xxreg0 == <var>VIP</var></code> in the
> +          IPv6 case) with an action of <code>next;</code>. If the router is
> +          configured to force SNAT any load-balanced packets, the above action
> +          will be replaced by <code>flags.force_snat_for_lb = 1; next;</code>.
> +          If the load balancing rule is configured with <code>skip_snat</code>
> +          set to true, the above action will be replaced by
> +          <code>flags.skip_snat_for_lb = 1; next;</code>.
> +        </p>
> +
> +        <p>
> +          Previous table <code>lr_in_defrag</code> sets the register
> +          <code>reg0</code> (or <code>xxreg0</code> for IPv6) and does
> +          <code>ct_dnat</code>.  Hence for established traffic, this
> +          table just advances the packet to the next stage.
> +        </p>
>        </li>
>
In this section "Ingress Table 6: DNAT", there are several paragraphs with
very similar text with only minor differences in the match condition, such
as "includes just an IP address ...", and you need to update all of those
paragraphs. I understand it is painful to update the redundant information,
and it may also be painful for the readers, but it seems even worse if it
is incorrect. Not sure if there is a better way to maintain this document.
(the DDlog code is easier to understand than the document)

>        <li>
> @@ -3876,7 +3890,26 @@ nd_ns {
>        </li>
>      </ul>
>
> -    <h3>Egress Table 1: SNAT</h3>
> +    <h3>Egress Table 1: Post UNDNAT on Gateway Routers</h3>
> +
> +    <p>
> +      <ul>
> +        <li>
> +          A priority-50 logical flow is added that commits any untracked flows
> +          from the previous table <code>lr_out_undnat</code>. This flow
> +          matches on <code>ct.new && ip</code> with action
> +          <code>ct_commit { } ; next; </code>.
> +        </li>
> +
> +        <li>
> +          A priority-0 logical flow with match <code>1</code> has actions
> +        <code>next;</code>.
> +        </li>
> +
> +      </ul>
> +    </p>
> +
> +    <h3>Egress Table 2: SNAT</h3>
>
>      <p>
>        Packets that are configured to be SNATed get their source IP address
> @@ -3892,7 +3925,7 @@ nd_ns {
>        </li>
>      </ul>
>
> -    <p>Egress Table 1: SNAT on Gateway Routers</p>
> +    <p>Egress Table 2: SNAT on Gateway Routers</p>
>
>      <ul>
>        <li>
> @@ -3991,7 +4024,7 @@ nd_ns {
>        </li>
>      </ul>
>
> -    <p>Egress Table 1: SNAT on Distributed Routers</p>
> +    <p>Egress Table 2: SNAT on Distributed Routers</p>
>
>      <ul>
>        <li>
> @@ -4051,7 +4084,7 @@ nd_ns {
>        </li>
>      </ul>
>
> -    <h3>Egress Table 2: Egress Loopback</h3>
> +    <h3>Egress Table 3: Egress Loopback</h3>
>
>      <p>
>        For distributed logical routers where one of the logical router
> @@ -4120,7 +4153,7 @@ clone {
>        </li>
>      </ul>
>
> -    <h3>Egress Table 3: Delivery</h3>
> +    <h3>Egress Table 4: Delivery</h3>
>
>      <p>
>        Packets that reach this table are ready for delivery.  It contains:
> diff --git a/northd/ovn-northd.c b/northd/ovn-northd.c
> index d97ab4a5b39c..27e5fbea9f4f 100644
> --- a/northd/ovn-northd.c
> +++ b/northd/ovn-northd.c
> @@ -187,8 +187,8 @@ enum ovn_stage {
>      PIPELINE_STAGE(ROUTER, IN,  LOOKUP_NEIGHBOR, 1, "lr_in_lookup_neighbor") \
>      PIPELINE_STAGE(ROUTER, IN,  LEARN_NEIGHBOR,  2, "lr_in_learn_neighbor") \
>      PIPELINE_STAGE(ROUTER, IN,  IP_INPUT,        3, "lr_in_ip_input")   \
> -    PIPELINE_STAGE(ROUTER, IN,  DEFRAG,          4, "lr_in_defrag")   \
> -    PIPELINE_STAGE(ROUTER, IN,  UNSNAT,          5, "lr_in_unsnat")   \
> +    PIPELINE_STAGE(ROUTER, IN,  UNSNAT,          4, "lr_in_unsnat")   \
> +    PIPELINE_STAGE(ROUTER, IN,  DEFRAG,          5, "lr_in_defrag")   \
>      PIPELINE_STAGE(ROUTER, IN,  DNAT,            6, "lr_in_dnat")   \
>      PIPELINE_STAGE(ROUTER, IN,  ECMP_STATEFUL,   7, "lr_in_ecmp_stateful") \
>      PIPELINE_STAGE(ROUTER, IN,  ND_RA_OPTIONS,   8, "lr_in_nd_ra_options") \
> @@ -204,10 +204,11 @@ enum ovn_stage {
>      PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     18, "lr_in_arp_request")  \
>                                                                        \
>      /* Logical router egress stages. */                               \
> -    PIPELINE_STAGE(ROUTER, OUT, UNDNAT,    0, "lr_out_undnat")        \
> -    PIPELINE_STAGE(ROUTER, OUT, SNAT,      1, "lr_out_snat")          \
> -    PIPELINE_STAGE(ROUTER, OUT, EGR_LOOP,  2, "lr_out_egr_loop")      \
> -    PIPELINE_STAGE(ROUTER, OUT, DELIVERY,  3, "lr_out_delivery")
> +    PIPELINE_STAGE(ROUTER, OUT, UNDNAT,      0, "lr_out_undnat")        \
> +    PIPELINE_STAGE(ROUTER, OUT, POST_UNDNAT, 1, "lr_out_post_undnat")   \
> +    PIPELINE_STAGE(ROUTER, OUT, SNAT,        2, "lr_out_snat")          \
> +    PIPELINE_STAGE(ROUTER, OUT, EGR_LOOP,    3, "lr_out_egr_loop")      \
> +    PIPELINE_STAGE(ROUTER, OUT, DELIVERY,    4, "lr_out_delivery")
>
>  #define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)   \
>      S_##DP_TYPE##_##PIPELINE##_##STAGE                          \
> @@ -643,6 +644,12 @@ struct ovn_datapath {
>      /* Multicast data. */
>      struct mcast_info mcast_info;
>
> +    /* Applies to only logical router datapath.
> +     * True if logical router is a gateway router. i.e options:chassis is set.
> +     * If this is true, then 'l3dgw_port' and 'l3redirect_port' will be
> +     * ignored. */
> +    bool is_gw_router;
> +

This looks redundant with the field l3dgw_port. The newly added bool is
used only once, and the other places are still using !l3dgw_port to check
whether it is a GW router. I suggest either keeping !l3dgw_port or unifying
all the checks to use the new bool.

>      /* OVN northd only needs to know about the logical router gateway port for
>       * NAT on a distributed router.  This "distributed gateway port" is
>       * populated only when there is a gateway chassis specified for one of
> @@ -1247,6 +1254,9 @@ join_datapaths(struct northd_context *ctx, struct hmap *datapaths,
>          init_mcast_info_for_datapath(od);
>          init_nat_entries(od);
>          init_lb_ips(od);
> +        if (smap_get(&od->nbr->options, "chassis")) {
> +            od->is_gw_router = true;
> +        }
>          ovs_list_push_back(lr_list, &od->lr_list);
>      }
>  }
> @@ -8731,20 +8741,33 @@ add_router_lb_flow(struct hmap *lflows, struct ovn_datapath *od,
>      }
>
>      /* A match and actions for established connections. */
> -    char *est_match = xasprintf("ct.est && %s", ds_cstr(match));
> +    struct ds est_match = DS_EMPTY_INITIALIZER;
> +    ds_put_format(&est_match,
> +                  "ct.est && ip && %sreg0 == %s && ct_label.natted == 1",
> +                  IN6_IS_ADDR_V4MAPPED(&lb_vip->vip) ? "" : "xx",
> +                  lb_vip->vip_str);
> +    if (lb_vip->vip_port) {
> +        ds_put_format(&est_match, " && %s", proto);
> +    }
> +    if (od->l3redirect_port &&
> +        (lb_vip->n_backends || !lb_vip->empty_backend_rej)) {
> +        ds_put_format(&est_match, " && is_chassis_resident(%s)",
> +                      od->l3redirect_port->json_key);
> +    }

This part is a little redundant with constructing the "match" in the caller
of this function. The only difference is matching ip.dst or "reg0/xxreg0".
It is not anything critical but it may be better to put the logic at the
same place, maybe just move the logic from the caller to this function.

>      if (snat_type == FORCE_SNAT || snat_type == SKIP_SNAT) {
> -        char *est_actions = xasprintf("flags.%s_snat_for_lb = 1; ct_dnat;",
> +        char *est_actions = xasprintf("flags.%s_snat_for_lb = 1; next;",
>                  snat_type == SKIP_SNAT ? "skip" : "force");
>          ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_DNAT, priority,
> -                                est_match, est_actions, &lb->header_);
> +                                ds_cstr(&est_match), est_actions,
> +                                &lb->header_);
>          free(est_actions);
>      } else {
>          ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_DNAT, priority,
> -                                est_match, "ct_dnat;", &lb->header_);
> +                                ds_cstr(&est_match), "next;", &lb->header_);
>      }
>
>      free(new_match);
> -    free(est_match);
> +    ds_destroy(&est_match);
>
>      const char *ip_match = NULL;
>      if (IN6_IS_ADDR_V4MAPPED(&lb_vip->vip)) {
> @@ -8829,8 +8852,8 @@ add_router_lb_flow(struct hmap *lflows, struct ovn_datapath *od,
>  static void
>  build_lrouter_lb_flows(struct hmap *lflows, struct ovn_datapath *od,
>                         struct hmap *lbs, struct shash *meter_groups,
> -                       struct sset *nat_entries, struct ds *match,
> -                       struct ds *actions)
> +                       struct sset *nat_entries,
> +                       struct ds *match, struct ds *actions)
>  {
>      /* A set to hold all ips that need defragmentation and tracking. */
>      struct sset all_ips = SSET_INITIALIZER(&all_ips);
> @@ -8852,10 +8875,17 @@ build_lrouter_lb_flows(struct hmap *lflows, struct ovn_datapath *od,
>          for (size_t j = 0; j < lb->n_vips; j++) {
>              struct ovn_lb_vip *lb_vip = &lb->vips[j];
>              struct ovn_northd_lb_vip *lb_vip_nb = &lb->vips_nb[j];
> +
> +            bool is_udp = nullable_string_is_equal(nb_lb->protocol, "udp");
> +            bool is_sctp = nullable_string_is_equal(nb_lb->protocol,
> +                                                    "sctp");
> +            const char *proto = is_udp ? "udp" : is_sctp ? "sctp" : "tcp";
> +
>              ds_clear(actions);
>              build_lb_vip_actions(lb_vip, lb_vip_nb, actions,
>                                   lb->selection_fields, false);
>
> +            struct ds defrag_actions = DS_EMPTY_INITIALIZER;
>              if (!sset_contains(&all_ips, lb_vip->vip_str)) {
>                  sset_add(&all_ips, lb_vip->vip_str);
>                  /* If there are any load balancing rules, we should send
> @@ -8867,17 +8897,28 @@ build_lrouter_lb_flows(struct hmap *lflows, struct ovn_datapath *od,
>                   * 2. If there are L4 ports in load balancing rules, we
>                   *    need the defragmentation to match on L4 ports. */
>                  ds_clear(match);
> +                ds_clear(&defrag_actions);
>                  if (IN6_IS_ADDR_V4MAPPED(&lb_vip->vip)) {
>                      ds_put_format(match, "ip && ip4.dst == %s",
>                                    lb_vip->vip_str);
> +                    ds_put_format(&defrag_actions, "reg0 = %s; ct_dnat;",
> +                                  lb_vip->vip_str);
>                  } else {
>                      ds_put_format(match, "ip && ip6.dst == %s",
>                                    lb_vip->vip_str);
> +                    ds_put_format(&defrag_actions, "xxreg0 = %s; ct_dnat;",
> +                                  lb_vip->vip_str);
> +                }
> +
> +                if (lb_vip->vip_port) {
> +                    ds_put_format(match, " && %s", proto);
>                  }
>                  ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_DEFRAG,
> -                                        100, ds_cstr(match), "ct_next;",
> +                                        100, ds_cstr(match),
> +                                        ds_cstr(&defrag_actions),
>                                          &nb_lb->header_);
>              }
> +            ds_destroy(&defrag_actions);
>
>              /* Higher priority rules are added for load-balancing in DNAT
>               * table.  For every match (on a VIP[:port]), we add two flows
> @@ -8886,18 +8927,14 @@ build_lrouter_lb_flows(struct hmap *lflows, struct ovn_datapath *od,
>               * flow is for ct.est with an action of "ct_dnat;". */
>              ds_clear(match);
>              if (IN6_IS_ADDR_V4MAPPED(&lb_vip->vip)) {
> -                ds_put_format(match, "ip && ip4.dst == %s",
> +                ds_put_format(match, "ip && reg0 == %s",
>                                lb_vip->vip_str);
>              } else {
> -                ds_put_format(match, "ip && ip6.dst == %s",
> +                ds_put_format(match, "ip && xxreg0 == %s",
>                                lb_vip->vip_str);
>              }
>
>              int prio = 110;
> -            bool is_udp = nullable_string_is_equal(nb_lb->protocol, "udp");
> -            bool is_sctp = nullable_string_is_equal(nb_lb->protocol,
> -                                                    "sctp");
> -            const char *proto = is_udp ? "udp" : is_sctp ? "sctp" : "tcp";
>
>              if (lb_vip->vip_port) {
>                  ds_put_format(match, " && %s && %s.dst == %d", proto,
> @@ -11400,8 +11437,7 @@ build_lrouter_out_undnat_flow(struct hmap *lflows, struct ovn_datapath *od,
>      * part of a reply. We undo the DNAT here.
>      *
>      * Note that this only applies for NAT on a distributed router.
> -    * Undo DNAT on a gateway router is done in the ingress DNAT
> -    * pipeline stage. */
> +    */
>      if (!od->l3dgw_port ||
>          (strcmp(nat->type, "dnat") && strcmp(nat->type, "dnat_and_snat"))) {
>          return;
> @@ -11681,9 +11717,22 @@ build_lrouter_nat_defrag_and_lb(struct ovn_datapath *od,
>      ovn_lflow_add(lflows, od, S_ROUTER_OUT_SNAT, 0, "1", "next;");
>      ovn_lflow_add(lflows, od, S_ROUTER_IN_DNAT, 0, "1", "next;");
>      ovn_lflow_add(lflows, od, S_ROUTER_OUT_UNDNAT, 0, "1", "next;");
> +    ovn_lflow_add(lflows, od, S_ROUTER_OUT_POST_UNDNAT, 0, "1", "next;");
>      ovn_lflow_add(lflows, od, S_ROUTER_OUT_EGR_LOOP, 0, "1", "next;");
>      ovn_lflow_add(lflows, od, S_ROUTER_IN_ECMP_STATEFUL, 0, "1", "next;");
>
> +    /* For Gateway routers, if the gateway router has load balancer or DNAT
> +     * rules, we commit newly initiated connections in the reply direction
> +     * to the DNAT zone. This ensures that these flows are tracked. If the
> +     * flow was not committed, it would produce ongoing datapath flows with
> +     * the ct.new flag set. Some NICs are unable to offload these flows.
> +     */
> +    if (od->is_gw_router &&
> +        (od->nbr->n_nat || od->nbr->n_load_balancer)) {
> +        ovn_lflow_add(lflows, od, S_ROUTER_OUT_POST_UNDNAT, 50,
> +                        "ip && ct.new", "ct_commit { } ; next; ");
> +    }
> +

Why do the commit for GW routers ONLY? I think it may be better to keep the
behavior consistent for both distributed routers and GW routers. I
understand that distributed routers have flows that match each individual
backend's IP [&& port], so there is less chance of going through this
stage, but it is still possible if a backend initiates a connection to the
outside (when the L4 port is not specified for the VIP, or the source port
happens to be the VIP's port). And it seems harmless to add the flow for
distributed routers if only very few connections would hit it.

Moreover, I think it is better to keep the behavior consistent between the
two types of routers also regarding the individual backend IP [&& port]
check. There were control plane scale concerns discussed. However, from
what I observed with the DP group feature enabled, the impact should be
minimal. So I'd suggest introducing the same checks for GW router datapaths
before going through the UNDNAT stage, to avoid the extra recirculation for
most use cases. It would be great to have a knob to turn this off when
there are scale concerns, but please feel free to add that as a follow-up
patch if that is ok.

>      /* Send the IPv6 NS packets to next table. When ovn-controller
>       * generates IPv6 NS (for the action - nd_ns{}), the injected
>       * packet would go through conntrack - which is not required. */
> @@ -11848,18 +11897,12 @@ build_lrouter_nat_defrag_and_lb(struct ovn_datapath *od,
>                      od->lb_force_snat_addrs.ipv6_addrs[0].addr_s, "lb");
>              }
>          }
> -
> -        /* For gateway router, re-circulate every packet through
> -         * the DNAT zone.  This helps with the following.
> -         *
> -         * Any packet that needs to be unDNATed in the reverse
> -         * direction gets unDNATed. Ideally this could be done in
> -         * the egress pipeline. But since the gateway router
> -         * does not have any feature that depends on the source
> -         * ip address being external IP address for IP routing,
> -         * we can do it here, saving a future re-circulation. */
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_DNAT, 50,
> -                      "ip", "flags.loopback = 1; ct_dnat;");
> +        /* For gateway router, re-circulate every packet through the DNAT zone
> +         * so that packets that need to be unDNATed in the reverse direction
> +         * get unDNATed.
> +         */
> +        ovn_lflow_add(lflows, od, S_ROUTER_OUT_UNDNAT, 50,
> +                "ip", "flags.loopback = 1; ct_dnat;");

This change looks reasonable to me, because I don't understand the comment
above regarding how doing the UNDNAT in the ROUTER_IN_DNAT stage would save
a recirculation.
Could you update this part in the ovn-northd.xml document to reflect this
change as well? It is still mentioned in the document "For Gateway routers,
the unDNAT processing is carried out in the ingress DNAT table."

Thanks,
Han

>      }
>
>      /* Load balancing and packet defrag are only valid on
> diff --git a/northd/ovn_northd.dl b/northd/ovn_northd.dl
> index 3afa80a3b549..727641ac6ae1 100644
> --- a/northd/ovn_northd.dl
> +++ b/northd/ovn_northd.dl
> @@ -1454,8 +1454,8 @@ function s_ROUTER_IN_ADMISSION():       Stage { Stage{Ingress,  0, "lr_in_admiss
>  function s_ROUTER_IN_LOOKUP_NEIGHBOR(): Stage { Stage{Ingress,  1, "lr_in_lookup_neighbor"} }
>  function s_ROUTER_IN_LEARN_NEIGHBOR():  Stage { Stage{Ingress,  2, "lr_in_learn_neighbor"} }
>  function s_ROUTER_IN_IP_INPUT():        Stage { Stage{Ingress,  3, "lr_in_ip_input"} }
> -function s_ROUTER_IN_DEFRAG():          Stage { Stage{Ingress,  4, "lr_in_defrag"} }
> -function s_ROUTER_IN_UNSNAT():          Stage { Stage{Ingress,  5, "lr_in_unsnat"} }
> +function s_ROUTER_IN_UNSNAT():          Stage { Stage{Ingress,  4, "lr_in_unsnat"} }
> +function s_ROUTER_IN_DEFRAG():          Stage { Stage{Ingress,  5, "lr_in_defrag"} }
>  function s_ROUTER_IN_DNAT():            Stage { Stage{Ingress,  6, "lr_in_dnat"} }
>  function s_ROUTER_IN_ECMP_STATEFUL():   Stage { Stage{Ingress,  7, "lr_in_ecmp_stateful"} }
>  function s_ROUTER_IN_ND_RA_OPTIONS():   Stage { Stage{Ingress,  8, "lr_in_nd_ra_options"} }
> @@ -1472,9 +1472,10 @@ function s_ROUTER_IN_ARP_REQUEST():     Stage { Stage{Ingress, 18, "lr_in_arp_re
>
>  /* Logical router egress stages. */
>  function s_ROUTER_OUT_UNDNAT():         Stage { Stage{ Egress,  0, "lr_out_undnat"} }
> -function s_ROUTER_OUT_SNAT():           Stage { Stage{ Egress,  1, "lr_out_snat"} }
> -function s_ROUTER_OUT_EGR_LOOP():       Stage { Stage{ Egress,  2, "lr_out_egr_loop"} }
> -function s_ROUTER_OUT_DELIVERY():       Stage { Stage{ Egress,  3, "lr_out_delivery"} }
> +function s_ROUTER_OUT_POST_UNDNAT():    Stage { Stage{ Egress,  1, "lr_out_post_undnat"} }
> +function s_ROUTER_OUT_SNAT():           Stage { Stage{ Egress,  2, "lr_out_snat"} }
> +function s_ROUTER_OUT_EGR_LOOP():       Stage { Stage{ Egress,  3, "lr_out_egr_loop"} }
> +function s_ROUTER_OUT_DELIVERY():       Stage { Stage{ Egress,  4, "lr_out_delivery"} }
>
>  /*
>   * OVS register usage:
> @@ -2886,7 +2887,8 @@ for (&Switch(._uuid = ls_uuid)) {
>  function get_match_for_lb_key(ip_address: v46_ip,
>                                port: bit<16>,
>                                protocol: Option<string>,
> -                              redundancy: bool): string = {
> +                              redundancy: bool,
> +                              use_nexthop_reg: bool): string = {
>      var port_match = if (port != 0) {
>          var proto = if (protocol == Some{"udp"}) {
>              "udp"
> @@ -2900,8 +2902,18 @@ function get_match_for_lb_key(ip_address: v46_ip,
>      };
>
>      var ip_match = match (ip_address) {
> -        IPv4{ipv4} -> "ip4.dst == ${ipv4}",
> -        IPv6{ipv6} -> "ip6.dst == ${ipv6}"
> +        IPv4{ipv4} ->
> +            if (use_nexthop_reg) {
> +                "${rEG_NEXT_HOP()} == ${ipv4}"
> +            } else {
> +                "ip4.dst == ${ipv4}"
> +            },
> +        IPv6{ipv6} ->
> +            if (use_nexthop_reg) {
> +                "xx${rEG_NEXT_HOP()} == ${ipv6}"
> +            } else {
> +                "ip6.dst == ${ipv6}"
> +            }
>      };
>
>      if (redundancy) { "ip && " } else { "" } ++ ip_match ++ port_match
> @@ -2935,7 +2947,11 @@ function build_lb_vip_actions(lbvip: Intern<LBVIPWithStatus>,
>      for (pair in lbvip.backends) {
>          (var backend, var up) = pair;
>          if (up) {
> -            up_backends.insert("${backend.ip.to_bracketed_string()}:${backend.port}")
> +            if (backend.port != 0) {
> +                up_backends.insert("${backend.ip.to_bracketed_string()}:${backend.port}")
> +            } else {
> +                up_backends.insert("${backend.ip.to_bracketed_string()}")
> +            }
>          }
>      };
>
> @@ -2981,7 +2997,7 @@ Flow(.logical_datapath = sw._uuid,
>
>          build_lb_vip_actions(lbvip, s_SWITCH_OUT_QOS_MARK(), actions0 ++ actions1)
>      },
> -    var __match = "ct.new && " ++ get_match_for_lb_key(lbvip.vip_addr, lbvip.vip_port, lb.protocol, false).
> +    var __match = "ct.new && " ++ get_match_for_lb_key(lbvip.vip_addr, lbvip.vip_port, lb.protocol, false, false).
>
>  /* Ingress Pre-Hairpin/Nat-Hairpin/Hairpin tabled (Priority 0).
>   * Packets that don't need hairpinning should continue processing.
> @@ -3019,7 +3035,7 @@ for (&Switch(._uuid = ls_uuid, .has_lb_vip = true)) {
>           .__match = "ip && ct.new && ct.trk && ${rEGBIT_HAIRPIN()} == 1",
>           .actions = "ct_snat_to_vip; next;",
>           .external_ids = stage_hint(ls_uuid));
> -
> +
>      /* If packet needs to be hairpinned, for established sessions there
>       * should already be an SNAT conntrack entry.
>       */
> @@ -5379,13 +5395,14 @@ function default_allow_flow(datapath: uuid,
stage: Stage): Flow {
>           .actions          = "next;",
>           .external_ids     = map_empty()}
>  }
> -for (&Router(._uuid = lr_uuid)) {
> +for (r in &Router(._uuid = lr_uuid)) {
>      /* Packets are allowed by default. */
>      Flow[default_allow_flow(lr_uuid, s_ROUTER_IN_DEFRAG())];
>      Flow[default_allow_flow(lr_uuid, s_ROUTER_IN_UNSNAT())];
>      Flow[default_allow_flow(lr_uuid, s_ROUTER_OUT_SNAT())];
>      Flow[default_allow_flow(lr_uuid, s_ROUTER_IN_DNAT())];
>      Flow[default_allow_flow(lr_uuid, s_ROUTER_OUT_UNDNAT())];
> +    Flow[default_allow_flow(lr_uuid, s_ROUTER_OUT_POST_UNDNAT())];
>      Flow[default_allow_flow(lr_uuid, s_ROUTER_OUT_EGR_LOOP())];
>      Flow[default_allow_flow(lr_uuid, s_ROUTER_IN_ECMP_STATEFUL())];
>
> @@ -5400,6 +5417,25 @@ for (&Router(._uuid = lr_uuid)) {
>           .external_ids     = map_empty())
>  }
>
> +for (r in &Router(._uuid = lr_uuid,
> +                  .is_gateway = is_gateway,
> +                  .nat = nat,
> +                  .load_balancer = load_balancer)
> +     if is_gateway and (not is_empty(nat) or not
is_empty(load_balancer))) {
> +    /* For Gateway routers, if the gateway router has load balancer or
DNAT
> +     * rules, we commit newly initiated connections in the reply
direction
> +     * to the DNAT zone. This ensures that these flows are tracked. If
the flow
> +     * was not committed, it would produce ongoing datapath flows with
the
> +     * ct.new flag set. Some NICs are unable to offload these flows.
> +     */
> +    Flow(.logical_datapath = lr_uuid,
> +        .stage            = s_ROUTER_OUT_POST_UNDNAT(),
> +        .priority         = 50,
> +        .__match          = "ip && ct.new",
> +        .actions          = "ct_commit { } ; next; ",
> +        .external_ids     = map_empty())
> +}
> +
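The generation condition in the rule above can be restated as a simple predicate; this sketch (hypothetical names, not part of the patch) captures when the new lr_out_post_undnat commit flow is emitted:

```python
# Only a gateway router with at least one NAT rule or load balancer gets
# the "ip && ct.new" -> "ct_commit { } ; next;" flow in lr_out_post_undnat.
def needs_post_undnat_commit(is_gateway, nat_rules, load_balancers):
    return is_gateway and (len(nat_rules) > 0 or len(load_balancers) > 0)
```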
>  Flow(.logical_datapath = lr,
>       .stage            = s_ROUTER_OUT_SNAT(),
>       .priority         = 120,
> @@ -5438,7 +5474,7 @@ function lrouter_nat_add_ext_ip_match(
>          Some{AllowedExtIps{__as}} -> (" && ${ipX}.${dir} == $${__as.name}",
None),
>          Some{ExemptedExtIps{__as}} -> {
>              /* Priority of logical flows corresponding to
exempted_ext_ips is
> -             * +1 of the corresponding regulr NAT rule.
> +             * +1 of the corresponding regular NAT rule.
>               * For example, if we have following NAT rule and we
associate
>               * exempted external ips to it:
>               * "ovn-nbctl lr-nat-add router dnat_and_snat 10.15.24.139
50.0.0.11"
> @@ -5746,8 +5782,7 @@ for (r in &Router(._uuid = lr_uuid,
>               * part of a reply. We undo the DNAT here.
>               *
>               * Note that this only applies for NAT on a distributed
router.
> -             * Undo DNAT on a gateway router is done in the ingress DNAT
> -             * pipeline stage. */
> +             */
>              if ((nat.nat.__type == "dnat" or nat.nat.__type ==
"dnat_and_snat")) {
>                  Some{var gwport} = l3dgw_port in
>                  var __match =
> @@ -5953,16 +5988,11 @@ for (r in &Router(._uuid = lr_uuid,
>                                      .context = "lb");
>
>         /* For gateway router, re-circulate every packet through
> -        * the DNAT zone.  This helps with the following.
> -        *
> -        * Any packet that needs to be unDNATed in the reverse
> -        * direction gets unDNATed. Ideally this could be done in
> -        * the egress pipeline. But since the gateway router
> -        * does not have any feature that depends on the source
> -        * ip address being external IP address for IP routing,
> -        * we can do it here, saving a future re-circulation. */
> +        * the DNAT zone so that packets that need to be unDNATed in the
reverse
> +        * direction get unDNATed.
> +        */
>          Flow(.logical_datapath = lr_uuid,
> -             .stage            = s_ROUTER_IN_DNAT(),
> +             .stage            = s_ROUTER_OUT_UNDNAT(),
>               .priority         = 50,
>               .__match          = "ip",
>               .actions          = "flags.loopback = 1; ct_dnat;",
> @@ -6024,7 +6054,16 @@ for (RouterLBVIP(
>           *    pick a DNAT ip address from a group.
>           * 2. If there are L4 ports in load balancing rules, we
>           *    need the defragmentation to match on L4 ports. */
> -        var __match = "ip && ${ipX}.dst == ${ip_address}" in
> +        var match1 = "ip && ${ipX}.dst == ${ip_address}" in
> +        var match2 =
> +            if (port != 0) {
> +                " && ${proto}"
> +            } else {
> +                ""
> +            } in
> +        var __match = match1 ++ match2 in
> +        var xx = ip_address.xxreg() in
> +        var __actions = "${xx}${rEG_NEXT_HOP()} = ${ip_address};
ct_dnat;" in
>          /* One of these flows must be created for each unique LB VIP
address.
>           * We create one for each VIP:port pair; flows with the same IP
and
>           * different port numbers will produce identical flows that will
> @@ -6033,7 +6072,7 @@ for (RouterLBVIP(
>               .stage            = s_ROUTER_IN_DEFRAG(),
>               .priority         = 100,
>               .__match          = __match,
> -             .actions          = "ct_next;",
> +             .actions          = __actions,
>               .external_ids     = stage_hint(lb._uuid));
>
>          /* Higher priority rules are added for load-balancing in DNAT
> @@ -6041,7 +6080,8 @@ for (RouterLBVIP(
>           * via add_router_lb_flow().  One flow is for specific matching
>           * on ct.new with an action of "ct_lb($targets);".  The other
>           * flow is for ct.est with an action of "ct_dnat;". */
> -        var match1 = "ip && ${ipX}.dst == ${ip_address}" in
> +        var xx = ip_address.xxreg() in
> +        var match1 = "ip && ${xx}${rEG_NEXT_HOP()} == ${ip_address}" in
>          (var prio, var match2) =
>              if (port != 0) {
>                  (120, " && ${proto} && ${proto}.dst == ${port}")
> @@ -6056,12 +6096,21 @@ for (RouterLBVIP(
>          var snat_for_lb = snat_for_lb(r.options, lb) in
>          {
>              /* A match and actions for established connections. */
> -            var est_match = "ct.est && " ++ __match in
> +            var est_match = "ct.est && " ++ match1 ++ " &&
ct_label.natted == 1" ++
> +                if (port != 0) {
> +                    " && ${proto}"
> +                } else {
> +                    ""
> +                } ++
> +                match ((l3dgw_port, backends != "" or
lb.options.get_bool_def("reject", false))) {
> +                    (Some{gwport}, true) -> " &&
is_chassis_resident(${redirect_port_name})",
> +                    _ -> ""
> +                } in
>              var actions =
>                  match (snat_for_lb) {
> -                    SkipSNAT -> "flags.skip_snat_for_lb = 1; ct_dnat;",
> -                    ForceSNAT -> "flags.force_snat_for_lb = 1; ct_dnat;",
> -                    _ -> "ct_dnat;"
> +                    SkipSNAT -> "flags.skip_snat_for_lb = 1; next;",
> +                    ForceSNAT -> "flags.force_snat_for_lb = 1; next;",
> +                    _ -> "next;"
>                  } in
>              Flow(.logical_datapath = lr_uuid,
>                   .stage            = s_ROUTER_IN_DNAT(),
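The est_match concatenation above is easy to misread in DDlog form; here is a hedged Python sketch of the resulting string (omitting the `is_chassis_resident()` clause when there is no l3dgw port). It reproduces the established-connection matches the updated tests expect, e.g. `ct.est && ip && reg0 == 10.0.0.10 && ct_label.natted == 1 && tcp`.

```python
# Sketch only: assembles the ct.est load-balancer match for lr_in_dnat.
def build_est_lb_match(vip, proto, port, redirect_port=None):
    m = f"ct.est && ip && reg0 == {vip} && ct_label.natted == 1"
    if port != 0:
        m += f" && {proto}"
    if redirect_port is not None:
        m += f' && is_chassis_resident("{redirect_port}")'
    return m
```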
> @@ -6152,7 +6201,7 @@ Flow(.logical_datapath = r._uuid,
>      r.load_balancer.contains(lb._uuid),
>      var __match
>          = "ct.new && " ++
> -          get_match_for_lb_key(lbvip.vip_addr, lbvip.vip_port,
lb.protocol, true) ++
> +          get_match_for_lb_key(lbvip.vip_addr, lbvip.vip_port,
lb.protocol, true, true) ++
>            match (r.l3dgw_port) {
>                Some{gwport} -> " &&
is_chassis_resident(${r.redirect_port_name})",
>                _ -> ""
> diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
> index d81975cb18a6..accaa87033f4 100644
> --- a/tests/ovn-northd.at
> +++ b/tests/ovn-northd.at
> @@ -1406,40 +1406,39 @@ AT_SETUP([ovn -- Load balancer VIP in NAT
entries])
>  AT_SKIP_IF([test $HAVE_PYTHON = no])
>  ovn_start
>
> -ovn-nbctl lr-add lr0
> -ovn-nbctl lrp-add lr0 lr0-public 00:00:01:01:02:04 192.168.2.1/24
> -ovn-nbctl lrp-add lr0 lr0-join 00:00:01:01:02:04 10.10.0.1/24
> +check ovn-nbctl lr-add lr0
> +check ovn-nbctl lrp-add lr0 lr0-public 00:00:01:01:02:04 192.168.2.1/24
> +check ovn-nbctl lrp-add lr0 lr0-join 00:00:01:01:02:04 10.10.0.1/24
>
> -ovn-nbctl set logical_router lr0 options:chassis=ch1
> +check ovn-nbctl set logical_router lr0 options:chassis=ch1
>
> -ovn-nbctl lb-add lb1 "192.168.2.1:8080" "10.0.0.4:8080"
> -ovn-nbctl lb-add lb2 "192.168.2.4:8080" "10.0.0.5:8080" udp
> -ovn-nbctl lb-add lb3 "192.168.2.5:8080" "10.0.0.6:8080"
> -ovn-nbctl lb-add lb4 "192.168.2.6:8080" "10.0.0.7:8080"
> +check ovn-nbctl lb-add lb1 "192.168.2.1:8080" "10.0.0.4:8080"
> +check ovn-nbctl lb-add lb2 "192.168.2.4:8080" "10.0.0.5:8080" udp
> +check ovn-nbctl lb-add lb3 "192.168.2.5:8080" "10.0.0.6:8080"
> +check ovn-nbctl lb-add lb4 "192.168.2.6:8080" "10.0.0.7:8080"
>
> -ovn-nbctl lr-lb-add lr0 lb1
> -ovn-nbctl lr-lb-add lr0 lb2
> -ovn-nbctl lr-lb-add lr0 lb3
> -ovn-nbctl lr-lb-add lr0 lb4
> +check ovn-nbctl lr-lb-add lr0 lb1
> +check ovn-nbctl lr-lb-add lr0 lb2
> +check ovn-nbctl lr-lb-add lr0 lb3
> +check ovn-nbctl lr-lb-add lr0 lb4
>
> -ovn-nbctl lr-nat-add lr0 snat 192.168.2.1 10.0.0.0/24
> -ovn-nbctl lr-nat-add lr0 dnat_and_snat 192.168.2.4 10.0.0.4
> +check ovn-nbctl lr-nat-add lr0 snat 192.168.2.1 10.0.0.0/24
> +check ovn-nbctl lr-nat-add lr0 dnat_and_snat 192.168.2.4 10.0.0.4
>  check ovn-nbctl --wait=sb lr-nat-add lr0 dnat 192.168.2.5 10.0.0.5
>
>  ovn-sbctl dump-flows lr0 > sbflows
>  AT_CAPTURE_FILE([sbflows])
>
> -OVS_WAIT_UNTIL([test 1 = $(grep lr_in_unsnat sbflows | \
> -grep "ip4 && ip4.dst == 192.168.2.1 && tcp && tcp.dst == 8080" -c) ])
> -
> -AT_CHECK([test 1 = $(grep lr_in_unsnat sbflows | \
> -grep "ip4 && ip4.dst == 192.168.2.4 && udp && udp.dst == 8080" -c) ])
> -
> -AT_CHECK([test 1 = $(grep lr_in_unsnat sbflows | \
> -grep "ip4 && ip4.dst == 192.168.2.5 && tcp && tcp.dst == 8080" -c) ])
> -
> -AT_CHECK([test 0 = $(grep lr_in_unsnat sbflows | \
> -grep "ip4 && ip4.dst == 192.168.2.6 && tcp && tcp.dst == 8080" -c) ])
> +# There should be no flows for LB VIPs in lr_in_unsnat if the VIP is not
a
> +# dnat_and_snat or snat entry.
> +AT_CHECK([grep "lr_in_unsnat" sbflows | sort], [0], [dnl
> +  table=4 (lr_in_unsnat       ), priority=0    , match=(1),
action=(next;)
> +  table=4 (lr_in_unsnat       ), priority=120  , match=(ip4 && ip4.dst
== 192.168.2.1 && tcp && tcp.dst == 8080), action=(next;)
> +  table=4 (lr_in_unsnat       ), priority=120  , match=(ip4 && ip4.dst
== 192.168.2.4 && udp && udp.dst == 8080), action=(next;)
> +  table=4 (lr_in_unsnat       ), priority=120  , match=(ip4 && ip4.dst
== 192.168.2.5 && tcp && tcp.dst == 8080), action=(next;)
> +  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst ==
192.168.2.1), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst ==
192.168.2.4), action=(ct_snat;)
> +])
>
>  AT_CLEANUP
>  ])
> @@ -1458,8 +1457,8 @@ ovn-nbctl set logical_router lr0
options:dnat_force_snat_ip=192.168.2.3
>  ovn-nbctl --wait=sb sync
>
>  AT_CHECK([ovn-sbctl lflow-list lr0 | grep lr_in_unsnat | sort], [0], [dnl
> -  table=5 (lr_in_unsnat       ), priority=0    , match=(1),
action=(next;)
> -  table=5 (lr_in_unsnat       ), priority=110  , match=(ip4 && ip4.dst
== 192.168.2.3), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=0    , match=(1),
action=(next;)
> +  table=4 (lr_in_unsnat       ), priority=110  , match=(ip4 && ip4.dst
== 192.168.2.3), action=(ct_snat;)
>  ])
>
>  AT_CLEANUP
> @@ -3163,14 +3162,28 @@ ovn-sbctl dump-flows lr0 > lr0flows
>  AT_CAPTURE_FILE([lr0flows])
>
>  AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
> -  table=5 (lr_in_unsnat       ), priority=0    , match=(1),
action=(next;)
> +  table=4 (lr_in_unsnat       ), priority=0    , match=(1),
action=(next;)
> +])
> +
> +AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
> +  table=5 (lr_in_defrag       ), priority=0    , match=(1),
action=(next;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
10.0.0.10 && tcp), action=(reg0 = 10.0.0.10; ct_dnat;)
>  ])
>
>  AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
>    table=6 (lr_in_dnat         ), priority=0    , match=(1),
action=(next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
ip4.dst == 10.0.0.10 && tcp && tcp.dst == 80), action=(ct_dnat;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
ip4.dst == 10.0.0.10 && tcp && tcp.dst == 80),
action=(ct_lb(backends=10.0.0.4:8080);)
> -  table=6 (lr_in_dnat         ), priority=50   , match=(ip),
action=(flags.loopback = 1; ct_dnat;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
reg0 == 10.0.0.10 && ct_label.natted == 1 && tcp), action=(next;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
reg0 == 10.0.0.10 && tcp && tcp.dst == 80),
action=(ct_lb(backends=10.0.0.4:8080);)
> +])
> +
> +AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
> +  table=0 (lr_out_undnat      ), priority=0    , match=(1),
action=(next;)
> +  table=0 (lr_out_undnat      ), priority=50   , match=(ip),
action=(flags.loopback = 1; ct_dnat;)
> +])
> +
> +AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
> +  table=1 (lr_out_post_undnat ), priority=0    , match=(1),
action=(next;)
> +  table=1 (lr_out_post_undnat ), priority=50   , match=(ip && ct.new),
action=(ct_commit { } ; next; )
>  ])
>
>  check ovn-nbctl --wait=sb set logical_router lr0
options:lb_force_snat_ip="20.0.0.4 aef0::4"
> @@ -3180,23 +3193,37 @@ AT_CAPTURE_FILE([lr0flows])
>
>
>  AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
> -  table=5 (lr_in_unsnat       ), priority=0    , match=(1),
action=(next;)
> -  table=5 (lr_in_unsnat       ), priority=110  , match=(ip4 && ip4.dst
== 20.0.0.4), action=(ct_snat;)
> -  table=5 (lr_in_unsnat       ), priority=110  , match=(ip6 && ip6.dst
== aef0::4), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=0    , match=(1),
action=(next;)
> +  table=4 (lr_in_unsnat       ), priority=110  , match=(ip4 && ip4.dst
== 20.0.0.4), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=110  , match=(ip6 && ip6.dst
== aef0::4), action=(ct_snat;)
> +])
> +
> +AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
> +  table=5 (lr_in_defrag       ), priority=0    , match=(1),
action=(next;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
10.0.0.10 && tcp), action=(reg0 = 10.0.0.10; ct_dnat;)
>  ])
>
>  AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
>    table=6 (lr_in_dnat         ), priority=0    , match=(1),
action=(next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
ip4.dst == 10.0.0.10 && tcp && tcp.dst == 80),
action=(flags.force_snat_for_lb = 1; ct_dnat;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
ip4.dst == 10.0.0.10 && tcp && tcp.dst == 80),
action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.4:8080);)
> -  table=6 (lr_in_dnat         ), priority=50   , match=(ip),
action=(flags.loopback = 1; ct_dnat;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
reg0 == 10.0.0.10 && ct_label.natted == 1 && tcp),
action=(flags.force_snat_for_lb = 1; next;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
reg0 == 10.0.0.10 && tcp && tcp.dst == 80), action=(flags.force_snat_for_lb
= 1; ct_lb(backends=10.0.0.4:8080);)
>  ])
>
>  AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
> -  table=1 (lr_out_snat        ), priority=0    , match=(1),
action=(next;)
> -  table=1 (lr_out_snat        ), priority=100  ,
match=(flags.force_snat_for_lb == 1 && ip4), action=(ct_snat(20.0.0.4);)
> -  table=1 (lr_out_snat        ), priority=100  ,
match=(flags.force_snat_for_lb == 1 && ip6), action=(ct_snat(aef0::4);)
> -  table=1 (lr_out_snat        ), priority=120  , match=(nd_ns),
action=(next;)
> +  table=2 (lr_out_snat        ), priority=0    , match=(1),
action=(next;)
> +  table=2 (lr_out_snat        ), priority=100  ,
match=(flags.force_snat_for_lb == 1 && ip4), action=(ct_snat(20.0.0.4);)
> +  table=2 (lr_out_snat        ), priority=100  ,
match=(flags.force_snat_for_lb == 1 && ip6), action=(ct_snat(aef0::4);)
> +  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns),
action=(next;)
> +])
> +
> +AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
> +  table=0 (lr_out_undnat      ), priority=0    , match=(1),
action=(next;)
> +  table=0 (lr_out_undnat      ), priority=50   , match=(ip),
action=(flags.loopback = 1; ct_dnat;)
> +])
> +
> +AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
> +  table=1 (lr_out_post_undnat ), priority=0    , match=(1),
action=(next;)
> +  table=1 (lr_out_post_undnat ), priority=50   , match=(ip && ct.new),
action=(ct_commit { } ; next; )
>  ])
>
>  check ovn-nbctl --wait=sb set logical_router lr0
options:lb_force_snat_ip="router_ip"
> @@ -3208,25 +3235,39 @@ AT_CHECK([grep "lr_in_ip_input" lr0flows | grep
"priority=60" | sort], [0], [dnl
>  ])
>
>  AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
> -  table=5 (lr_in_unsnat       ), priority=0    , match=(1),
action=(next;)
> -  table=5 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-public" && ip4.dst == 172.168.0.100), action=(ct_snat;)
> -  table=5 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-sw0" && ip4.dst == 10.0.0.1), action=(ct_snat;)
> -  table=5 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-sw1" && ip4.dst == 20.0.0.1), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=0    , match=(1),
action=(next;)
> +  table=4 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-public" && ip4.dst == 172.168.0.100), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-sw0" && ip4.dst == 10.0.0.1), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-sw1" && ip4.dst == 20.0.0.1), action=(ct_snat;)
> +])
> +
> +AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
> +  table=5 (lr_in_defrag       ), priority=0    , match=(1),
action=(next;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
10.0.0.10 && tcp), action=(reg0 = 10.0.0.10; ct_dnat;)
>  ])
>
>  AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
>    table=6 (lr_in_dnat         ), priority=0    , match=(1),
action=(next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
ip4.dst == 10.0.0.10 && tcp && tcp.dst == 80),
action=(flags.force_snat_for_lb = 1; ct_dnat;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
ip4.dst == 10.0.0.10 && tcp && tcp.dst == 80),
action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.4:8080);)
> -  table=6 (lr_in_dnat         ), priority=50   , match=(ip),
action=(flags.loopback = 1; ct_dnat;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
reg0 == 10.0.0.10 && ct_label.natted == 1 && tcp),
action=(flags.force_snat_for_lb = 1; next;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
reg0 == 10.0.0.10 && tcp && tcp.dst == 80), action=(flags.force_snat_for_lb
= 1; ct_lb(backends=10.0.0.4:8080);)
>  ])
>
>  AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
> -  table=1 (lr_out_snat        ), priority=0    , match=(1),
action=(next;)
> -  table=1 (lr_out_snat        ), priority=110  ,
match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"),
action=(ct_snat(172.168.0.100);)
> -  table=1 (lr_out_snat        ), priority=110  ,
match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"),
action=(ct_snat(10.0.0.1);)
> -  table=1 (lr_out_snat        ), priority=110  ,
match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw1"),
action=(ct_snat(20.0.0.1);)
> -  table=1 (lr_out_snat        ), priority=120  , match=(nd_ns),
action=(next;)
> +  table=2 (lr_out_snat        ), priority=0    , match=(1),
action=(next;)
> +  table=2 (lr_out_snat        ), priority=110  ,
match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"),
action=(ct_snat(172.168.0.100);)
> +  table=2 (lr_out_snat        ), priority=110  ,
match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"),
action=(ct_snat(10.0.0.1);)
> +  table=2 (lr_out_snat        ), priority=110  ,
match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw1"),
action=(ct_snat(20.0.0.1);)
> +  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns),
action=(next;)
> +])
> +
> +AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
> +  table=0 (lr_out_undnat      ), priority=0    , match=(1),
action=(next;)
> +  table=0 (lr_out_undnat      ), priority=50   , match=(ip),
action=(flags.loopback = 1; ct_dnat;)
> +])
> +
> +AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
> +  table=1 (lr_out_post_undnat ), priority=0    , match=(1),
action=(next;)
> +  table=1 (lr_out_post_undnat ), priority=50   , match=(ip && ct.new),
action=(ct_commit { } ; next; )
>  ])
>
>  check ovn-nbctl --wait=sb remove logical_router lr0 options chassis
> @@ -3235,12 +3276,12 @@ ovn-sbctl dump-flows lr0 > lr0flows
>  AT_CAPTURE_FILE([lr0flows])
>
>  AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
> -  table=5 (lr_in_unsnat       ), priority=0    , match=(1),
action=(next;)
> +  table=4 (lr_in_unsnat       ), priority=0    , match=(1),
action=(next;)
>  ])
>
>  AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
> -  table=1 (lr_out_snat        ), priority=0    , match=(1),
action=(next;)
> -  table=1 (lr_out_snat        ), priority=120  , match=(nd_ns),
action=(next;)
> +  table=2 (lr_out_snat        ), priority=0    , match=(1),
action=(next;)
> +  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns),
action=(next;)
>  ])
>
>  check ovn-nbctl set logical_router lr0 options:chassis=ch1
> @@ -3250,27 +3291,41 @@ ovn-sbctl dump-flows lr0 > lr0flows
>  AT_CAPTURE_FILE([lr0flows])
>
>  AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
> -  table=5 (lr_in_unsnat       ), priority=0    , match=(1),
action=(next;)
> -  table=5 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-public" && ip4.dst == 172.168.0.100), action=(ct_snat;)
> -  table=5 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-sw0" && ip4.dst == 10.0.0.1), action=(ct_snat;)
> -  table=5 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-sw1" && ip4.dst == 20.0.0.1), action=(ct_snat;)
> -  table=5 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-sw1" && ip6.dst == bef0::1), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=0    , match=(1),
action=(next;)
> +  table=4 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-public" && ip4.dst == 172.168.0.100), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-sw0" && ip4.dst == 10.0.0.1), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-sw1" && ip4.dst == 20.0.0.1), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-sw1" && ip6.dst == bef0::1), action=(ct_snat;)
> +])
> +
> +AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
> +  table=5 (lr_in_defrag       ), priority=0    , match=(1),
action=(next;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
10.0.0.10 && tcp), action=(reg0 = 10.0.0.10; ct_dnat;)
>  ])
>
>  AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
>    table=6 (lr_in_dnat         ), priority=0    , match=(1),
action=(next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
ip4.dst == 10.0.0.10 && tcp && tcp.dst == 80),
action=(flags.force_snat_for_lb = 1; ct_dnat;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
ip4.dst == 10.0.0.10 && tcp && tcp.dst == 80),
action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.4:8080);)
> -  table=6 (lr_in_dnat         ), priority=50   , match=(ip),
action=(flags.loopback = 1; ct_dnat;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
reg0 == 10.0.0.10 && ct_label.natted == 1 && tcp),
action=(flags.force_snat_for_lb = 1; next;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
reg0 == 10.0.0.10 && tcp && tcp.dst == 80), action=(flags.force_snat_for_lb
= 1; ct_lb(backends=10.0.0.4:8080);)
>  ])
>
>  AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
> -  table=1 (lr_out_snat        ), priority=0    , match=(1),
action=(next;)
> -  table=1 (lr_out_snat        ), priority=110  ,
match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"),
action=(ct_snat(172.168.0.100);)
> -  table=1 (lr_out_snat        ), priority=110  ,
match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"),
action=(ct_snat(10.0.0.1);)
> -  table=1 (lr_out_snat        ), priority=110  ,
match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw1"),
action=(ct_snat(20.0.0.1);)
> -  table=1 (lr_out_snat        ), priority=110  ,
match=(flags.force_snat_for_lb == 1 && ip6 && outport == "lr0-sw1"),
action=(ct_snat(bef0::1);)
> -  table=1 (lr_out_snat        ), priority=120  , match=(nd_ns),
action=(next;)
> +  table=2 (lr_out_snat        ), priority=0    , match=(1),
action=(next;)
> +  table=2 (lr_out_snat        ), priority=110  ,
match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"),
action=(ct_snat(172.168.0.100);)
> +  table=2 (lr_out_snat        ), priority=110  ,
match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"),
action=(ct_snat(10.0.0.1);)
> +  table=2 (lr_out_snat        ), priority=110  ,
match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw1"),
action=(ct_snat(20.0.0.1);)
> +  table=2 (lr_out_snat        ), priority=110  ,
match=(flags.force_snat_for_lb == 1 && ip6 && outport == "lr0-sw1"),
action=(ct_snat(bef0::1);)
> +  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns),
action=(next;)
> +])
> +
> +AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
> +  table=0 (lr_out_undnat      ), priority=0    , match=(1),
action=(next;)
> +  table=0 (lr_out_undnat      ), priority=50   , match=(ip),
action=(flags.loopback = 1; ct_dnat;)
> +])
> +
> +AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
> +  table=1 (lr_out_post_undnat ), priority=0    , match=(1),
action=(next;)
> +  table=1 (lr_out_post_undnat ), priority=50   , match=(ip && ct.new),
action=(ct_commit { } ; next; )
>  ])
>
>  check ovn-nbctl --wait=sb lb-add lb2 10.0.0.20:80 10.0.0.40:8080
> @@ -3280,20 +3335,35 @@ check ovn-nbctl --wait=sb lb-del lb1
>  ovn-sbctl dump-flows lr0 > lr0flows
>
>  AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
> -  table=5 (lr_in_unsnat       ), priority=0    , match=(1),
action=(next;)
> -  table=5 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-public" && ip4.dst == 172.168.0.100), action=(ct_snat;)
> -  table=5 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-sw0" && ip4.dst == 10.0.0.1), action=(ct_snat;)
> -  table=5 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-sw1" && ip4.dst == 20.0.0.1), action=(ct_snat;)
> -  table=5 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-sw1" && ip6.dst == bef0::1), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=0    , match=(1),
action=(next;)
> +  table=4 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-public" && ip4.dst == 172.168.0.100), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-sw0" && ip4.dst == 10.0.0.1), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-sw1" && ip4.dst == 20.0.0.1), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-sw1" && ip6.dst == bef0::1), action=(ct_snat;)
> +])
> +
> +AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
> +  table=5 (lr_in_defrag       ), priority=0    , match=(1),
action=(next;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
10.0.0.20 && tcp), action=(reg0 = 10.0.0.20; ct_dnat;)
>  ])
>
>  AT_CHECK([grep "lr_in_dnat" lr0flows | grep skip_snat_for_lb | sort],
[0], [dnl
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
ip4.dst == 10.0.0.20 && tcp && tcp.dst == 80),
action=(flags.skip_snat_for_lb = 1; ct_dnat;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
ip4.dst == 10.0.0.20 && tcp && tcp.dst == 80),
action=(flags.skip_snat_for_lb = 1; ct_lb(backends=10.0.0.40:8080);)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
reg0 == 10.0.0.20 && ct_label.natted == 1 && tcp),
action=(flags.skip_snat_for_lb = 1; next;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
reg0 == 10.0.0.20 && tcp && tcp.dst == 80), action=(flags.skip_snat_for_lb
= 1; ct_lb(backends=10.0.0.40:8080);)
>  ])
>
>  AT_CHECK([grep "lr_out_snat" lr0flows | grep skip_snat_for_lb | sort],
[0], [dnl
> -  table=1 (lr_out_snat        ), priority=120  ,
match=(flags.skip_snat_for_lb == 1 && ip), action=(next;)
> +  table=2 (lr_out_snat        ), priority=120  ,
match=(flags.skip_snat_for_lb == 1 && ip), action=(next;)
> +])
> +
> +AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
> +  table=0 (lr_out_undnat      ), priority=0    , match=(1),
action=(next;)
> +  table=0 (lr_out_undnat      ), priority=50   , match=(ip),
action=(flags.loopback = 1; ct_dnat;)
> +])
> +
> +AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
> +  table=1 (lr_out_post_undnat ), priority=0    , match=(1),
action=(next;)
> +  table=1 (lr_out_post_undnat ), priority=50   , match=(ip && ct.new),
action=(ct_commit { } ; next; )
>  ])
>
>  AT_CLEANUP
> @@ -3737,3 +3807,451 @@ AT_CHECK([ovn-trace --minimal 'inport ==
"sw1-port1" && eth.src == 50:54:00:00:0
>
>  AT_CLEANUP
>  ])
> +
> +OVN_FOR_EACH_NORTHD([
> +AT_SETUP([ovn -- LR NAT flows])
> +ovn_start
> +
> +check ovn-nbctl \
> +    -- ls-add sw0 \
> +    -- lb-add lb0 10.0.0.10:80 10.0.0.4:8080 \
> +    -- ls-lb-add sw0 lb0
> +
> +check ovn-nbctl lr-add lr0
> +check ovn-nbctl lrp-add lr0 lr0-sw0 00:00:00:00:ff:01 10.0.0.1/24
> +check ovn-nbctl lsp-add sw0 sw0-lr0
> +check ovn-nbctl lsp-set-type sw0-lr0 router
> +check ovn-nbctl lsp-set-addresses sw0-lr0 00:00:00:00:ff:01
> +check ovn-nbctl lsp-set-options sw0-lr0 router-port=lr0-sw0
> +
> +check ovn-nbctl --wait=sb sync
> +
> +ovn-sbctl dump-flows lr0 > lr0flows
> +AT_CAPTURE_FILE([lr0flows])
> +
> +AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
> +  table=4 (lr_in_unsnat       ), priority=0    , match=(1),
action=(next;)
> +])
> +
> +AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
> +  table=5 (lr_in_defrag       ), priority=0    , match=(1),
action=(next;)
> +])
> +
> +AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
> +  table=6 (lr_in_dnat         ), priority=0    , match=(1),
action=(next;)
> +])
> +
> +AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
> +  table=0 (lr_out_undnat      ), priority=0    , match=(1),
action=(next;)
> +])
> +
> +AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
> +  table=1 (lr_out_post_undnat ), priority=0    , match=(1),
action=(next;)
> +])
> +
> +AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
> +  table=2 (lr_out_snat        ), priority=0    , match=(1),
action=(next;)
> +  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns),
action=(next;)
> +])
> +
> +# Create a few NAT entries
> +
> +check ovn-nbctl lr-nat-add lr0 snat 172.168.0.10 10.0.0.0/24
> +check ovn-nbctl lr-nat-add lr0 dnat_and_snat 172.168.0.20 10.0.0.3
> +check ovn-nbctl lr-nat-add lr0 snat 172.168.0.30 10.0.0.10
> +
> +check ovn-nbctl --wait=sb sync
> +
> +ovn-sbctl dump-flows lr0 > lr0flows
> +AT_CAPTURE_FILE([lr0flows])
> +
> +AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
> +  table=4 (lr_in_unsnat       ), priority=0    , match=(1),
action=(next;)
> +])
> +
> +AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
> +  table=5 (lr_in_defrag       ), priority=0    , match=(1),
action=(next;)
> +])
> +
> +AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
> +  table=6 (lr_in_dnat         ), priority=0    , match=(1),
action=(next;)
> +])
> +
> +AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
> +  table=0 (lr_out_undnat      ), priority=0    , match=(1),
action=(next;)
> +])
> +
> +AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
> +  table=1 (lr_out_post_undnat ), priority=0    , match=(1),
action=(next;)
> +])
> +
> +AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
> +  table=2 (lr_out_snat        ), priority=0    , match=(1),
action=(next;)
> +  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns),
action=(next;)
> +])
> +
> +ovn-sbctl chassis-add gw1 geneve 127.0.0.1
> +
> +# Create a distributed gw port on lr0
> +check ovn-nbctl ls-add public
> +check ovn-nbctl lrp-add lr0 lr0-public 00:00:00:00:ff:02 172.168.0.10/24
> +check ovn-nbctl lrp-set-gateway-chassis lr0-public gw1
> +
> +ovn-nbctl lsp-add public public-lr0 -- set Logical_Switch_Port
public-lr0 \
> +    type=router options:router-port=lr0-public \
> +    -- lsp-set-addresses public-lr0 router
> +
> +check ovn-nbctl --wait=sb sync
> +
> +ovn-sbctl dump-flows lr0 > lr0flows
> +AT_CAPTURE_FILE([lr0flows])
> +
> +AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
> +  table=4 (lr_in_unsnat       ), priority=0    , match=(1),
action=(next;)
> +  table=4 (lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst ==
172.168.0.10 && inport == "lr0-public" &&
is_chassis_resident("cr-lr0-public")), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst ==
172.168.0.20 && inport == "lr0-public" &&
is_chassis_resident("cr-lr0-public")), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst ==
172.168.0.30 && inport == "lr0-public" &&
is_chassis_resident("cr-lr0-public")), action=(ct_snat;)
> +])
> +
> +AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
> +  table=5 (lr_in_defrag       ), priority=0    , match=(1),
action=(next;)
> +])
> +
> +AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
> +  table=6 (lr_in_dnat         ), priority=0    , match=(1),
action=(next;)
> +  table=6 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst ==
172.168.0.20 && inport == "lr0-public" &&
is_chassis_resident("cr-lr0-public")), action=(ct_dnat(10.0.0.3);)
> +])
> +
> +AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
> +  table=0 (lr_out_undnat      ), priority=0    , match=(1),
action=(next;)
> +  table=0 (lr_out_undnat      ), priority=100  , match=(ip && ip4.src ==
10.0.0.3 && outport == "lr0-public" &&
is_chassis_resident("cr-lr0-public")), action=(ct_dnat;)
> +])
> +
> +AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
> +  table=1 (lr_out_post_undnat ), priority=0    , match=(1),
action=(next;)
> +])
> +
> +AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
> +  table=2 (lr_out_snat        ), priority=0    , match=(1),
action=(next;)
> +  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns),
action=(next;)
> +  table=2 (lr_out_snat        ), priority=153  , match=(ip && ip4.src ==
10.0.0.0/24 && outport == "lr0-public" &&
is_chassis_resident("cr-lr0-public")), action=(ct_snat(172.168.0.10);)
> +  table=2 (lr_out_snat        ), priority=161  , match=(ip && ip4.src ==
10.0.0.10 && outport == "lr0-public" &&
is_chassis_resident("cr-lr0-public")), action=(ct_snat(172.168.0.30);)
> +  table=2 (lr_out_snat        ), priority=161  , match=(ip && ip4.src ==
10.0.0.3 && outport == "lr0-public" &&
is_chassis_resident("cr-lr0-public")), action=(ct_snat(172.168.0.20);)
> +])
> +
> +# Associate load balancers with lr0
> +
> +check ovn-nbctl lb-add lb0 172.168.0.100:8082 "10.0.0.50:82,10.0.0.60:82"
> +
> +# No L4
> +check ovn-nbctl lb-add lb1 172.168.0.200 "10.0.0.80,10.0.0.81"
> +check ovn-nbctl lb-add lb2 172.168.0.210:60 "10.0.0.50:6062,
10.0.0.60:6062" udp
> +
> +check ovn-nbctl lr-lb-add lr0 lb0
> +check ovn-nbctl lr-lb-add lr0 lb1
> +check ovn-nbctl lr-lb-add lr0 lb2
> +check ovn-nbctl --wait=sb sync
> +
> +ovn-sbctl dump-flows lr0 > lr0flows
> +AT_CAPTURE_FILE([lr0flows])
> +
> +AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
> +  table=4 (lr_in_unsnat       ), priority=0    , match=(1),
action=(next;)
> +  table=4 (lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst ==
172.168.0.10 && inport == "lr0-public" &&
is_chassis_resident("cr-lr0-public")), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst ==
172.168.0.20 && inport == "lr0-public" &&
is_chassis_resident("cr-lr0-public")), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst ==
172.168.0.30 && inport == "lr0-public" &&
is_chassis_resident("cr-lr0-public")), action=(ct_snat;)
> +])
> +
> +AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
> +  table=5 (lr_in_defrag       ), priority=0    , match=(1),
action=(next;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
10.0.0.10 && tcp), action=(reg0 = 10.0.0.10; ct_dnat;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
172.168.0.100 && tcp), action=(reg0 = 172.168.0.100; ct_dnat;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
172.168.0.200), action=(reg0 = 172.168.0.200; ct_dnat;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
172.168.0.210 && udp), action=(reg0 = 172.168.0.210; ct_dnat;)
> +])
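Worth noting for reviewers: in the "lr_in_defrag" expectations above, a VIP that carries a port also carries an L4 protocol term ("&& tcp" or "&& udp"), while the port-less VIP (lb1, 172.168.0.200) matches on the destination IP alone. A minimal sketch of that match-building rule, under the assumption of a simple `proto` argument ("tcp"/"udp"/None) -- the helper name is illustrative, not northd's actual code:

```python
def defrag_match(vip_ip, proto=None):
    """Build an lr_in_defrag-style match for a load-balancer VIP.

    A VIP configured with a port carries its L4 protocol; a port-less
    VIP matches on the destination IP alone.
    """
    match = "ip && ip4.dst == " + vip_ip
    if proto:  # port-less VIPs omit the L4 term
        match += " && " + proto
    return match

# Mirrors the three flavours in the flow dump above.
print(defrag_match("172.168.0.100", "tcp"))  # ip && ip4.dst == 172.168.0.100 && tcp
print(defrag_match("172.168.0.200"))         # ip && ip4.dst == 172.168.0.200
print(defrag_match("172.168.0.210", "udp"))  # ip && ip4.dst == 172.168.0.210 && udp
```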
> +
> +AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
> +  table=6 (lr_in_dnat         ), priority=0    , match=(1),
action=(next;)
> +  table=6 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst ==
172.168.0.20 && inport == "lr0-public" &&
is_chassis_resident("cr-lr0-public")), action=(ct_dnat(10.0.0.3);)
> +  table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip &&
reg0 == 172.168.0.200 && ct_label.natted == 1 &&
is_chassis_resident("cr-lr0-public")), action=(next;)
> +  table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip &&
reg0 == 172.168.0.200 && is_chassis_resident("cr-lr0-public")),
action=(ct_lb(backends=10.0.0.80,10.0.0.81);)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
reg0 == 10.0.0.10 && ct_label.natted == 1 && tcp &&
is_chassis_resident("cr-lr0-public")), action=(next;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
reg0 == 172.168.0.100 && ct_label.natted == 1 && tcp &&
is_chassis_resident("cr-lr0-public")), action=(next;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
reg0 == 172.168.0.210 && ct_label.natted == 1 && udp &&
is_chassis_resident("cr-lr0-public")), action=(next;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
reg0 == 10.0.0.10 && tcp && tcp.dst == 80 &&
is_chassis_resident("cr-lr0-public")), action=(ct_lb(backends=10.0.0.4:8080
);)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
reg0 == 172.168.0.100 && tcp && tcp.dst == 8082 &&
is_chassis_resident("cr-lr0-public")), action=(ct_lb(backends=10.0.0.50:82
,10.0.0.60:82);)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
reg0 == 172.168.0.210 && udp && udp.dst == 60 &&
is_chassis_resident("cr-lr0-public")), action=(ct_lb(backends=10.0.0.50:6062
,10.0.0.60:6062);)
> +])
> +
> +AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
> +  table=0 (lr_out_undnat      ), priority=0    , match=(1),
action=(next;)
> +  table=0 (lr_out_undnat      ), priority=100  , match=(ip && ip4.src ==
10.0.0.3 && outport == "lr0-public" &&
is_chassis_resident("cr-lr0-public")), action=(ct_dnat;)
> +  table=0 (lr_out_undnat      ), priority=120  , match=(ip4 && ((ip4.src
== 10.0.0.4 && tcp.src == 8080)) && outport == "lr0-public" &&
is_chassis_resident("cr-lr0-public")), action=(ct_dnat;)
> +  table=0 (lr_out_undnat      ), priority=120  , match=(ip4 && ((ip4.src
== 10.0.0.50 && tcp.src == 82) || (ip4.src == 10.0.0.60 && tcp.src == 82))
&& outport == "lr0-public" && is_chassis_resident("cr-lr0-public")),
action=(ct_dnat;)
> +  table=0 (lr_out_undnat      ), priority=120  , match=(ip4 && ((ip4.src
== 10.0.0.50 && udp.src == 6062) || (ip4.src == 10.0.0.60 && udp.src ==
6062)) && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")),
action=(ct_dnat;)
> +  table=0 (lr_out_undnat      ), priority=120  , match=(ip4 && ((ip4.src
== 10.0.0.80) || (ip4.src == 10.0.0.81)) && outport == "lr0-public" &&
is_chassis_resident("cr-lr0-public")), action=(ct_dnat;)
> +])
> +
> +AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
> +  table=1 (lr_out_post_undnat ), priority=0    , match=(1),
action=(next;)
> +])
> +
> +AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
> +  table=2 (lr_out_snat        ), priority=0    , match=(1),
action=(next;)
> +  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns),
action=(next;)
> +  table=2 (lr_out_snat        ), priority=153  , match=(ip && ip4.src ==
10.0.0.0/24 && outport == "lr0-public" &&
is_chassis_resident("cr-lr0-public")), action=(ct_snat(172.168.0.10);)
> +  table=2 (lr_out_snat        ), priority=161  , match=(ip && ip4.src ==
10.0.0.10 && outport == "lr0-public" &&
is_chassis_resident("cr-lr0-public")), action=(ct_snat(172.168.0.30);)
> +  table=2 (lr_out_snat        ), priority=161  , match=(ip && ip4.src ==
10.0.0.3 && outport == "lr0-public" &&
is_chassis_resident("cr-lr0-public")), action=(ct_snat(172.168.0.20);)
> +])
> +
> +# Make the logical router a gateway router
> +check ovn-nbctl clear logical_router_port lr0-public gateway_chassis
> +check ovn-nbctl set logical_router lr0 options:chassis=gw1
> +check ovn-nbctl --wait=sb sync
> +
> +ovn-sbctl dump-flows lr0 > lr0flows
> +AT_CAPTURE_FILE([lr0flows])
> +
> +
> +AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
> +  table=4 (lr_in_unsnat       ), priority=0    , match=(1),
action=(next;)
> +  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst ==
172.168.0.10), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst ==
172.168.0.20), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst ==
172.168.0.30), action=(ct_snat;)
> +])
> +
> +AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
> +  table=5 (lr_in_defrag       ), priority=0    , match=(1),
action=(next;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
10.0.0.10 && tcp), action=(reg0 = 10.0.0.10; ct_dnat;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
172.168.0.100 && tcp), action=(reg0 = 172.168.0.100; ct_dnat;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
172.168.0.200), action=(reg0 = 172.168.0.200; ct_dnat;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
172.168.0.210 && udp), action=(reg0 = 172.168.0.210; ct_dnat;)
> +])
> +
> +AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
> +  table=6 (lr_in_dnat         ), priority=0    , match=(1),
action=(next;)
> +  table=6 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst ==
172.168.0.20), action=(flags.loopback = 1; ct_dnat(10.0.0.3);)
> +  table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip &&
reg0 == 172.168.0.200 && ct_label.natted == 1), action=(next;)
> +  table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip &&
reg0 == 172.168.0.200), action=(ct_lb(backends=10.0.0.80,10.0.0.81);)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
reg0 == 10.0.0.10 && ct_label.natted == 1 && tcp), action=(next;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
reg0 == 172.168.0.100 && ct_label.natted == 1 && tcp), action=(next;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
reg0 == 172.168.0.210 && ct_label.natted == 1 && udp), action=(next;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
reg0 == 10.0.0.10 && tcp && tcp.dst == 80),
action=(ct_lb(backends=10.0.0.4:8080);)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
reg0 == 172.168.0.100 && tcp && tcp.dst == 8082), action=(ct_lb(backends=
10.0.0.50:82,10.0.0.60:82);)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
reg0 == 172.168.0.210 && udp && udp.dst == 60), action=(ct_lb(backends=
10.0.0.50:6062,10.0.0.60:6062);)
> +])
> +
> +AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
> +  table=0 (lr_out_undnat      ), priority=0    , match=(1),
action=(next;)
> +  table=0 (lr_out_undnat      ), priority=50   , match=(ip),
action=(flags.loopback = 1; ct_dnat;)
> +])
> +
> +AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
> +  table=1 (lr_out_post_undnat ), priority=0    , match=(1),
action=(next;)
> +  table=1 (lr_out_post_undnat ), priority=50   , match=(ip && ct.new),
action=(ct_commit { } ; next; )
> +])
> +
> +AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
> +  table=2 (lr_out_snat        ), priority=0    , match=(1),
action=(next;)
> +  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns),
action=(next;)
> +  table=2 (lr_out_snat        ), priority=25   , match=(ip && ip4.src ==
10.0.0.0/24), action=(ct_snat(172.168.0.10);)
> +  table=2 (lr_out_snat        ), priority=33   , match=(ip && ip4.src ==
10.0.0.10), action=(ct_snat(172.168.0.30);)
> +  table=2 (lr_out_snat        ), priority=33   , match=(ip && ip4.src ==
10.0.0.3), action=(ct_snat(172.168.0.20);)
> +])
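The SNAT priorities in the expectations follow a consistent pattern: one more than the NAT entry's prefix length (25 for 10.0.0.0/24, 33 for the /32 entries), boosted by 128 when the flow is pinned to a distributed gateway port (153 and 161 in the earlier dumps). This is an inferred pattern from the test output, not a quote of northd code; a sketch:

```python
def snat_priority(prefix_len, distributed_gw_port=False):
    """Priority pattern observed in the lr_out_snat dumps: prefix
    length plus one, with a +128 boost for flows constrained to a
    distributed gateway port (inferred from the expected output)."""
    prio = prefix_len + 1
    if distributed_gw_port:
        prio += 128
    return prio

print(snat_priority(24))        # 25  (10.0.0.0/24 on a gateway router)
print(snat_priority(32))        # 33  (a /32 entry)
print(snat_priority(24, True))  # 153 (distributed gw port case)
print(snat_priority(32, True))  # 161
```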
> +
> +# Set lb_force_snat_ip on the logical router.
> +check ovn-nbctl --wait=sb set logical_router lr0
options:lb_force_snat_ip="router_ip"
> +check ovn-nbctl --wait=sb sync
> +
> +ovn-sbctl dump-flows lr0 > lr0flows
> +AT_CAPTURE_FILE([lr0flows])
> +
> +AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
> +  table=4 (lr_in_unsnat       ), priority=0    , match=(1),
action=(next;)
> +  table=4 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-public" && ip4.dst == 172.168.0.10), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-sw0" && ip4.dst == 10.0.0.1), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst ==
172.168.0.10), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst ==
172.168.0.20), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst ==
172.168.0.30), action=(ct_snat;)
> +])
> +
> +AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
> +  table=5 (lr_in_defrag       ), priority=0    , match=(1),
action=(next;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
10.0.0.10 && tcp), action=(reg0 = 10.0.0.10; ct_dnat;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
172.168.0.100 && tcp), action=(reg0 = 172.168.0.100; ct_dnat;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
172.168.0.200), action=(reg0 = 172.168.0.200; ct_dnat;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
172.168.0.210 && udp), action=(reg0 = 172.168.0.210; ct_dnat;)
> +])
> +
> +AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
> +  table=6 (lr_in_dnat         ), priority=0    , match=(1),
action=(next;)
> +  table=6 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst ==
172.168.0.20), action=(flags.loopback = 1; ct_dnat(10.0.0.3);)
> +  table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip &&
reg0 == 172.168.0.200 && ct_label.natted == 1),
action=(flags.force_snat_for_lb = 1; next;)
> +  table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip &&
reg0 == 172.168.0.200), action=(flags.force_snat_for_lb = 1;
ct_lb(backends=10.0.0.80,10.0.0.81);)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
reg0 == 10.0.0.10 && ct_label.natted == 1 && tcp),
action=(flags.force_snat_for_lb = 1; next;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
reg0 == 172.168.0.100 && ct_label.natted == 1 && tcp),
action=(flags.force_snat_for_lb = 1; next;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
reg0 == 172.168.0.210 && ct_label.natted == 1 && udp),
action=(flags.force_snat_for_lb = 1; next;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
reg0 == 10.0.0.10 && tcp && tcp.dst == 80), action=(flags.force_snat_for_lb
= 1; ct_lb(backends=10.0.0.4:8080);)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
reg0 == 172.168.0.100 && tcp && tcp.dst == 8082),
action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.50:82
,10.0.0.60:82);)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
reg0 == 172.168.0.210 && udp && udp.dst == 60),
action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.50:6062
,10.0.0.60:6062);)
> +])
> +
> +AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
> +  table=0 (lr_out_undnat      ), priority=0    , match=(1),
action=(next;)
> +  table=0 (lr_out_undnat      ), priority=50   , match=(ip),
action=(flags.loopback = 1; ct_dnat;)
> +])
> +
> +AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
> +  table=1 (lr_out_post_undnat ), priority=0    , match=(1),
action=(next;)
> +  table=1 (lr_out_post_undnat ), priority=50   , match=(ip && ct.new),
action=(ct_commit { } ; next; )
> +])
> +
> +AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
> +  table=2 (lr_out_snat        ), priority=0    , match=(1),
action=(next;)
> +  table=2 (lr_out_snat        ), priority=110  ,
match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"),
action=(ct_snat(172.168.0.10);)
> +  table=2 (lr_out_snat        ), priority=110  ,
match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"),
action=(ct_snat(10.0.0.1);)
> +  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns),
action=(next;)
> +  table=2 (lr_out_snat        ), priority=25   , match=(ip && ip4.src ==
10.0.0.0/24), action=(ct_snat(172.168.0.10);)
> +  table=2 (lr_out_snat        ), priority=33   , match=(ip && ip4.src ==
10.0.0.10), action=(ct_snat(172.168.0.30);)
> +  table=2 (lr_out_snat        ), priority=33   , match=(ip && ip4.src ==
10.0.0.3), action=(ct_snat(172.168.0.20);)
> +])
> +
> +# Add an LB VIP that is the same as the router IP.
> +check ovn-nbctl lb-add lb0 172.168.0.10:9082 "10.0.0.50:82,10.0.0.60:82"
> +check ovn-nbctl --wait=sb sync
> +
> +ovn-sbctl dump-flows lr0 > lr0flows
> +AT_CAPTURE_FILE([lr0flows])
> +
> +AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
> +  table=4 (lr_in_unsnat       ), priority=0    , match=(1),
action=(next;)
> +  table=4 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-public" && ip4.dst == 172.168.0.10), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-sw0" && ip4.dst == 10.0.0.1), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=120  , match=(ip4 && ip4.dst
== 172.168.0.10 && tcp && tcp.dst == 9082), action=(next;)
> +  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst ==
172.168.0.10), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst ==
172.168.0.20), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst ==
172.168.0.30), action=(ct_snat;)
> +])
> +
> +AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
> +  table=5 (lr_in_defrag       ), priority=0    , match=(1),
action=(next;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
10.0.0.10 && tcp), action=(reg0 = 10.0.0.10; ct_dnat;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
172.168.0.10 && tcp), action=(reg0 = 172.168.0.10; ct_dnat;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
172.168.0.100 && tcp), action=(reg0 = 172.168.0.100; ct_dnat;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
172.168.0.200), action=(reg0 = 172.168.0.200; ct_dnat;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
172.168.0.210 && udp), action=(reg0 = 172.168.0.210; ct_dnat;)
> +])
> +
> +AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
> +  table=6 (lr_in_dnat         ), priority=0    , match=(1),
action=(next;)
> +  table=6 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst ==
172.168.0.20), action=(flags.loopback = 1; ct_dnat(10.0.0.3);)
> +  table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip &&
reg0 == 172.168.0.200 && ct_label.natted == 1),
action=(flags.force_snat_for_lb = 1; next;)
> +  table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip &&
reg0 == 172.168.0.200), action=(flags.force_snat_for_lb = 1;
ct_lb(backends=10.0.0.80,10.0.0.81);)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
reg0 == 10.0.0.10 && ct_label.natted == 1 && tcp),
action=(flags.force_snat_for_lb = 1; next;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
reg0 == 172.168.0.10 && ct_label.natted == 1 && tcp),
action=(flags.force_snat_for_lb = 1; next;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
reg0 == 172.168.0.100 && ct_label.natted == 1 && tcp),
action=(flags.force_snat_for_lb = 1; next;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
reg0 == 172.168.0.210 && ct_label.natted == 1 && udp),
action=(flags.force_snat_for_lb = 1; next;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
reg0 == 10.0.0.10 && tcp && tcp.dst == 80), action=(flags.force_snat_for_lb
= 1; ct_lb(backends=10.0.0.4:8080);)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
reg0 == 172.168.0.10 && tcp && tcp.dst == 9082),
action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.50:82
,10.0.0.60:82);)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
reg0 == 172.168.0.100 && tcp && tcp.dst == 8082),
action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.50:82
,10.0.0.60:82);)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
reg0 == 172.168.0.210 && udp && udp.dst == 60),
action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.50:6062
,10.0.0.60:6062);)
> +])
> +
> +AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
> +  table=0 (lr_out_undnat      ), priority=0    , match=(1),
action=(next;)
> +  table=0 (lr_out_undnat      ), priority=50   , match=(ip),
action=(flags.loopback = 1; ct_dnat;)
> +])
> +
> +AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
> +  table=1 (lr_out_post_undnat ), priority=0    , match=(1),
action=(next;)
> +  table=1 (lr_out_post_undnat ), priority=50   , match=(ip && ct.new),
action=(ct_commit { } ; next; )
> +])
> +
> +AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
> +  table=2 (lr_out_snat        ), priority=0    , match=(1),
action=(next;)
> +  table=2 (lr_out_snat        ), priority=110  ,
match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"),
action=(ct_snat(172.168.0.10);)
> +  table=2 (lr_out_snat        ), priority=110  ,
match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"),
action=(ct_snat(10.0.0.1);)
> +  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns),
action=(next;)
> +  table=2 (lr_out_snat        ), priority=25   , match=(ip && ip4.src ==
10.0.0.0/24), action=(ct_snat(172.168.0.10);)
> +  table=2 (lr_out_snat        ), priority=33   , match=(ip && ip4.src ==
10.0.0.10), action=(ct_snat(172.168.0.30);)
> +  table=2 (lr_out_snat        ), priority=33   , match=(ip && ip4.src ==
10.0.0.3), action=(ct_snat(172.168.0.20);)
> +])
> +
> +# Add IPv6 router ports and an IPv6 LB VIP.
> +check ovn-nbctl lrp-del lr0-sw0
> +check ovn-nbctl lrp-del lr0-public
> +check ovn-nbctl lrp-add lr0 lr0-sw0 00:00:00:00:ff:01 10.0.0.1/24 aef0::1
> +check ovn-nbctl lrp-add lr0 lr0-public 00:00:00:00:ff:02 172.168.0.10/24
def0::10
> +
> +lb1_uuid=$(fetch_column nb:Load_Balancer _uuid name=lb1)
> +ovn-nbctl set load_balancer $lb1_uuid
vips:'"[[def0::2]]:8000"'='"@<:@aef0::2@:>@:80,@<:@aef0::3@:>@:80"'
> +
> +ovn-nbctl list load_Balancer
> +check ovn-nbctl --wait=sb sync
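For anyone reading the vips setting above: the `@<:@` and `@:>@` sequences are Autoconf quadrigraphs that expand to literal `[` and `]` inside Autotest macros, so the value stored in the database is `[aef0::2]:80,[aef0::3]:80`. A quick illustrative expansion (outside the test harness):

```python
# Autoconf quadrigraphs for the square brackets that m4 would
# otherwise treat as quote characters.
QUADRIGRAPHS = {"@<:@": "[", "@:>@": "]"}

def expand(s):
    """Expand Autoconf quadrigraphs to their literal characters."""
    for quad, ch in QUADRIGRAPHS.items():
        s = s.replace(quad, ch)
    return s

print(expand("@<:@aef0::2@:>@:80,@<:@aef0::3@:>@:80"))
# [aef0::2]:80,[aef0::3]:80
```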
> +
> +ovn-sbctl dump-flows lr0 > lr0flows
> +AT_CAPTURE_FILE([lr0flows])
> +
> +AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
> +  table=4 (lr_in_unsnat       ), priority=0    , match=(1),
action=(next;)
> +  table=4 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-public" && ip4.dst == 172.168.0.10), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-public" && ip6.dst == def0::10), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-sw0" && ip4.dst == 10.0.0.1), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=110  , match=(inport ==
"lr0-sw0" && ip6.dst == aef0::1), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=120  , match=(ip4 && ip4.dst
== 172.168.0.10 && tcp && tcp.dst == 9082), action=(next;)
> +  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst ==
172.168.0.10), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst ==
172.168.0.20), action=(ct_snat;)
> +  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst ==
172.168.0.30), action=(ct_snat;)
> +])
> +
> +AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
> +  table=5 (lr_in_defrag       ), priority=0    , match=(1),
action=(next;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
10.0.0.10 && tcp), action=(reg0 = 10.0.0.10; ct_dnat;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
172.168.0.10 && tcp), action=(reg0 = 172.168.0.10; ct_dnat;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
172.168.0.100 && tcp), action=(reg0 = 172.168.0.100; ct_dnat;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
172.168.0.200), action=(reg0 = 172.168.0.200; ct_dnat;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst ==
172.168.0.210 && udp), action=(reg0 = 172.168.0.210; ct_dnat;)
> +  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip6.dst ==
def0::2 && tcp), action=(xxreg0 = def0::2; ct_dnat;)
> +])
> +
> +AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
> +  table=6 (lr_in_dnat         ), priority=0    , match=(1),
action=(next;)
> +  table=6 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst ==
172.168.0.20), action=(flags.loopback = 1; ct_dnat(10.0.0.3);)
> +  table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip &&
reg0 == 172.168.0.200 && ct_label.natted == 1),
action=(flags.force_snat_for_lb = 1; next;)
> +  table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip &&
reg0 == 172.168.0.200), action=(flags.force_snat_for_lb = 1;
ct_lb(backends=10.0.0.80,10.0.0.81);)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
reg0 == 10.0.0.10 && ct_label.natted == 1 && tcp),
action=(flags.force_snat_for_lb = 1; next;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
reg0 == 172.168.0.10 && ct_label.natted == 1 && tcp),
action=(flags.force_snat_for_lb = 1; next;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
reg0 == 172.168.0.100 && ct_label.natted == 1 && tcp),
action=(flags.force_snat_for_lb = 1; next;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
reg0 == 172.168.0.210 && ct_label.natted == 1 && udp),
action=(flags.force_snat_for_lb = 1; next;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip &&
xxreg0 == def0::2 && ct_label.natted == 1 && tcp),
action=(flags.force_snat_for_lb = 1; next;)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
reg0 == 10.0.0.10 && tcp && tcp.dst == 80), action=(flags.force_snat_for_lb
= 1; ct_lb(backends=10.0.0.4:8080);)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
reg0 == 172.168.0.10 && tcp && tcp.dst == 9082),
action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.50:82
,10.0.0.60:82);)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
reg0 == 172.168.0.100 && tcp && tcp.dst == 8082),
action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.50:82
,10.0.0.60:82);)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
reg0 == 172.168.0.210 && udp && udp.dst == 60),
action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.50:6062
,10.0.0.60:6062);)
> +  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip &&
xxreg0 == def0::2 && tcp && tcp.dst == 8000),
action=(flags.force_snat_for_lb = 1;
ct_lb(backends=[[aef0::2]]:80,[[aef0::3]]:80);)
> +])
> +
> +AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
> +  table=0 (lr_out_undnat      ), priority=0    , match=(1),
action=(next;)
> +  table=0 (lr_out_undnat      ), priority=50   , match=(ip),
action=(flags.loopback = 1; ct_dnat;)
> +])
> +
> +AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
> +  table=1 (lr_out_post_undnat ), priority=0    , match=(1),
action=(next;)
> +  table=1 (lr_out_post_undnat ), priority=50   , match=(ip && ct.new),
action=(ct_commit { } ; next; )
> +])
> +
> +AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
> +  table=2 (lr_out_snat        ), priority=0    , match=(1),
action=(next;)
> +  table=2 (lr_out_snat        ), priority=110  ,
match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"),
action=(ct_snat(172.168.0.10);)
> +  table=2 (lr_out_snat        ), priority=110  ,
match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"),
action=(ct_snat(10.0.0.1);)
> +  table=2 (lr_out_snat        ), priority=110  ,
match=(flags.force_snat_for_lb == 1 && ip6 && outport == "lr0-public"),
action=(ct_snat(def0::10);)
> +  table=2 (lr_out_snat        ), priority=110  ,
match=(flags.force_snat_for_lb == 1 && ip6 && outport == "lr0-sw0"),
action=(ct_snat(aef0::1);)
> +  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns),
action=(next;)
> +  table=2 (lr_out_snat        ), priority=25   , match=(ip && ip4.src ==
10.0.0.0/24), action=(ct_snat(172.168.0.10);)
> +  table=2 (lr_out_snat        ), priority=33   , match=(ip && ip4.src ==
10.0.0.10), action=(ct_snat(172.168.0.30);)
> +  table=2 (lr_out_snat        ), priority=33   , match=(ip && ip4.src ==
10.0.0.3), action=(ct_snat(172.168.0.20);)
> +])
> +
> +AT_CLEANUP
> +])
> diff --git a/tests/ovn.at b/tests/ovn.at
> index bc494fcad9bb..ea1593197f21 100644
> --- a/tests/ovn.at
> +++ b/tests/ovn.at
> @@ -20571,7 +20571,7 @@ AT_CAPTURE_FILE([sbflows2])
>  OVS_WAIT_FOR_OUTPUT(
>    [ovn-sbctl dump-flows > sbflows2
>    [ovn-sbctl dump-flows lr0 | grep ct_lb | grep priority=120 | sed 's/table=..//'], 0,
> -  [  (lr_in_dnat         ), priority=120  , match=(ct.new && ip && ip4.dst == 10.0.0.10 && tcp && tcp.dst == 80 && is_chassis_resident("cr-lr0-public")), action=(ct_lb(backends=10.0.0.3:80, 20.0.0.3:80; hash_fields="ip_dst,ip_src,tcp_dst,tcp_src");)
> +  [  (lr_in_dnat         ), priority=120  , match=(ct.new && ip && reg0 == 10.0.0.10 && tcp && tcp.dst == 80 && is_chassis_resident("cr-lr0-public")), action=(ct_lb(backends=10.0.0.3:80, 20.0.0.3:80; hash_fields="ip_dst,ip_src,tcp_dst,tcp_src");)
>  ])
>
>  # get the svc monitor mac.
> @@ -20612,8 +20612,8 @@ AT_CHECK(
>  AT_CAPTURE_FILE([sbflows4])
>  ovn-sbctl dump-flows lr0 > sbflows4
>  AT_CHECK([grep lr_in_dnat sbflows4 | grep priority=120 | sed 's/table=..//' | sort], [0], [dnl
> -  (lr_in_dnat         ), priority=120  , match=(ct.est && ip && ip4.dst == 10.0.0.10 && tcp && tcp.dst == 80 && is_chassis_resident("cr-lr0-public")), action=(ct_dnat;)
> -  (lr_in_dnat         ), priority=120  , match=(ct.new && ip && ip4.dst == 10.0.0.10 && tcp && tcp.dst == 80 && is_chassis_resident("cr-lr0-public")), action=(drop;)
> +  (lr_in_dnat         ), priority=120  , match=(ct.est && ip && reg0 == 10.0.0.10 && ct_label.natted == 1 && tcp && is_chassis_resident("cr-lr0-public")), action=(next;)
> +  (lr_in_dnat         ), priority=120  , match=(ct.new && ip && reg0 == 10.0.0.10 && tcp && tcp.dst == 80 && is_chassis_resident("cr-lr0-public")), action=(drop;)
>  ])
>
>  # Delete sw0-p1
> diff --git a/tests/system-ovn.at b/tests/system-ovn.at
> index 552fdae52665..4f104171bdba 100644
> --- a/tests/system-ovn.at
> +++ b/tests/system-ovn.at
> @@ -116,6 +116,7 @@ NS_CHECK_EXEC([alice1], [ping -q -c 3 -i 0.3 -w 2 30.0.0.2 | FORMAT_PING], \
>  # Check conntrack entries.
>  AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(172.16.1.2) | \
>  sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
> +icmp,orig=(src=172.16.1.2,dst=192.168.1.2,id=<cleared>,type=8,code=0),reply=(src=192.168.1.2,dst=172.16.1.2,id=<cleared>,type=0,code=0),zone=<cleared>
>  icmp,orig=(src=172.16.1.2,dst=30.0.0.2,id=<cleared>,type=8,code=0),reply=(src=192.168.1.2,dst=172.16.1.2,id=<cleared>,type=0,code=0),zone=<cleared>
>  ])
>
> @@ -298,6 +299,7 @@ NS_CHECK_EXEC([alice1], [ping6 -q -c 3 -i 0.3 -w 2 fd30::2 | FORMAT_PING], \
>  # Check conntrack entries.
>  AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(fd21::2) | \
>  sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
> +icmpv6,orig=(src=fd21::2,dst=fd11::2,id=<cleared>,type=128,code=0),reply=(src=fd11::2,dst=fd21::2,id=<cleared>,type=129,code=0),zone=<cleared>
>  icmpv6,orig=(src=fd21::2,dst=fd30::2,id=<cleared>,type=128,code=0),reply=(src=fd11::2,dst=fd21::2,id=<cleared>,type=129,code=0),zone=<cleared>
>  ])
>
> @@ -2197,11 +2199,12 @@ tcp,orig=(src=172.16.1.2,dst=30.0.0.2,sport=<cleared>,dport=<cleared>),reply=(sr
>  ])
>
>  check_est_flows () {
> -    n=$(ovs-ofctl dump-flows br-int table=15 | grep \
> -"priority=120,ct_state=+est+trk,tcp,metadata=0x2,nw_dst=30.0.0.2,tp_dst=8000" \
> -| grep nat | sed -n 's/.*n_packets=\([[0-9]]\{1,\}\).*/\1/p')
> +    n=$(ovs-ofctl dump-flows br-int table=13 | grep \
> +"priority=100,tcp,metadata=0x2,nw_dst=30.0.0.2" | grep nat |
> +sed -n 's/.*n_packets=\([[0-9]]\{1,\}\).*/\1/p')
>
>      echo "n_packets=$n"
> +    test ! -z $n
>      test "$n" != 0
>  }
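The added `test ! -z $n` guard above is worth calling out: in POSIX shell, `test "$n" != 0` is a *string* comparison, so it still succeeds when `$n` is empty (i.e. when the grep matched no flow at all), and the function would pass spuriously. A minimal sketch of the pitfall (standalone illustration with a hypothetical empty `n`, not output from the actual test run):

```shell
# Why check_est_flows needs both tests: with an empty n (grep matched
# nothing, sed printed nothing), the string inequality alone still passes.
n=""
test "$n" != 0 && echo "string test passes even when n is empty"
test ! -z "$n" || echo "emptiness test correctly fails"
```

So the emptiness check catches the "flow not found" case, and the `!= 0` check catches the "flow found but never hit" case.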
>
> @@ -2222,7 +2225,7 @@ ovn-nbctl set load_balancer $uuid vips:'"30.0.0.2:8000"'='"192.168.1.2:80,192.16
>
>  ovn-nbctl list load_balancer
>  ovn-sbctl dump-flows R2
> -OVS_WAIT_UNTIL([ovs-ofctl -O OpenFlow13 dump-flows br-int table=41 | \
> +OVS_WAIT_UNTIL([ovs-ofctl -O OpenFlow13 dump-flows br-int table=42 | \
>  grep 'nat(src=20.0.0.2)'])
>
>  dnl Test load-balancing that includes L4 ports in NAT.
> @@ -2260,7 +2263,7 @@ ovn-nbctl set load_balancer $uuid vips:'"30.0.0.2:8000"'='"192.168.1.2:80,192.16
>
>  ovn-nbctl list load_balancer
>  ovn-sbctl dump-flows R2
> -OVS_WAIT_UNTIL([ovs-ofctl -O OpenFlow13 dump-flows br-int table=41 | \
> +OVS_WAIT_UNTIL([ovs-ofctl -O OpenFlow13 dump-flows br-int table=42 | \
>  grep 'nat(src=20.0.0.2)'])
>
>  rm -f wget*.log
> --
> 2.27.0
>
>
> _______________________________________________
> dev mailing list
> dev at openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
>

