[ovs-dev] [PATCH ovn v3 2/4] northd: Refactor Logical Flows for routers with DNAT/Load Balancers

Mark Gray mark.d.gray at redhat.com
Fri Jul 2 08:57:21 UTC 2021


This patch addresses a number of interconnected issues with Gateway Routers
that have Load Balancing enabled:

1) In the router pipeline, we have the following stages to handle
dnat and unsnat.

 - Stage 4 : lr_in_defrag (dnat zone)
 - Stage 5 : lr_in_unsnat (snat zone)
 - Stage 6 : lr_in_dnat   (dnat zone)

In the reply direction, the order of traversal of the tables
"lr_in_defrag", "lr_in_unsnat" and "lr_in_dnat" results in incorrect
datapath flows that check ct_state in the wrong conntrack zone.
This is illustrated below, where reply traffic enters the physical host
port (6) and traverses the DNAT zone (14), the SNAT zone (default), the
DNAT zone again, and then the Logical Switch Port zone (22). The third
flow incorrectly checks the state from the SNAT zone instead
of the DNAT zone.

recirc_id(0),in_port(6),ct_state(-new-est-rel-rpl-trk) actions:ct_clear,ct(zone=14),recirc(0xf)
recirc_id(0xf),in_port(6) actions:ct(nat),recirc(0x10)
recirc_id(0x10),in_port(6),ct_state(-new+est+trk) actions:ct(zone=14,nat),recirc(0x11)
recirc_id(0x11),in_port(6),ct_state(+new-est-rel-rpl+trk) actions: ct(zone=22,nat),recirc(0x12)
recirc_id(0x12),in_port(6),ct_state(-new+est-rel+rpl+trk) actions:5

Update the order of these tables to resolve this.
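
For reference, the resulting ingress stage order (as updated in
ovn-northd.c below) is:

 - Stage 4 : lr_in_unsnat (snat zone)
 - Stage 5 : lr_in_defrag (dnat zone)
 - Stage 6 : lr_in_dnat   (dnat zone)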

2) Efficiencies can be gained by using the ct_dnat action in the
"lr_in_defrag" table instead of ct_next. This removes the need for a
separate ct_dnat action for established Load Balancer flows, avoiding a
recirculation.
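
For an IPv4 VIP, for example, the "lr_in_defrag" flow for the VIP now
uses the actions:

 reg0 = VIP; ct_dnat;

(xxreg0 in the IPv6 case) instead of "ct_next;", so established Load
Balancer traffic is already DNATed before it reaches the DNAT table.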

3) On a Gateway router with DNAT flows configured, the router will translate
the destination IP address from (A) to (B). Reply packets from (B) are
correctly UNDNATed in the reverse direction.

However, if a new connection is established from (B), this flow is never
committed to conntrack and, as such, is never established. This will
cause OVS datapath flows to be added that match on the ct.new flag.

For software-only datapaths this is not a problem. However, for
datapaths that offload these flows to hardware, this may be problematic
as some devices are unable to offload flows that match on ct.new.

This patch resolves this by committing these flows to the DNAT zone in
the new "lr_out_post_undnat" stage. Although this commit could be
performed in an existing stage, using a new stage avoids a
recirculation.
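
The new stage contains a priority-50 flow that performs this commit:

 match : ct.new && ip
 action: ct_commit { } ; next;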

This patch also generalizes these changes to distributed routers with
gateway ports.

Co-authored-by: Numan Siddique <numans at ovn.org>
Signed-off-by: Mark Gray <mark.d.gray at redhat.com>
Signed-off-by: Numan Siddique <numans at ovn.org>
---

Notes:
    v2:  Addressed Han's comments
         * fixed ovn-northd.8.xml
         * added 'is_gw_router' to all cases where relevant
         * refactor add_router_lb_flow()
         * added ct_commit/ct_dnat to gateway ports case
         * updated flows like "ct.new && ip && <register>" to specify ip4/ip6 instead of ip
         * increment ovn_internal_version

 lib/ovn-util.c          |   2 +-
 northd/ovn-northd.8.xml | 285 ++++++++++-------
 northd/ovn-northd.c     | 233 ++++++++------
 northd/ovn_northd.dl    | 136 +++++---
 tests/ovn-northd.at     | 685 +++++++++++++++++++++++++++++++++++-----
 tests/ovn.at            |   8 +-
 tests/system-ovn.at     |  58 +++-
 7 files changed, 1048 insertions(+), 359 deletions(-)

diff --git a/lib/ovn-util.c b/lib/ovn-util.c
index acf4b1cd6059..494d6d42d869 100644
--- a/lib/ovn-util.c
+++ b/lib/ovn-util.c
@@ -760,7 +760,7 @@ ip_address_and_port_from_lb_key(const char *key, char **ip_address,
 
 /* Increment this for any logical flow changes, if an existing OVN action is
  * modified or a stage is added to a logical pipeline. */
-#define OVN_INTERNAL_MINOR_VER 0
+#define OVN_INTERNAL_MINOR_VER 1
 
 /* Returns the OVN version. The caller must free the returned value. */
 char *
diff --git a/northd/ovn-northd.8.xml b/northd/ovn-northd.8.xml
index b5c961e891f9..5ab7d31bf8a9 100644
--- a/northd/ovn-northd.8.xml
+++ b/northd/ovn-northd.8.xml
@@ -2637,39 +2637,9 @@ icmp6 {
       </li>
     </ul>
 
-    <h3>Ingress Table 4: DEFRAG</h3>
 
-    <p>
-      This is to send packets to connection tracker for tracking and
-      defragmentation.  It contains a priority-0 flow that simply moves traffic
-      to the next table.
-    </p>
-
-    <p>
-      If load balancing rules with virtual IP addresses (and ports) are
-      configured in <code>OVN_Northbound</code> database for a Gateway router,
-      a priority-100 flow is added for each configured virtual IP address
-      <var>VIP</var>. For IPv4 <var>VIPs</var> the flow matches <code>ip
-      && ip4.dst == <var>VIP</var></code>.  For IPv6 <var>VIPs</var>,
-      the flow matches <code>ip && ip6.dst == <var>VIP</var></code>.
-      The flow uses the action <code>ct_next;</code> to send IP packets to the
-      connection tracker for packet de-fragmentation and tracking before
-      sending it to the next table.
-    </p>
-
-    <p>
-      If ECMP routes with symmetric reply are configured in the
-      <code>OVN_Northbound</code> database for a gateway router, a priority-300
-      flow is added for each router port on which symmetric replies are
-      configured. The matching logic for these ports essentially reverses the
-      configured logic of the ECMP route. So for instance, a route with a
-      destination routing policy will instead match if the source IP address
-      matches the static route's prefix. The flow uses the action
-      <code>ct_next</code> to send IP packets to the connection tracker for
-      packet de-fragmentation and tracking before sending it to the next table.
-    </p>
 
-    <h3>Ingress Table 5: UNSNAT</h3>
+    <h3>Ingress Table 4: UNSNAT</h3>
 
     <p>
       This is for already established connections' reverse traffic.
@@ -2678,7 +2648,7 @@ icmp6 {
       unSNATted here.
     </p>
 
-    <p>Ingress Table 5: UNSNAT on Gateway and Distributed Routers</p>
+    <p>Ingress Table 4: UNSNAT on Gateway and Distributed Routers</p>
     <ul>
       <li>
         <p>
@@ -2705,7 +2675,7 @@ icmp6 {
       </li>
     </ul>
 
-    <p>Ingress Table 5: UNSNAT on Gateway Routers</p>
+    <p>Ingress Table 4: UNSNAT on Gateway Routers</p>
 
     <ul>
       <li>
@@ -2722,9 +2692,10 @@ icmp6 {
           <code>lb_force_snat_ip=router_ip</code> then for every logical router
           port <var>P</var> attached to the Gateway router with the router ip
           <var>B</var>, a priority-110 flow is added with the match
-          <code>inport == <var>P</var> && ip4.dst == <var>B</var></code> or
-          <code>inport == <var>P</var> && ip6.dst == <var>B</var></code>
-          with an action <code>ct_snat; </code>.
+          <code>inport == <var>P</var> &&
+          ip4.dst == <var>B</var></code> or <code>inport == <var>P</var>
+          && ip6.dst == <var>B</var></code> with an action
+          <code>ct_snat; </code>.
         </p>
 
         <p>
@@ -2754,7 +2725,7 @@ icmp6 {
       </li>
     </ul>
 
-    <p>Ingress Table 5: UNSNAT on Distributed Routers</p>
+    <p>Ingress Table 4: UNSNAT on Distributed Routers</p>
 
     <ul>
       <li>
@@ -2785,6 +2756,40 @@ icmp6 {
       </li>
     </ul>
 
+    <h3>Ingress Table 5: DEFRAG</h3>
+
+    <p>
+      This is to send packets to connection tracker for tracking and
+      defragmentation.  It contains a priority-0 flow that simply moves traffic
+      to the next table.
+    </p>
+
+    <p>
+      If load balancing rules with virtual IP addresses (and ports) are
+      configured in <code>OVN_Northbound</code> database for a Gateway router,
+      a priority-100 flow is added for each configured virtual IP address
+      <var>VIP</var>. For IPv4 <var>VIPs</var> the flow matches <code>ip
+      && ip4.dst == <var>VIP</var></code>.  For IPv6 <var>VIPs</var>,
+      the flow matches <code>ip && ip6.dst == <var>VIP</var></code>.
+      The flow applies the action <code>reg0 = <var>VIP</var>;
+      ct_dnat;</code> to send IP packets to the
+      connection tracker for packet de-fragmentation and to dnat the
+      destination IP for the committed connection before sending it to the
+      next table.
+    </p>
+
+    <p>
+      If ECMP routes with symmetric reply are configured in the
+      <code>OVN_Northbound</code> database for a gateway router, a priority-300
+      flow is added for each router port on which symmetric replies are
+      configured. The matching logic for these ports essentially reverses the
+      configured logic of the ECMP route. So for instance, a route with a
+      destination routing policy will instead match if the source IP address
+      matches the static route's prefix. The flow uses the action
+      <code>ct_next</code> to send IP packets to the connection tracker for
+      packet de-fragmentation and tracking before sending it to the next table.
+    </p>
+
     <h3>Ingress Table 6: DNAT</h3>
 
     <p>
@@ -2814,74 +2819,111 @@ icmp6 {
       </li>
 
       <li>
-        For all the configured load balancing rules for a Gateway router or
-        Router with gateway port in <code>OVN_Northbound</code> database that
-        includes a L4 port <var>PORT</var> of protocol <var>P</var> and IPv4
-        or IPv6 address <var>VIP</var>, a priority-120 flow that matches on
-        <code>ct.new && ip && ip4.dst == <var>VIP</var>
-        && <var>P</var> && <var>P</var>.dst == <var>PORT
-        </var></code> (<code>ip6.dst == <var>VIP</var></code> in the IPv6 case)
-        with an action of <code>ct_lb(<var>args</var>)</code>,
-        where <var>args</var> contains comma separated IPv4 or IPv6 addresses
-        (and optional port numbers) to load balance to.  If the router is
-        configured to force SNAT any load-balanced packets, the above action
-        will be replaced by <code>flags.force_snat_for_lb = 1;
-        ct_lb(<var>args</var>);</code>.
-        If the load balancing rule is configured with <code>skip_snat</code>
-        set to true, the above action will be replaced by
-        <code>flags.skip_snat_for_lb = 1; ct_lb(<var>args</var>);</code>.
-        If health check is enabled, then
-        <var>args</var> will only contain those endpoints whose service
-        monitor status entry in <code>OVN_Southbound</code> db is
-        either <code>online</code> or empty.
+        <p>
+          For all the configured load balancing rules for a Gateway router or
+          Router with gateway port in <code>OVN_Northbound</code> database that
+          includes a L4 port <var>PORT</var> of protocol <var>P</var> and IPv4
+          or IPv6 address <var>VIP</var>, a priority-120 flow that matches on
+          <code>ct.new && ip && reg0 == <var>VIP</var>
+          && <var>P</var> && <var>P</var>.dst == <var>PORT
+          </var></code> (<code>xxreg0 == <var>VIP</var></code> in the IPv6 case)
+          with an action of <code>ct_lb(<var>args</var>)</code>,
+          where <var>args</var> contains comma separated IPv4 or IPv6 addresses
+          (and optional port numbers) to load balance to.  If the router is
+          configured to force SNAT any load-balanced packets, the above action
+          will be replaced by <code>flags.force_snat_for_lb = 1;
+          ct_lb(<var>args</var>);</code>.
+          If the load balancing rule is configured with <code>skip_snat</code>
+          set to true, the above action will be replaced by
+          <code>flags.skip_snat_for_lb = 1; ct_lb(<var>args</var>);</code>.
+          If health check is enabled, then
+          <var>args</var> will only contain those endpoints whose service
+          monitor status entry in <code>OVN_Southbound</code> db is
+          either <code>online</code> or empty.
+        </p>
+
+        <p>
+          The previous table <code>lr_in_defrag</code> sets the register
+          <code>reg0</code> (or <code>xxreg0</code> for IPv6) and does
+          <code>ct_dnat</code>.  Hence for established traffic, this
+          table just advances the packet to the next stage.
+        </p>
       </li>
 
       <li>
-        For all the configured load balancing rules for a router in
-        <code>OVN_Northbound</code> database that includes a L4 port
-        <var>PORT</var> of protocol <var>P</var> and IPv4 or IPv6 address
-        <var>VIP</var>, a priority-120 flow that matches on
-        <code>ct.est && ip && ip4.dst == <var>VIP</var>
-        && <var>P</var> && <var>P</var>.dst == <var>PORT
-        </var></code> (<code>ip6.dst == <var>VIP</var></code> in the IPv6 case)
-        with an action of <code>ct_dnat;</code>. If the router is
-        configured to force SNAT any load-balanced packets, the above action
-        will be replaced by <code>flags.force_snat_for_lb = 1; ct_dnat;</code>.
-        If the load balancing rule is configured with <code>skip_snat</code>
-        set to true, the above action will be replaced by
-        <code>flags.skip_snat_for_lb = 1; ct_dnat;</code>.
+        <p>
+          For all the configured load balancing rules for a router in
+          <code>OVN_Northbound</code> database that includes a L4 port
+          <var>PORT</var> of protocol <var>P</var> and IPv4 or IPv6 address
+          <var>VIP</var>, a priority-120 flow that matches on
+          <code>ct.est && ip4 && reg0 == <var>VIP</var>
+          && <var>P</var> && <var>P</var>.dst == <var>PORT
+          </var></code> (<code>ip6</code> and <code>xxreg0 == <var>VIP</var>
+          </code> in the IPv6 case) with an action of <code>next;</code>. If
+          the router is configured to force SNAT any load-balanced packets, the
+          above action will be replaced by <code>flags.force_snat_for_lb = 1;
+          next;</code>. If the load balancing rule is configured with
+          <code>skip_snat</code> set to true, the above action will be replaced
+          by <code>flags.skip_snat_for_lb = 1; next;</code>.
+        </p>
+
+        <p>
+          The previous table <code>lr_in_defrag</code> sets the register
+          <code>reg0</code> (or <code>xxreg0</code> for IPv6) and does
+          <code>ct_dnat</code>.  Hence for established traffic, this
+          table just advances the packet to the next stage.
+        </p>
       </li>
 
       <li>
-        For all the configured load balancing rules for a router in
-        <code>OVN_Northbound</code> database that includes just an IP address
-        <var>VIP</var> to match on, a priority-110 flow that matches on
-        <code>ct.new && ip && ip4.dst ==
-        <var>VIP</var></code> (<code>ip6.dst == <var>VIP</var></code> in the
-        IPv6 case) with an action of
-        <code>ct_lb(<var>args</var>)</code>, where <var>args</var> contains
-        comma separated IPv4 or IPv6 addresses.  If the router is configured
-        to force SNAT any load-balanced packets, the above action will be
-        replaced by <code>flags.force_snat_for_lb = 1;
-        ct_lb(<var>args</var>);</code>.
-        If the load balancing rule is configured with <code>skip_snat</code>
-        set to true, the above action will be replaced by
-        <code>flags.skip_snat_for_lb = 1; ct_lb(<var>args</var>);</code>.
-      </li>
-
-      <li>
-        For all the configured load balancing rules for a router in
-        <code>OVN_Northbound</code> database that includes just an IP address
-        <var>VIP</var> to match on, a priority-110 flow that matches on
-        <code>ct.est && ip && ip4.dst ==
-        <var>VIP</var></code> (or <code>ip6.dst == <var>VIP</var></code>)
-        with an action of <code>ct_dnat;</code>.
-        If the router is configured to force SNAT any load-balanced
-        packets, the above action will be replaced by
-        <code>flags.force_snat_for_lb = 1; ct_dnat;</code>.
-        If the load balancing rule is configured with <code>skip_snat</code>
-        set to true, the above action will be replaced by
-        <code>flags.skip_snat_for_lb = 1; ct_dnat;</code>.
+        <p>
+          For all the configured load balancing rules for a router in
+          <code>OVN_Northbound</code> database that includes just an IP address
+          <var>VIP</var> to match on, a priority-110 flow that matches on
+          <code>ct.new && ip4 && reg0 ==
+          <var>VIP</var></code> (<code>ip6</code> and <code>xxreg0 ==
+          <var>VIP</var></code> in the IPv6 case) with an action of
+          <code>ct_lb(<var>args</var>)</code>, where <var>args</var> contains
+          comma separated IPv4 or IPv6 addresses.  If the router is configured
+          to force SNAT any load-balanced packets, the above action will be
+          replaced by <code>flags.force_snat_for_lb = 1;
+          ct_lb(<var>args</var>);</code>.
+          If the load balancing rule is configured with <code>skip_snat</code>
+          set to true, the above action will be replaced by
+          <code>flags.skip_snat_for_lb = 1; ct_lb(<var>args</var>);</code>.
+        </p>
+
+        <p>
+          The previous table <code>lr_in_defrag</code> sets the register
+          <code>reg0</code> (or <code>xxreg0</code> for IPv6) and does
+          <code>ct_dnat</code>.  Hence for established traffic, this
+          table just advances the packet to the next stage.
+        </p>
+      </li>
+
+
+      <li>
+        <p>
+          For all the configured load balancing rules for a router in
+          <code>OVN_Northbound</code> database that includes just an IP address
+          <var>VIP</var> to match on, a priority-110 flow that matches on
+          <code>ct.est && ip4 && reg0 ==
+          <var>VIP</var></code> (or <code>ip6</code> and
+          <code>xxreg0 == <var>VIP</var></code>) with an action of
+          <code>next;</code>. If the router is configured to force SNAT any
+          load-balanced packets, the above action will be replaced by
+          <code>flags.force_snat_for_lb = 1; next;</code>.
+          If the load balancing rule is configured with <code>skip_snat</code>
+          set to true, the above action will be replaced by
+          <code>flags.skip_snat_for_lb = 1; next;</code>.
+        </p>
+
+        <p>
+          The previous table <code>lr_in_defrag</code> sets the register
+          <code>reg0</code> (or <code>xxreg0</code> for IPv6) and does
+          <code>ct_dnat</code>.  Hence for established traffic, this
+          table just advances the packet to the next stage.
+        </p>
       </li>
 
       <li>
@@ -2930,11 +2972,6 @@ icmp6 {
 
       </li>
 
-      <li>
-        For all IP packets of a Gateway router, a priority-50 flow with an
-        action <code>flags.loopback = 1; ct_dnat;</code>.
-      </li>
-
       <li>
         A priority-0 logical flow with match <code>1</code> has actions
         <code>next;</code>.
@@ -3819,10 +3856,8 @@ nd_ns {
     <p>
       This is for already established connections' reverse traffic.
       i.e., DNAT has already been done in ingress pipeline and now the
-      packet has entered the egress pipeline as part of a reply.  For
-      NAT on a distributed router, it is unDNATted here.  For Gateway
-      routers, the unDNAT processing is carried out in the ingress DNAT
-      table.
+      packet has entered the egress pipeline as part of a reply.  This traffic
+      is unDNATed here.
     </p>
 
     <ul>
@@ -3879,13 +3914,37 @@ nd_ns {
         </p>
       </li>
 
+      <li>
+        For all IP packets, a priority-50 flow with an action
+        <code>flags.loopback = 1; ct_dnat;</code>.
+      </li>
+
       <li>
         A priority-0 logical flow with match <code>1</code> has actions
         <code>next;</code>.
       </li>
     </ul>
 
-    <h3>Egress Table 1: SNAT</h3>
+    <h3>Egress Table 1: Post UNDNAT</h3>
+
+    <p>
+      <ul>
+        <li>
+          A priority-50 logical flow is added that commits any untracked flows
+          from the previous table <code>lr_out_undnat</code>. This flow
+          matches on <code>ct.new && ip</code> with action
+          <code>ct_commit { } ; next; </code>.
+        </li>
+
+        <li>
+          A priority-0 logical flow with match <code>1</code> has actions
+          <code>next;</code>.
+        </li>
+
+      </ul>
+    </p>
+
+    <h3>Egress Table 2: SNAT</h3>
 
     <p>
       Packets that are configured to be SNATed get their source IP address
@@ -3901,7 +3960,7 @@ nd_ns {
       </li>
     </ul>
 
-    <p>Egress Table 1: SNAT on Gateway Routers</p>
+    <p>Egress Table 2: SNAT on Gateway Routers</p>
 
     <ul>
       <li>
@@ -4000,7 +4059,7 @@ nd_ns {
       </li>
     </ul>
 
-    <p>Egress Table 1: SNAT on Distributed Routers</p>
+    <p>Egress Table 2: SNAT on Distributed Routers</p>
 
     <ul>
       <li>
@@ -4060,7 +4119,7 @@ nd_ns {
       </li>
     </ul>
 
-    <h3>Egress Table 2: Egress Loopback</h3>
+    <h3>Egress Table 3: Egress Loopback</h3>
 
     <p>
       For distributed logical routers where one of the logical router
@@ -4129,7 +4188,7 @@ clone {
       </li>
     </ul>
 
-    <h3>Egress Table 3: Delivery</h3>
+    <h3>Egress Table 4: Delivery</h3>
 
     <p>
       Packets that reach this table are ready for delivery.  It contains:
diff --git a/northd/ovn-northd.c b/northd/ovn-northd.c
index 5e5e0ce6d39f..6a7c0a874bfe 100644
--- a/northd/ovn-northd.c
+++ b/northd/ovn-northd.c
@@ -188,8 +188,8 @@ enum ovn_stage {
     PIPELINE_STAGE(ROUTER, IN,  LOOKUP_NEIGHBOR, 1, "lr_in_lookup_neighbor") \
     PIPELINE_STAGE(ROUTER, IN,  LEARN_NEIGHBOR,  2, "lr_in_learn_neighbor") \
     PIPELINE_STAGE(ROUTER, IN,  IP_INPUT,        3, "lr_in_ip_input")     \
-    PIPELINE_STAGE(ROUTER, IN,  DEFRAG,          4, "lr_in_defrag")       \
-    PIPELINE_STAGE(ROUTER, IN,  UNSNAT,          5, "lr_in_unsnat")       \
+    PIPELINE_STAGE(ROUTER, IN,  UNSNAT,          4, "lr_in_unsnat")       \
+    PIPELINE_STAGE(ROUTER, IN,  DEFRAG,          5, "lr_in_defrag")       \
     PIPELINE_STAGE(ROUTER, IN,  DNAT,            6, "lr_in_dnat")         \
     PIPELINE_STAGE(ROUTER, IN,  ECMP_STATEFUL,   7, "lr_in_ecmp_stateful") \
     PIPELINE_STAGE(ROUTER, IN,  ND_RA_OPTIONS,   8, "lr_in_nd_ra_options") \
@@ -205,10 +205,11 @@ enum ovn_stage {
     PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     18, "lr_in_arp_request")  \
                                                                       \
     /* Logical router egress stages. */                               \
-    PIPELINE_STAGE(ROUTER, OUT, UNDNAT,    0, "lr_out_undnat")        \
-    PIPELINE_STAGE(ROUTER, OUT, SNAT,      1, "lr_out_snat")          \
-    PIPELINE_STAGE(ROUTER, OUT, EGR_LOOP,  2, "lr_out_egr_loop")      \
-    PIPELINE_STAGE(ROUTER, OUT, DELIVERY,  3, "lr_out_delivery")
+    PIPELINE_STAGE(ROUTER, OUT, UNDNAT,      0, "lr_out_undnat")        \
+    PIPELINE_STAGE(ROUTER, OUT, POST_UNDNAT, 1, "lr_out_post_undnat")   \
+    PIPELINE_STAGE(ROUTER, OUT, SNAT,        2, "lr_out_snat")          \
+    PIPELINE_STAGE(ROUTER, OUT, EGR_LOOP,    3, "lr_out_egr_loop")      \
+    PIPELINE_STAGE(ROUTER, OUT, DELIVERY,    4, "lr_out_delivery")
 
 #define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)   \
     S_##DP_TYPE##_##PIPELINE##_##STAGE                          \
@@ -644,6 +645,12 @@ struct ovn_datapath {
     /* Multicast data. */
     struct mcast_info mcast_info;
 
+    /* Applies to only logical router datapath.
+     * True if logical router is a gateway router. i.e options:chassis is set.
+     * If this is true, then 'l3dgw_port' and 'l3redirect_port' will be
+     * ignored. */
+    bool is_gw_router;
+
     /* OVN northd only needs to know about the logical router gateway port for
      * NAT on a distributed router.  This "distributed gateway port" is
      * populated only when there is a gateway chassis specified for one of
@@ -1248,6 +1255,9 @@ join_datapaths(struct northd_context *ctx, struct hmap *datapaths,
         init_mcast_info_for_datapath(od);
         init_nat_entries(od);
         init_lb_ips(od);
+        if (smap_get(&od->nbr->options, "chassis")) {
+            od->is_gw_router = true;
+        }
         ovs_list_push_back(lr_list, &od->lr_list);
     }
 }
@@ -8732,43 +8742,81 @@ enum lb_snat_type {
 
 static void
 add_router_lb_flow(struct hmap *lflows, struct ovn_datapath *od,
-                   struct ds *match, struct ds *actions, int priority,
+                   struct ds *match, struct ds *actions,
                    enum lb_snat_type snat_type, struct ovn_lb_vip *lb_vip,
                    const char *proto, struct nbrec_load_balancer *lb,
                    struct shash *meter_groups, struct sset *nat_entries)
 {
+    int priority = 110;
+
     build_empty_lb_event_flow(od, lflows, lb_vip, lb, S_ROUTER_IN_DNAT,
                               meter_groups);
 
+    /* Higher priority rules are added for load-balancing in DNAT
+     * table.  For every match (on a VIP[:port]), we add two flows. One flow is
+     * for specific matching on ct.new with an action of "ct_lb($targets);".
+     * The other flow is for ct.est with an action of "next;". */
+    ds_clear(match);
+    if (IN6_IS_ADDR_V4MAPPED(&lb_vip->vip)) {
+        ds_put_format(match, "ip4 && reg0 == %s",
+                        lb_vip->vip_str);
+    } else {
+        ds_put_format(match, "ip6 && xxreg0 == %s",
+                        lb_vip->vip_str);
+    }
+
     /* A match and actions for new connections. */
-    char *new_match = xasprintf("ct.new && %s", ds_cstr(match));
+    struct ds new_match = DS_EMPTY_INITIALIZER;
+    ds_put_format(&new_match, "ct.new && %s", ds_cstr(match));
+    if (lb_vip->vip_port) {
+        ds_put_format(&new_match, " && %s && %s.dst == %d", proto,
+                        proto, lb_vip->vip_port);
+        priority = 120;
+    }
+    if (od->l3redirect_port &&
+        (lb_vip->n_backends || !lb_vip->empty_backend_rej)) {
+        ds_put_format(&new_match, " && is_chassis_resident(%s)",
+                        od->l3redirect_port->json_key);
+    }
     if (snat_type == FORCE_SNAT || snat_type == SKIP_SNAT) {
         char *new_actions = xasprintf("flags.%s_snat_for_lb = 1; %s",
                 snat_type == SKIP_SNAT ? "skip" : "force",
                 ds_cstr(actions));
         ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_DNAT, priority,
-                                new_match, new_actions, &lb->header_);
+                                ds_cstr(&new_match), new_actions, &lb->header_);
         free(new_actions);
     } else {
         ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_DNAT, priority,
-                                new_match, ds_cstr(actions), &lb->header_);
+                                ds_cstr(&new_match), ds_cstr(actions),
+                                &lb->header_);
     }
 
     /* A match and actions for established connections. */
-    char *est_match = xasprintf("ct.est && %s", ds_cstr(match));
+    struct ds est_match = DS_EMPTY_INITIALIZER;
+    ds_put_format(&est_match, "ct.est && %s && ct_label.natted == 1",
+                                ds_cstr(match));
+    if (lb_vip->vip_port) {
+        ds_put_format(&est_match, " && %s", proto);
+    }
+    if (od->l3redirect_port &&
+        (lb_vip->n_backends || !lb_vip->empty_backend_rej)) {
+        ds_put_format(&est_match, " && is_chassis_resident(%s)",
+                        od->l3redirect_port->json_key);
+    }
     if (snat_type == FORCE_SNAT || snat_type == SKIP_SNAT) {
-        char *est_actions = xasprintf("flags.%s_snat_for_lb = 1; ct_dnat;",
+        char *est_actions = xasprintf("flags.%s_snat_for_lb = 1; next;",
                 snat_type == SKIP_SNAT ? "skip" : "force");
         ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_DNAT, priority,
-                                est_match, est_actions, &lb->header_);
+                                ds_cstr(&est_match), est_actions,
+                                &lb->header_);
         free(est_actions);
     } else {
         ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_DNAT, priority,
-                                est_match, "ct_dnat;", &lb->header_);
+                                ds_cstr(&est_match), "next;", &lb->header_);
     }
 
-    free(new_match);
-    free(est_match);
+    ds_destroy(&new_match);
+    ds_destroy(&est_match);
 
     const char *ip_match = NULL;
     if (IN6_IS_ADDR_V4MAPPED(&lb_vip->vip)) {
@@ -8853,8 +8901,8 @@ add_router_lb_flow(struct hmap *lflows, struct ovn_datapath *od,
 static void
 build_lrouter_lb_flows(struct hmap *lflows, struct ovn_datapath *od,
                        struct hmap *lbs, struct shash *meter_groups,
-                       struct sset *nat_entries, struct ds *match,
-                       struct ds *actions)
+                       struct sset *nat_entries,
+                       struct ds *match, struct ds *actions)
 {
     /* A set to hold all ips that need defragmentation and tracking. */
     struct sset all_ips = SSET_INITIALIZER(&all_ips);
@@ -8876,10 +8924,13 @@ build_lrouter_lb_flows(struct hmap *lflows, struct ovn_datapath *od,
         for (size_t j = 0; j < lb->n_vips; j++) {
             struct ovn_lb_vip *lb_vip = &lb->vips[j];
             struct ovn_northd_lb_vip *lb_vip_nb = &lb->vips_nb[j];
-            ds_clear(actions);
-            build_lb_vip_actions(lb_vip, lb_vip_nb, actions,
-                                 lb->selection_fields, false);
 
+            bool is_udp = nullable_string_is_equal(nb_lb->protocol, "udp");
+            bool is_sctp = nullable_string_is_equal(nb_lb->protocol,
+                                                    "sctp");
+            const char *proto = is_udp ? "udp" : is_sctp ? "sctp" : "tcp";
+
+            struct ds defrag_actions = DS_EMPTY_INITIALIZER;
             if (!sset_contains(&all_ips, lb_vip->vip_str)) {
                 sset_add(&all_ips, lb_vip->vip_str);
                 /* If there are any load balancing rules, we should send
@@ -8891,49 +8942,28 @@ build_lrouter_lb_flows(struct hmap *lflows, struct ovn_datapath *od,
                  * 2. If there are L4 ports in load balancing rules, we
                  *    need the defragmentation to match on L4 ports. */
                 ds_clear(match);
+                ds_clear(&defrag_actions);
                 if (IN6_IS_ADDR_V4MAPPED(&lb_vip->vip)) {
                     ds_put_format(match, "ip && ip4.dst == %s",
                                   lb_vip->vip_str);
+                    ds_put_format(&defrag_actions, "reg0 = %s; ct_dnat;",
+                                  lb_vip->vip_str);
                 } else {
                     ds_put_format(match, "ip && ip6.dst == %s",
                                   lb_vip->vip_str);
+                    ds_put_format(&defrag_actions, "xxreg0 = %s; ct_dnat;",
+                                  lb_vip->vip_str);
+                }
+
+                if (lb_vip->vip_port) {
+                    ds_put_format(match, " && %s", proto);
                 }
                 ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_DEFRAG,
-                                        100, ds_cstr(match), "ct_next;",
+                                        100, ds_cstr(match),
+                                        ds_cstr(&defrag_actions),
                                         &nb_lb->header_);
             }
-
-            /* Higher priority rules are added for load-balancing in DNAT
-             * table.  For every match (on a VIP[:port]), we add two flows
-             * via add_router_lb_flow().  One flow is for specific matching
-             * on ct.new with an action of "ct_lb($targets);".  The other
-             * flow is for ct.est with an action of "ct_dnat;". */
-            ds_clear(match);
-            if (IN6_IS_ADDR_V4MAPPED(&lb_vip->vip)) {
-                ds_put_format(match, "ip && ip4.dst == %s",
-                              lb_vip->vip_str);
-            } else {
-                ds_put_format(match, "ip && ip6.dst == %s",
-                              lb_vip->vip_str);
-            }
-
-            int prio = 110;
-            bool is_udp = nullable_string_is_equal(nb_lb->protocol, "udp");
-            bool is_sctp = nullable_string_is_equal(nb_lb->protocol,
-                                                    "sctp");
-            const char *proto = is_udp ? "udp" : is_sctp ? "sctp" : "tcp";
-
-            if (lb_vip->vip_port) {
-                ds_put_format(match, " && %s && %s.dst == %d", proto,
-                              proto, lb_vip->vip_port);
-                prio = 120;
-            }
-
-            if (od->l3redirect_port &&
-                (lb_vip->n_backends || !lb_vip->empty_backend_rej)) {
-                ds_put_format(match, " && is_chassis_resident(%s)",
-                              od->l3redirect_port->json_key);
-            }
+            ds_destroy(&defrag_actions);
 
             enum lb_snat_type snat_type = NO_FORCE_SNAT;
             if (lb_skip_snat) {
@@ -8941,9 +8971,11 @@ build_lrouter_lb_flows(struct hmap *lflows, struct ovn_datapath *od,
             } else if (lb_force_snat_ip || od->lb_force_snat_router_ip) {
                 snat_type = FORCE_SNAT;
             }
-            add_router_lb_flow(lflows, od, match, actions, prio,
-                               snat_type, lb_vip, proto, nb_lb,
-                               meter_groups, nat_entries);
+            ds_clear(actions);
+            build_lb_vip_actions(lb_vip, lb_vip_nb, actions,
+                                 lb->selection_fields, false);
+            add_router_lb_flow(lflows, od, match, actions, snat_type, lb_vip,
+                               proto, nb_lb, meter_groups, nat_entries);
         }
     }
     sset_destroy(&all_ips);
@@ -9068,7 +9100,6 @@ lrouter_nat_add_ext_ip_match(struct ovn_datapath *od,
 {
     struct nbrec_address_set *allowed_ext_ips = nat->allowed_ext_ips;
     struct nbrec_address_set *exempted_ext_ips = nat->exempted_ext_ips;
-    bool is_gw_router = !od->l3dgw_port;
 
     ovs_assert(allowed_ext_ips || exempted_ext_ips);
 
@@ -9103,7 +9134,7 @@ lrouter_nat_add_ext_ip_match(struct ovn_datapath *od,
             /* S_ROUTER_OUT_SNAT uses priority (mask + 1 + 128 + 1) */
             priority = count_1bits(ntohl(mask)) + 2;
 
-            if (!is_gw_router) {
+            if (!od->is_gw_router) {
                 priority += 128;
            }
         }
@@ -10854,8 +10885,7 @@ build_ipv6_input_flows_for_lrouter_port(
         }
 
         /* UDP/TCP/SCTP port unreachable */
-        if (!smap_get(&op->od->nbr->options, "chassis")
-            && !op->od->l3dgw_port) {
+        if (!op->od->is_gw_router && !op->od->l3dgw_port) {
             for (int i = 0; i < op->lrp_networks.n_ipv6_addrs; i++) {
                 ds_clear(match);
                 ds_put_format(match,
@@ -11140,8 +11170,7 @@ build_lrouter_ipv4_ip_input(struct ovn_port *op,
                                   match, false, 90, NULL, lflows);
         }
 
-        if (!smap_get(&op->od->nbr->options, "chassis")
-            && !op->od->l3dgw_port) {
+        if (!op->od->is_gw_router && !op->od->l3dgw_port) {
             /* UDP/TCP/SCTP port unreachable. */
             for (int i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
                 ds_clear(match);
@@ -11279,8 +11308,7 @@ build_lrouter_in_unsnat_flow(struct hmap *lflows, struct ovn_datapath *od,
     }
 
     bool stateless = lrouter_nat_is_stateless(nat);
-    if (!od->l3dgw_port) {
-        /* Gateway router. */
+    if (od->is_gw_router) {
         ds_clear(match);
         ds_clear(actions);
         ds_put_format(match, "ip && ip%s.dst == %s",
@@ -11336,11 +11364,10 @@ build_lrouter_in_dnat_flow(struct hmap *lflows, struct ovn_datapath *od,
     if (!strcmp(nat->type, "dnat") || !strcmp(nat->type, "dnat_and_snat")) {
         bool stateless = lrouter_nat_is_stateless(nat);
 
-        if (!od->l3dgw_port) {
-            /* Gateway router. */
+        if (od->is_gw_router) {
             /* Packet when it goes from the initiator to destination.
-            * We need to set flags.loopback because the router can
-            * send the packet back through the same interface. */
+             * We need to set flags.loopback because the router can
+             * send the packet back through the same interface. */
             ds_clear(match);
             ds_put_format(match, "ip && ip%s.dst == %s",
                           is_v6 ? "6" : "4", nat->external_ip);
@@ -11352,8 +11379,8 @@ build_lrouter_in_dnat_flow(struct hmap *lflows, struct ovn_datapath *od,
 
             if (!lport_addresses_is_empty(&od->dnat_force_snat_addrs)) {
                 /* Indicate to the future tables that a DNAT has taken
-                * place and a force SNAT needs to be done in the
-                * Egress SNAT table. */
+                 * place and a force SNAT needs to be done in the
+                 * Egress SNAT table. */
                 ds_put_format(actions, "flags.force_snat_for_dnat = 1; ");
             }
 
@@ -11424,8 +11451,7 @@ build_lrouter_out_undnat_flow(struct hmap *lflows, struct ovn_datapath *od,
     * part of a reply. We undo the DNAT here.
     *
     * Note that this only applies for NAT on a distributed router.
-    * Undo DNAT on a gateway router is done in the ingress DNAT
-    * pipeline stage. */
+    */
     if (!od->l3dgw_port ||
         (strcmp(nat->type, "dnat") && strcmp(nat->type, "dnat_and_snat"))) {
         return;
@@ -11475,8 +11501,7 @@ build_lrouter_out_snat_flow(struct hmap *lflows, struct ovn_datapath *od,
     }
 
     bool stateless = lrouter_nat_is_stateless(nat);
-    if (!od->l3dgw_port) {
-        /* Gateway router. */
+    if (od->is_gw_router) {
         ds_clear(match);
         ds_put_format(match, "ip && ip%s.src == %s",
                       is_v6 ? "6" : "4", nat->logical_ip);
@@ -11582,15 +11607,15 @@ build_lrouter_ingress_flow(struct hmap *lflows, struct ovn_datapath *od,
         */
         ds_clear(actions);
         ds_put_format(actions, REG_INPORT_ETH_ADDR " = %s; next;",
-                    od->l3dgw_port->lrp_networks.ea_s);
+                      od->l3dgw_port->lrp_networks.ea_s);
 
         ds_clear(match);
         ds_put_format(match,
-                    "eth.dst == "ETH_ADDR_FMT" && inport == %s"
-                    " && is_chassis_resident(\"%s\")",
-                    ETH_ADDR_ARGS(mac),
-                    od->l3dgw_port->json_key,
-                    nat->logical_port);
+                      "eth.dst == "ETH_ADDR_FMT" && inport == %s"
+                      " && is_chassis_resident(\"%s\")",
+                      ETH_ADDR_ARGS(mac),
+                      od->l3dgw_port->json_key,
+                      nat->logical_port);
         ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_ADMISSION, 50,
                                 ds_cstr(match), ds_cstr(actions),
                                 &nat->header_);
@@ -11705,9 +11730,27 @@ build_lrouter_nat_defrag_and_lb(struct ovn_datapath *od,
     ovn_lflow_add(lflows, od, S_ROUTER_OUT_SNAT, 0, "1", "next;");
     ovn_lflow_add(lflows, od, S_ROUTER_IN_DNAT, 0, "1", "next;");
     ovn_lflow_add(lflows, od, S_ROUTER_OUT_UNDNAT, 0, "1", "next;");
+    ovn_lflow_add(lflows, od, S_ROUTER_OUT_POST_UNDNAT, 0, "1", "next;");
     ovn_lflow_add(lflows, od, S_ROUTER_OUT_EGR_LOOP, 0, "1", "next;");
     ovn_lflow_add(lflows, od, S_ROUTER_IN_ECMP_STATEFUL, 0, "1", "next;");
 
+    /* If the router has load balancer or DNAT rules, re-circulate every packet
+     * through the DNAT zone so that packets that need to be unDNATed in the
+     * reverse direction get unDNATed.
+     *
+     * We also commit newly initiated connections in the reply direction to the
+     * DNAT zone. This ensures that these flows are tracked. If the flow was
+     * not committed, it would result in datapath flows that match on the
+     * ct.new flag. Some NICs are unable to offload such flows.
+     */
+    if ((od->is_gw_router || od->l3dgw_port) &&
+        (od->nbr->n_nat || od->nbr->n_load_balancer)) {
+        ovn_lflow_add(lflows, od, S_ROUTER_OUT_UNDNAT, 50,
+                      "ip", "flags.loopback = 1; ct_dnat;");
+        ovn_lflow_add(lflows, od, S_ROUTER_OUT_POST_UNDNAT, 50,
+                      "ip && ct.new", "ct_commit { } ; next; ");
+    }
+
     /* Send the IPv6 NS packets to next table. When ovn-controller
      * generates IPv6 NS (for the action - nd_ns{}), the injected
      * packet would go through conntrack - which is not required. */
@@ -11716,7 +11759,7 @@ build_lrouter_nat_defrag_and_lb(struct ovn_datapath *od,
     /* NAT rules are only valid on Gateway routers and routers with
      * l3dgw_port (router has a port with gateway chassis
      * specified). */
-    if (!smap_get(&od->nbr->options, "chassis") && !od->l3dgw_port) {
+    if (!od->is_gw_router && !od->l3dgw_port) {
         return;
     }
 
@@ -11747,7 +11790,12 @@ build_lrouter_nat_defrag_and_lb(struct ovn_datapath *od,
                                    mask, is_v6);
 
         /* ARP resolve for NAT IPs. */
-        if (od->l3dgw_port) {
+        if (od->is_gw_router) {
+            /* Add the NAT external_ip to the nat_entries for
+             * gateway routers. This is required for adding load balancer
+             * flows. */
+            sset_add(&nat_entries, nat->external_ip);
+        } else {
             if (!sset_contains(&nat_entries, nat->external_ip)) {
                 ds_clear(match);
                 ds_put_format(
@@ -11767,11 +11815,6 @@ build_lrouter_nat_defrag_and_lb(struct ovn_datapath *od,
                                         &nat->header_);
                 sset_add(&nat_entries, nat->external_ip);
             }
-        } else {
-            /* Add the NAT external_ip to the nat_entries even for
-             * gateway routers. This is required for adding load balancer
-             * flows.*/
-            sset_add(&nat_entries, nat->external_ip);
         }
 
         /* S_ROUTER_OUT_UNDNAT */
@@ -11850,7 +11893,7 @@ build_lrouter_nat_defrag_and_lb(struct ovn_datapath *od,
     }
 
     /* Handle force SNAT options set in the gateway router. */
-    if (!od->l3dgw_port) {
+    if (od->is_gw_router) {
         if (dnat_force_snat_ip) {
             if (od->dnat_force_snat_addrs.n_ipv4_addrs) {
                 build_lrouter_force_snat_flows(lflows, od, "4",
@@ -11873,18 +11916,6 @@ build_lrouter_nat_defrag_and_lb(struct ovn_datapath *od,
                     od->lb_force_snat_addrs.ipv6_addrs[0].addr_s, "lb");
             }
         }
-
-        /* For gateway router, re-circulate every packet through
-         * the DNAT zone.  This helps with the following.
-         *
-         * Any packet that needs to be unDNATed in the reverse
-         * direction gets unDNATed. Ideally this could be done in
-         * the egress pipeline. But since the gateway router
-         * does not have any feature that depends on the source
-         * ip address being external IP address for IP routing,
-         * we can do it here, saving a future re-circulation. */
-        ovn_lflow_add(lflows, od, S_ROUTER_IN_DNAT, 50,
-                      "ip", "flags.loopback = 1; ct_dnat;");
     }
 
     /* Load balancing and packet defrag are only valid on
diff --git a/northd/ovn_northd.dl b/northd/ovn_northd.dl
index 80ec68bbd766..45bf6250d534 100644
--- a/northd/ovn_northd.dl
+++ b/northd/ovn_northd.dl
@@ -1461,8 +1461,8 @@ function s_ROUTER_IN_ADMISSION():       Stage { Stage{Ingress,  0, "lr_in_admiss
 function s_ROUTER_IN_LOOKUP_NEIGHBOR(): Stage { Stage{Ingress,  1, "lr_in_lookup_neighbor"} }
 function s_ROUTER_IN_LEARN_NEIGHBOR():  Stage { Stage{Ingress,  2, "lr_in_learn_neighbor"} }
 function s_ROUTER_IN_IP_INPUT():        Stage { Stage{Ingress,  3, "lr_in_ip_input"} }
-function s_ROUTER_IN_DEFRAG():          Stage { Stage{Ingress,  4, "lr_in_defrag"} }
-function s_ROUTER_IN_UNSNAT():          Stage { Stage{Ingress,  5, "lr_in_unsnat"} }
+function s_ROUTER_IN_UNSNAT():          Stage { Stage{Ingress,  4, "lr_in_unsnat"} }
+function s_ROUTER_IN_DEFRAG():          Stage { Stage{Ingress,  5, "lr_in_defrag"} }
 function s_ROUTER_IN_DNAT():            Stage { Stage{Ingress,  6, "lr_in_dnat"} }
 function s_ROUTER_IN_ECMP_STATEFUL():   Stage { Stage{Ingress,  7, "lr_in_ecmp_stateful"} }
 function s_ROUTER_IN_ND_RA_OPTIONS():   Stage { Stage{Ingress,  8, "lr_in_nd_ra_options"} }
@@ -1479,9 +1479,10 @@ function s_ROUTER_IN_ARP_REQUEST():     Stage { Stage{Ingress, 18, "lr_in_arp_re
 
 /* Logical router egress stages. */
 function s_ROUTER_OUT_UNDNAT():         Stage { Stage{ Egress,  0, "lr_out_undnat"} }
-function s_ROUTER_OUT_SNAT():           Stage { Stage{ Egress,  1, "lr_out_snat"} }
-function s_ROUTER_OUT_EGR_LOOP():       Stage { Stage{ Egress,  2, "lr_out_egr_loop"} }
-function s_ROUTER_OUT_DELIVERY():       Stage { Stage{ Egress,  3, "lr_out_delivery"} }
+function s_ROUTER_OUT_POST_UNDNAT():    Stage { Stage{ Egress,  1, "lr_out_post_undnat"} }
+function s_ROUTER_OUT_SNAT():           Stage { Stage{ Egress,  2, "lr_out_snat"} }
+function s_ROUTER_OUT_EGR_LOOP():       Stage { Stage{ Egress,  3, "lr_out_egr_loop"} }
+function s_ROUTER_OUT_DELIVERY():       Stage { Stage{ Egress,  4, "lr_out_delivery"} }
 
 /*
  * OVS register usage:
@@ -2883,7 +2884,8 @@ for (&Switch(._uuid = ls_uuid)) {
 function get_match_for_lb_key(ip_address: v46_ip,
                               port: bit<16>,
                               protocol: Option<string>,
-                              redundancy: bool): string = {
+                              redundancy: bool,
+                              use_nexthop_reg: bool): string = {
     var port_match = if (port != 0) {
         var proto = if (protocol == Some{"udp"}) {
             "udp"
@@ -2897,11 +2899,26 @@ function get_match_for_lb_key(ip_address: v46_ip,
     };
 
     var ip_match = match (ip_address) {
-        IPv4{ipv4} -> "ip4.dst == ${ipv4}",
-        IPv6{ipv6} -> "ip6.dst == ${ipv6}"
+        IPv4{ipv4} ->
+            if (use_nexthop_reg) {
+                "${rEG_NEXT_HOP()} == ${ipv4}"
+            } else {
+                "ip4.dst == ${ipv4}"
+            },
+        IPv6{ipv6} ->
+            if (use_nexthop_reg) {
+                "xx${rEG_NEXT_HOP()} == ${ipv6}"
+            } else {
+                "ip6.dst == ${ipv6}"
+            }
     };
 
-    if (redundancy) { "ip && " } else { "" } ++ ip_match ++ port_match
+    var ipx = match (ip_address) {
+        IPv4{ipv4} -> "ip4",
+        IPv6{ipv6} -> "ip6",
+    };
+
+    if (redundancy) { ipx ++ " && " } else { "" } ++ ip_match ++ port_match
 }
 /* New connections in Ingress table. */
 
@@ -2932,7 +2949,11 @@ function build_lb_vip_actions(lbvip: Intern<LBVIPWithStatus>,
     for (pair in lbvip.backends) {
         (var backend, var up) = pair;
         if (up) {
-            up_backends.insert("${backend.ip.to_bracketed_string()}:${backend.port}")
+            if (backend.port != 0) {
+                up_backends.insert("${backend.ip.to_bracketed_string()}:${backend.port}")
+            } else {
+                up_backends.insert("${backend.ip.to_bracketed_string()}")
+            }
         }
     };
 
@@ -2978,7 +2999,7 @@ Flow(.logical_datapath = sw._uuid,
 
         build_lb_vip_actions(lbvip, s_SWITCH_OUT_QOS_MARK(), actions0 ++ actions1)
     },
-    var __match = "ct.new && " ++ get_match_for_lb_key(lbvip.vip_addr, lbvip.vip_port, lb.protocol, false).
+    var __match = "ct.new && " ++ get_match_for_lb_key(lbvip.vip_addr, lbvip.vip_port, lb.protocol, false, false).
 
 /* Ingress Pre-Hairpin/Nat-Hairpin/Hairpin tabled (Priority 0).
  * Packets that don't need hairpinning should continue processing.
@@ -3016,7 +3037,7 @@ for (&Switch(._uuid = ls_uuid, .has_lb_vip = true)) {
          .__match = "ip && ct.new && ct.trk && ${rEGBIT_HAIRPIN()} == 1",
          .actions = "ct_snat_to_vip; next;",
          .external_ids = stage_hint(ls_uuid));
-     
+
     /* If packet needs to be hairpinned, for established sessions there
      * should already be an SNAT conntrack entry.
      */
@@ -5415,13 +5436,14 @@ function default_allow_flow(datapath: uuid, stage: Stage): Flow {
          .actions          = "next;",
          .external_ids     = map_empty()}
 }
-for (&Router(._uuid = lr_uuid)) {
+for (r in &Router(._uuid = lr_uuid)) {
     /* Packets are allowed by default. */
     Flow[default_allow_flow(lr_uuid, s_ROUTER_IN_DEFRAG())];
     Flow[default_allow_flow(lr_uuid, s_ROUTER_IN_UNSNAT())];
     Flow[default_allow_flow(lr_uuid, s_ROUTER_OUT_SNAT())];
     Flow[default_allow_flow(lr_uuid, s_ROUTER_IN_DNAT())];
     Flow[default_allow_flow(lr_uuid, s_ROUTER_OUT_UNDNAT())];
+    Flow[default_allow_flow(lr_uuid, s_ROUTER_OUT_POST_UNDNAT())];
     Flow[default_allow_flow(lr_uuid, s_ROUTER_OUT_EGR_LOOP())];
     Flow[default_allow_flow(lr_uuid, s_ROUTER_IN_ECMP_STATEFUL())];
 
@@ -5436,6 +5458,36 @@ for (&Router(._uuid = lr_uuid)) {
          .external_ids     = map_empty())
 }
 
+for (r in &Router(._uuid = lr_uuid,
+                  .l3dgw_port = l3dgw_port,
+                  .is_gateway = is_gateway,
+                  .nat = nat,
+                  .load_balancer = load_balancer)
+     if (l3dgw_port.is_some() or is_gateway) and (not is_empty(nat) or not is_empty(load_balancer))) {
+    /* If the router has load balancer or DNAT rules, re-circulate every packet
+     * through the DNAT zone so that packets that need to be unDNATed in the
+     * reverse direction get unDNATed.
+     *
+     * We also commit newly initiated connections in the reply direction to the
+     * DNAT zone. This ensures that these flows are tracked. If the flow was
+     * not committed, it would result in datapath flows that match on the
+     * ct.new flag. Some NICs are unable to offload such flows.
+     */
+    Flow(.logical_datapath = lr_uuid,
+        .stage            = s_ROUTER_OUT_POST_UNDNAT(),
+        .priority         = 50,
+        .__match          = "ip && ct.new",
+        .actions          = "ct_commit { } ; next; ",
+        .external_ids     = map_empty());
+
+    Flow(.logical_datapath = lr_uuid,
+        .stage            = s_ROUTER_OUT_UNDNAT(),
+        .priority         = 50,
+        .__match          = "ip",
+        .actions          = "flags.loopback = 1; ct_dnat;",
+        .external_ids     = map_empty())
+}
+
 Flow(.logical_datapath = lr,
      .stage            = s_ROUTER_OUT_SNAT(),
      .priority         = 120,
@@ -5474,7 +5526,7 @@ function lrouter_nat_add_ext_ip_match(
         Some{AllowedExtIps{__as}} -> (" && ${ipX}.${dir} == $${__as.name}", None),
         Some{ExemptedExtIps{__as}} -> {
             /* Priority of logical flows corresponding to exempted_ext_ips is
-             * +1 of the corresponding regulr NAT rule.
+             * +1 of the corresponding regular NAT rule.
              * For example, if we have following NAT rule and we associate
              * exempted external ips to it:
              * "ovn-nbctl lr-nat-add router dnat_and_snat 10.15.24.139 50.0.0.11"
@@ -5782,8 +5834,7 @@ for (r in &Router(._uuid = lr_uuid,
              * part of a reply. We undo the DNAT here.
              *
              * Note that this only applies for NAT on a distributed router.
-             * Undo DNAT on a gateway router is done in the ingress DNAT
-             * pipeline stage. */
+             */
             if ((nat.nat.__type == "dnat" or nat.nat.__type == "dnat_and_snat")) {
                 Some{var gwport} = l3dgw_port in
                 var __match =
@@ -5987,23 +6038,7 @@ for (r in &Router(._uuid = lr_uuid,
         if (not lb_force_snat_ips.is_empty())
         LogicalRouterForceSnatFlows(.logical_router = lr_uuid,
                                     .ips = lb_force_snat_ips,
-                                    .context = "lb");
-
-       /* For gateway router, re-circulate every packet through
-        * the DNAT zone.  This helps with the following.
-        *
-        * Any packet that needs to be unDNATed in the reverse
-        * direction gets unDNATed. Ideally this could be done in
-        * the egress pipeline. But since the gateway router
-        * does not have any feature that depends on the source
-        * ip address being external IP address for IP routing,
-        * we can do it here, saving a future re-circulation. */
-        Flow(.logical_datapath = lr_uuid,
-             .stage            = s_ROUTER_IN_DNAT(),
-             .priority         = 50,
-             .__match          = "ip",
-             .actions          = "flags.loopback = 1; ct_dnat;",
-             .external_ids     = map_empty())
+                                    .context = "lb")
     }
 }
 
@@ -6061,7 +6096,16 @@ for (RouterLBVIP(
          *    pick a DNAT ip address from a group.
          * 2. If there are L4 ports in load balancing rules, we
          *    need the defragmentation to match on L4 ports. */
-        var __match = "ip && ${ipX}.dst == ${ip_address}" in
+        var match1 = "ip && ${ipX}.dst == ${ip_address}" in
+        var match2 =
+            if (port != 0) {
+                " && ${proto}"
+            } else {
+                ""
+            } in
+        var __match = match1 ++ match2 in
+        var xx = ip_address.xxreg() in
+        var __actions = "${xx}${rEG_NEXT_HOP()} = ${ip_address}; ct_dnat;" in
         /* One of these flows must be created for each unique LB VIP address.
          * We create one for each VIP:port pair; flows with the same IP and
          * different port numbers will produce identical flows that will
@@ -6070,7 +6114,7 @@ for (RouterLBVIP(
              .stage            = s_ROUTER_IN_DEFRAG(),
              .priority         = 100,
              .__match          = __match,
-             .actions          = "ct_next;",
+             .actions          = __actions,
              .external_ids     = stage_hint(lb._uuid));
 
         /* Higher priority rules are added for load-balancing in DNAT
@@ -6078,7 +6122,8 @@ for (RouterLBVIP(
          * via add_router_lb_flow().  One flow is for specific matching
          * on ct.new with an action of "ct_lb($targets);".  The other
          * flow is for ct.est with an action of "ct_dnat;". */
-        var match1 = "ip && ${ipX}.dst == ${ip_address}" in
+        var xx = ip_address.xxreg() in
+        var match1 = "${ipX} && ${xx}${rEG_NEXT_HOP()} == ${ip_address}" in
         (var prio, var match2) =
             if (port != 0) {
                 (120, " && ${proto} && ${proto}.dst == ${port}")
@@ -6093,12 +6138,21 @@ for (RouterLBVIP(
         var snat_for_lb = snat_for_lb(r.options, lb) in
         {
             /* A match and actions for established connections. */
-            var est_match = "ct.est && " ++ __match in
+            var est_match = "ct.est && " ++ match1 ++ " && ct_label.natted == 1" ++
+                if (port != 0) {
+                    " && ${proto}"
+                } else {
+                    ""
+                } ++
+                match ((l3dgw_port, backends != "" or lb.options.get_bool_def("reject", false))) {
+                    (Some{gwport}, true) -> " && is_chassis_resident(${redirect_port_name})",
+                    _ -> ""
+                } in
             var actions =
                 match (snat_for_lb) {
-                    SkipSNAT -> "flags.skip_snat_for_lb = 1; ct_dnat;",
-                    ForceSNAT -> "flags.force_snat_for_lb = 1; ct_dnat;",
-                    _ -> "ct_dnat;"
+                    SkipSNAT -> "flags.skip_snat_for_lb = 1; next;",
+                    ForceSNAT -> "flags.force_snat_for_lb = 1; next;",
+                    _ -> "next;"
                 } in
             Flow(.logical_datapath = lr_uuid,
                  .stage            = s_ROUTER_IN_DNAT(),
@@ -6189,7 +6243,7 @@ Flow(.logical_datapath = r._uuid,
     r.load_balancer.contains(lb._uuid),
     var __match
         = "ct.new && " ++
-          get_match_for_lb_key(lbvip.vip_addr, lbvip.vip_port, lb.protocol, true) ++
+          get_match_for_lb_key(lbvip.vip_addr, lbvip.vip_port, lb.protocol, true, true) ++
           match (r.l3dgw_port) {
               Some{gwport} -> " && is_chassis_resident(${r.redirect_port_name})",
               _ -> ""
diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
index 2c811f0941f8..7003e3c89498 100644
--- a/tests/ovn-northd.at
+++ b/tests/ovn-northd.at
@@ -868,25 +868,25 @@ check_flow_match_sets() {
 echo
 echo "IPv4: stateful"
 ovn-nbctl --wait=sb lr-nat-add R1 dnat_and_snat  172.16.1.1 50.0.0.11
-check_flow_match_sets 2 2 2 0 0 0 0
+check_flow_match_sets 2 2 3 0 0 0 0
 ovn-nbctl lr-nat-del R1 dnat_and_snat  172.16.1.1
 
 echo
 echo "IPv4: stateless"
 ovn-nbctl --wait=sb --stateless lr-nat-add R1 dnat_and_snat  172.16.1.1 50.0.0.11
-check_flow_match_sets 2 0 0 2 2 0 0
+check_flow_match_sets 2 0 1 2 2 0 0
 ovn-nbctl lr-nat-del R1 dnat_and_snat  172.16.1.1
 
 echo
 echo "IPv6: stateful"
 ovn-nbctl --wait=sb lr-nat-add R1 dnat_and_snat fd01::1 fd11::2
-check_flow_match_sets 2 2 2 0 0 0 0
+check_flow_match_sets 2 2 3 0 0 0 0
 ovn-nbctl lr-nat-del R1 dnat_and_snat  fd01::1
 
 echo
 echo "IPv6: stateless"
 ovn-nbctl --wait=sb --stateless lr-nat-add R1 dnat_and_snat fd01::1 fd11::2
-check_flow_match_sets 2 0 0 0 0 2 2
+check_flow_match_sets 2 0 1 0 0 2 2
 
 AT_CLEANUP
 ])
@@ -1406,40 +1406,39 @@ AT_SETUP([ovn -- Load balancer VIP in NAT entries])
 AT_SKIP_IF([test $HAVE_PYTHON = no])
 ovn_start
 
-ovn-nbctl lr-add lr0
-ovn-nbctl lrp-add lr0 lr0-public 00:00:01:01:02:04 192.168.2.1/24
-ovn-nbctl lrp-add lr0 lr0-join 00:00:01:01:02:04 10.10.0.1/24
+check ovn-nbctl lr-add lr0
+check ovn-nbctl lrp-add lr0 lr0-public 00:00:01:01:02:04 192.168.2.1/24
+check ovn-nbctl lrp-add lr0 lr0-join 00:00:01:01:02:04 10.10.0.1/24
 
-ovn-nbctl set logical_router lr0 options:chassis=ch1
+check ovn-nbctl set logical_router lr0 options:chassis=ch1
 
-ovn-nbctl lb-add lb1 "192.168.2.1:8080" "10.0.0.4:8080"
-ovn-nbctl lb-add lb2 "192.168.2.4:8080" "10.0.0.5:8080" udp
-ovn-nbctl lb-add lb3 "192.168.2.5:8080" "10.0.0.6:8080"
-ovn-nbctl lb-add lb4 "192.168.2.6:8080" "10.0.0.7:8080"
+check ovn-nbctl lb-add lb1 "192.168.2.1:8080" "10.0.0.4:8080"
+check ovn-nbctl lb-add lb2 "192.168.2.4:8080" "10.0.0.5:8080" udp
+check ovn-nbctl lb-add lb3 "192.168.2.5:8080" "10.0.0.6:8080"
+check ovn-nbctl lb-add lb4 "192.168.2.6:8080" "10.0.0.7:8080"
 
-ovn-nbctl lr-lb-add lr0 lb1
-ovn-nbctl lr-lb-add lr0 lb2
-ovn-nbctl lr-lb-add lr0 lb3
-ovn-nbctl lr-lb-add lr0 lb4
+check ovn-nbctl lr-lb-add lr0 lb1
+check ovn-nbctl lr-lb-add lr0 lb2
+check ovn-nbctl lr-lb-add lr0 lb3
+check ovn-nbctl lr-lb-add lr0 lb4
 
-ovn-nbctl lr-nat-add lr0 snat 192.168.2.1 10.0.0.0/24
-ovn-nbctl lr-nat-add lr0 dnat_and_snat 192.168.2.4 10.0.0.4
+check ovn-nbctl lr-nat-add lr0 snat 192.168.2.1 10.0.0.0/24
+check ovn-nbctl lr-nat-add lr0 dnat_and_snat 192.168.2.4 10.0.0.4
 check ovn-nbctl --wait=sb lr-nat-add lr0 dnat 192.168.2.5 10.0.0.5
 
 ovn-sbctl dump-flows lr0 > sbflows
 AT_CAPTURE_FILE([sbflows])
 
-OVS_WAIT_UNTIL([test 1 = $(grep lr_in_unsnat sbflows | \
-grep "ip4 && ip4.dst == 192.168.2.1 && tcp && tcp.dst == 8080" -c) ])
-
-AT_CHECK([test 1 = $(grep lr_in_unsnat sbflows | \
-grep "ip4 && ip4.dst == 192.168.2.4 && udp && udp.dst == 8080" -c) ])
-
-AT_CHECK([test 1 = $(grep lr_in_unsnat sbflows | \
-grep "ip4 && ip4.dst == 192.168.2.5 && tcp && tcp.dst == 8080" -c) ])
-
-AT_CHECK([test 0 = $(grep lr_in_unsnat sbflows | \
-grep "ip4 && ip4.dst == 192.168.2.6 && tcp && tcp.dst == 8080" -c) ])
+# There should be no flows for LB VIPs in lr_in_unsnat if the VIP is not a
+# dnat_and_snat or snat entry.
+AT_CHECK([grep "lr_in_unsnat" sbflows | sort], [0], [dnl
+  table=4 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
+  table=4 (lr_in_unsnat       ), priority=120  , match=(ip4 && ip4.dst == 192.168.2.1 && tcp && tcp.dst == 8080), action=(next;)
+  table=4 (lr_in_unsnat       ), priority=120  , match=(ip4 && ip4.dst == 192.168.2.4 && udp && udp.dst == 8080), action=(next;)
+  table=4 (lr_in_unsnat       ), priority=120  , match=(ip4 && ip4.dst == 192.168.2.5 && tcp && tcp.dst == 8080), action=(next;)
+  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst == 192.168.2.1), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst == 192.168.2.4), action=(ct_snat;)
+])
 
 AT_CLEANUP
 ])
@@ -1458,8 +1457,8 @@ ovn-nbctl set logical_router lr0 options:dnat_force_snat_ip=192.168.2.3
 ovn-nbctl --wait=sb sync
 
 AT_CHECK([ovn-sbctl lflow-list lr0 | grep lr_in_unsnat | sort], [0], [dnl
-  table=5 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
-  table=5 (lr_in_unsnat       ), priority=110  , match=(ip4 && ip4.dst == 192.168.2.3), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
+  table=4 (lr_in_unsnat       ), priority=110  , match=(ip4 && ip4.dst == 192.168.2.3), action=(ct_snat;)
 ])
 
 AT_CLEANUP
@@ -3163,14 +3162,28 @@ ovn-sbctl dump-flows lr0 > lr0flows
 AT_CAPTURE_FILE([lr0flows])
 
 AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
-  table=5 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
+  table=4 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
+])
+
+AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
+  table=5 (lr_in_defrag       ), priority=0    , match=(1), action=(next;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 10.0.0.10 && tcp), action=(reg0 = 10.0.0.10; ct_dnat;)
 ])
 
 AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
   table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
-  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip && ip4.dst == 10.0.0.10 && tcp && tcp.dst == 80), action=(ct_dnat;)
-  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip && ip4.dst == 10.0.0.10 && tcp && tcp.dst == 80), action=(ct_lb(backends=10.0.0.4:8080);)
-  table=6 (lr_in_dnat         ), priority=50   , match=(ip), action=(flags.loopback = 1; ct_dnat;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && reg0 == 10.0.0.10 && ct_label.natted == 1 && tcp), action=(next;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 10.0.0.10 && tcp && tcp.dst == 80), action=(ct_lb(backends=10.0.0.4:8080);)
+])
+
+AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
+  table=0 (lr_out_undnat      ), priority=0    , match=(1), action=(next;)
+  table=0 (lr_out_undnat      ), priority=50   , match=(ip), action=(flags.loopback = 1; ct_dnat;)
+])
+
+AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
+  table=1 (lr_out_post_undnat ), priority=0    , match=(1), action=(next;)
+  table=1 (lr_out_post_undnat ), priority=50   , match=(ip && ct.new), action=(ct_commit { } ; next; )
 ])
 
 check ovn-nbctl --wait=sb set logical_router lr0 options:lb_force_snat_ip="20.0.0.4 aef0::4"
@@ -3180,23 +3193,37 @@ AT_CAPTURE_FILE([lr0flows])
 
 
 AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
-  table=5 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
-  table=5 (lr_in_unsnat       ), priority=110  , match=(ip4 && ip4.dst == 20.0.0.4), action=(ct_snat;)
-  table=5 (lr_in_unsnat       ), priority=110  , match=(ip6 && ip6.dst == aef0::4), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
+  table=4 (lr_in_unsnat       ), priority=110  , match=(ip4 && ip4.dst == 20.0.0.4), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=110  , match=(ip6 && ip6.dst == aef0::4), action=(ct_snat;)
+])
+
+AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
+  table=5 (lr_in_defrag       ), priority=0    , match=(1), action=(next;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 10.0.0.10 && tcp), action=(reg0 = 10.0.0.10; ct_dnat;)
 ])
 
 AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
   table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
-  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip && ip4.dst == 10.0.0.10 && tcp && tcp.dst == 80), action=(flags.force_snat_for_lb = 1; ct_dnat;)
-  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip && ip4.dst == 10.0.0.10 && tcp && tcp.dst == 80), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.4:8080);)
-  table=6 (lr_in_dnat         ), priority=50   , match=(ip), action=(flags.loopback = 1; ct_dnat;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && reg0 == 10.0.0.10 && ct_label.natted == 1 && tcp), action=(flags.force_snat_for_lb = 1; next;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 10.0.0.10 && tcp && tcp.dst == 80), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.4:8080);)
 ])
 
 AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
-  table=1 (lr_out_snat        ), priority=0    , match=(1), action=(next;)
-  table=1 (lr_out_snat        ), priority=100  , match=(flags.force_snat_for_lb == 1 && ip4), action=(ct_snat(20.0.0.4);)
-  table=1 (lr_out_snat        ), priority=100  , match=(flags.force_snat_for_lb == 1 && ip6), action=(ct_snat(aef0::4);)
-  table=1 (lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
+  table=2 (lr_out_snat        ), priority=0    , match=(1), action=(next;)
+  table=2 (lr_out_snat        ), priority=100  , match=(flags.force_snat_for_lb == 1 && ip4), action=(ct_snat(20.0.0.4);)
+  table=2 (lr_out_snat        ), priority=100  , match=(flags.force_snat_for_lb == 1 && ip6), action=(ct_snat(aef0::4);)
+  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
+])
+
+AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
+  table=0 (lr_out_undnat      ), priority=0    , match=(1), action=(next;)
+  table=0 (lr_out_undnat      ), priority=50   , match=(ip), action=(flags.loopback = 1; ct_dnat;)
+])
+
+AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
+  table=1 (lr_out_post_undnat ), priority=0    , match=(1), action=(next;)
+  table=1 (lr_out_post_undnat ), priority=50   , match=(ip && ct.new), action=(ct_commit { } ; next; )
 ])
 
 check ovn-nbctl --wait=sb set logical_router lr0 options:lb_force_snat_ip="router_ip"
@@ -3208,25 +3235,39 @@ AT_CHECK([grep "lr_in_ip_input" lr0flows | grep "priority=60" | sort], [0], [dnl
 ])
 
 AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
-  table=5 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
-  table=5 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-public" && ip4.dst == 172.168.0.100), action=(ct_snat;)
-  table=5 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-sw0" && ip4.dst == 10.0.0.1), action=(ct_snat;)
-  table=5 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-sw1" && ip4.dst == 20.0.0.1), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
+  table=4 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-public" && ip4.dst == 172.168.0.100), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-sw0" && ip4.dst == 10.0.0.1), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-sw1" && ip4.dst == 20.0.0.1), action=(ct_snat;)
+])
+
+AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
+  table=5 (lr_in_defrag       ), priority=0    , match=(1), action=(next;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 10.0.0.10 && tcp), action=(reg0 = 10.0.0.10; ct_dnat;)
 ])
 
 AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
   table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
-  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip && ip4.dst == 10.0.0.10 && tcp && tcp.dst == 80), action=(flags.force_snat_for_lb = 1; ct_dnat;)
-  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip && ip4.dst == 10.0.0.10 && tcp && tcp.dst == 80), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.4:8080);)
-  table=6 (lr_in_dnat         ), priority=50   , match=(ip), action=(flags.loopback = 1; ct_dnat;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && reg0 == 10.0.0.10 && ct_label.natted == 1 && tcp), action=(flags.force_snat_for_lb = 1; next;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 10.0.0.10 && tcp && tcp.dst == 80), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.4:8080);)
 ])
 
 AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
-  table=1 (lr_out_snat        ), priority=0    , match=(1), action=(next;)
-  table=1 (lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.100);)
-  table=1 (lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
-  table=1 (lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw1"), action=(ct_snat(20.0.0.1);)
-  table=1 (lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
+  table=2 (lr_out_snat        ), priority=0    , match=(1), action=(next;)
+  table=2 (lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.100);)
+  table=2 (lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
+  table=2 (lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw1"), action=(ct_snat(20.0.0.1);)
+  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
+])
+
+AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
+  table=0 (lr_out_undnat      ), priority=0    , match=(1), action=(next;)
+  table=0 (lr_out_undnat      ), priority=50   , match=(ip), action=(flags.loopback = 1; ct_dnat;)
+])
+
+AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
+  table=1 (lr_out_post_undnat ), priority=0    , match=(1), action=(next;)
+  table=1 (lr_out_post_undnat ), priority=50   , match=(ip && ct.new), action=(ct_commit { } ; next; )
 ])
 
 check ovn-nbctl --wait=sb remove logical_router lr0 options chassis
@@ -3235,12 +3276,12 @@ ovn-sbctl dump-flows lr0 > lr0flows
 AT_CAPTURE_FILE([lr0flows])
 
 AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
-  table=5 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
+  table=4 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
 ])
 
 AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
-  table=1 (lr_out_snat        ), priority=0    , match=(1), action=(next;)
-  table=1 (lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
+  table=2 (lr_out_snat        ), priority=0    , match=(1), action=(next;)
+  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
 ])
 
 check ovn-nbctl set logical_router lr0 options:chassis=ch1
@@ -3250,27 +3291,41 @@ ovn-sbctl dump-flows lr0 > lr0flows
 AT_CAPTURE_FILE([lr0flows])
 
 AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
-  table=5 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
-  table=5 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-public" && ip4.dst == 172.168.0.100), action=(ct_snat;)
-  table=5 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-sw0" && ip4.dst == 10.0.0.1), action=(ct_snat;)
-  table=5 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-sw1" && ip4.dst == 20.0.0.1), action=(ct_snat;)
-  table=5 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-sw1" && ip6.dst == bef0::1), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
+  table=4 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-public" && ip4.dst == 172.168.0.100), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-sw0" && ip4.dst == 10.0.0.1), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-sw1" && ip4.dst == 20.0.0.1), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-sw1" && ip6.dst == bef0::1), action=(ct_snat;)
+])
+
+AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
+  table=5 (lr_in_defrag       ), priority=0    , match=(1), action=(next;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 10.0.0.10 && tcp), action=(reg0 = 10.0.0.10; ct_dnat;)
 ])
 
 AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
   table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
-  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip && ip4.dst == 10.0.0.10 && tcp && tcp.dst == 80), action=(flags.force_snat_for_lb = 1; ct_dnat;)
-  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip && ip4.dst == 10.0.0.10 && tcp && tcp.dst == 80), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.4:8080);)
-  table=6 (lr_in_dnat         ), priority=50   , match=(ip), action=(flags.loopback = 1; ct_dnat;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && reg0 == 10.0.0.10 && ct_label.natted == 1 && tcp), action=(flags.force_snat_for_lb = 1; next;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 10.0.0.10 && tcp && tcp.dst == 80), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.4:8080);)
 ])
 
 AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
-  table=1 (lr_out_snat        ), priority=0    , match=(1), action=(next;)
-  table=1 (lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.100);)
-  table=1 (lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
-  table=1 (lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw1"), action=(ct_snat(20.0.0.1);)
-  table=1 (lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip6 && outport == "lr0-sw1"), action=(ct_snat(bef0::1);)
-  table=1 (lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
+  table=2 (lr_out_snat        ), priority=0    , match=(1), action=(next;)
+  table=2 (lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.100);)
+  table=2 (lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
+  table=2 (lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw1"), action=(ct_snat(20.0.0.1);)
+  table=2 (lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip6 && outport == "lr0-sw1"), action=(ct_snat(bef0::1);)
+  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
+])
+
+AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
+  table=0 (lr_out_undnat      ), priority=0    , match=(1), action=(next;)
+  table=0 (lr_out_undnat      ), priority=50   , match=(ip), action=(flags.loopback = 1; ct_dnat;)
+])
+
+AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
+  table=1 (lr_out_post_undnat ), priority=0    , match=(1), action=(next;)
+  table=1 (lr_out_post_undnat ), priority=50   , match=(ip && ct.new), action=(ct_commit { } ; next; )
 ])
 
 check ovn-nbctl --wait=sb lb-add lb2 10.0.0.20:80 10.0.0.40:8080
@@ -3280,20 +3335,35 @@ check ovn-nbctl --wait=sb lb-del lb1
 ovn-sbctl dump-flows lr0 > lr0flows
 
 AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
-  table=5 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
-  table=5 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-public" && ip4.dst == 172.168.0.100), action=(ct_snat;)
-  table=5 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-sw0" && ip4.dst == 10.0.0.1), action=(ct_snat;)
-  table=5 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-sw1" && ip4.dst == 20.0.0.1), action=(ct_snat;)
-  table=5 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-sw1" && ip6.dst == bef0::1), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
+  table=4 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-public" && ip4.dst == 172.168.0.100), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-sw0" && ip4.dst == 10.0.0.1), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-sw1" && ip4.dst == 20.0.0.1), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-sw1" && ip6.dst == bef0::1), action=(ct_snat;)
+])
+
+AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
+  table=5 (lr_in_defrag       ), priority=0    , match=(1), action=(next;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 10.0.0.20 && tcp), action=(reg0 = 10.0.0.20; ct_dnat;)
 ])
 
 AT_CHECK([grep "lr_in_dnat" lr0flows | grep skip_snat_for_lb | sort], [0], [dnl
-  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip && ip4.dst == 10.0.0.20 && tcp && tcp.dst == 80), action=(flags.skip_snat_for_lb = 1; ct_dnat;)
-  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip && ip4.dst == 10.0.0.20 && tcp && tcp.dst == 80), action=(flags.skip_snat_for_lb = 1; ct_lb(backends=10.0.0.40:8080);)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && reg0 == 10.0.0.20 && ct_label.natted == 1 && tcp), action=(flags.skip_snat_for_lb = 1; next;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 10.0.0.20 && tcp && tcp.dst == 80), action=(flags.skip_snat_for_lb = 1; ct_lb(backends=10.0.0.40:8080);)
 ])
 
 AT_CHECK([grep "lr_out_snat" lr0flows | grep skip_snat_for_lb | sort], [0], [dnl
-  table=1 (lr_out_snat        ), priority=120  , match=(flags.skip_snat_for_lb == 1 && ip), action=(next;)
+  table=2 (lr_out_snat        ), priority=120  , match=(flags.skip_snat_for_lb == 1 && ip), action=(next;)
+])
+
+AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
+  table=0 (lr_out_undnat      ), priority=0    , match=(1), action=(next;)
+  table=0 (lr_out_undnat      ), priority=50   , match=(ip), action=(flags.loopback = 1; ct_dnat;)
+])
+
+AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
+  table=1 (lr_out_post_undnat ), priority=0    , match=(1), action=(next;)
+  table=1 (lr_out_post_undnat ), priority=50   , match=(ip && ct.new), action=(ct_commit { } ; next; )
 ])
 
 AT_CLEANUP
@@ -3787,3 +3857,454 @@ AT_CHECK([grep -w "ls_in_dhcp_options" sw0flows | sort], [0], [dnl
 
 AT_CLEANUP
 ])
+
+AT_SETUP([ovn -- LR NAT flows])
+ovn_start
+
+check ovn-nbctl \
+    -- ls-add sw0 \
+    -- lb-add lb0 10.0.0.10:80 10.0.0.4:8080 \
+    -- ls-lb-add sw0 lb0
+
+check ovn-nbctl lr-add lr0
+check ovn-nbctl lrp-add lr0 lr0-sw0 00:00:00:00:ff:01 10.0.0.1/24
+check ovn-nbctl lsp-add sw0 sw0-lr0
+check ovn-nbctl lsp-set-type sw0-lr0 router
+check ovn-nbctl lsp-set-addresses sw0-lr0 00:00:00:00:ff:01
+check ovn-nbctl lsp-set-options sw0-lr0 router-port=lr0-sw0
+
+check ovn-nbctl --wait=sb sync
+
+ovn-sbctl dump-flows lr0 > lr0flows
+AT_CAPTURE_FILE([lr0flows])
+
+AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
+  table=4 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
+])
+
+AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
+  table=5 (lr_in_defrag       ), priority=0    , match=(1), action=(next;)
+])
+
+AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
+  table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
+])
+
+AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
+  table=0 (lr_out_undnat      ), priority=0    , match=(1), action=(next;)
+])
+
+AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
+  table=1 (lr_out_post_undnat ), priority=0    , match=(1), action=(next;)
+])
+
+AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
+  table=2 (lr_out_snat        ), priority=0    , match=(1), action=(next;)
+  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
+])
+
+# Create a few snat and dnat_and_snat entries
+
+check ovn-nbctl lr-nat-add lr0 snat 172.168.0.10 10.0.0.0/24
+check ovn-nbctl lr-nat-add lr0 dnat_and_snat 172.168.0.20 10.0.0.3
+check ovn-nbctl lr-nat-add lr0 snat 172.168.0.30 10.0.0.10
+
+check ovn-nbctl --wait=sb sync
+
+ovn-sbctl dump-flows lr0 > lr0flows
+AT_CAPTURE_FILE([lr0flows])
+
+AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
+  table=4 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
+])
+
+AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
+  table=5 (lr_in_defrag       ), priority=0    , match=(1), action=(next;)
+])
+
+AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
+  table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
+])
+
+AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
+  table=0 (lr_out_undnat      ), priority=0    , match=(1), action=(next;)
+])
+
+AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
+  table=1 (lr_out_post_undnat ), priority=0    , match=(1), action=(next;)
+])
+
+AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
+  table=2 (lr_out_snat        ), priority=0    , match=(1), action=(next;)
+  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
+])
+
+ovn-sbctl chassis-add gw1 geneve 127.0.0.1
+
+# Create a distributed gw port on lr0
+check ovn-nbctl ls-add public
+check ovn-nbctl lrp-add lr0 lr0-public 00:00:00:00:ff:02 172.168.0.10/24
+check ovn-nbctl lrp-set-gateway-chassis lr0-public gw1
+
+ovn-nbctl lsp-add public public-lr0 -- set Logical_Switch_Port public-lr0 \
+    type=router options:router-port=lr0-public \
+    -- lsp-set-addresses public-lr0 router
+
+check ovn-nbctl --wait=sb sync
+
+ovn-sbctl dump-flows lr0 > lr0flows
+AT_CAPTURE_FILE([lr0flows])
+
+AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
+  table=4 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
+  table=4 (lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 172.168.0.10 && inport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 172.168.0.20 && inport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 172.168.0.30 && inport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat;)
+])
+
+AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
+  table=5 (lr_in_defrag       ), priority=0    , match=(1), action=(next;)
+])
+
+AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
+  table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
+  table=6 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 172.168.0.20 && inport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_dnat(10.0.0.3);)
+])
+
+AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
+  table=0 (lr_out_undnat      ), priority=0    , match=(1), action=(next;)
+  table=0 (lr_out_undnat      ), priority=100  , match=(ip && ip4.src == 10.0.0.3 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_dnat;)
+  table=0 (lr_out_undnat      ), priority=50   , match=(ip), action=(flags.loopback = 1; ct_dnat;)
+])
+
+AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
+  table=1 (lr_out_post_undnat ), priority=0    , match=(1), action=(next;)
+  table=1 (lr_out_post_undnat ), priority=50   , match=(ip && ct.new), action=(ct_commit { } ; next; )
+])
+
+AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
+  table=2 (lr_out_snat        ), priority=0    , match=(1), action=(next;)
+  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
+  table=2 (lr_out_snat        ), priority=153  , match=(ip && ip4.src == 10.0.0.0/24 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat(172.168.0.10);)
+  table=2 (lr_out_snat        ), priority=161  , match=(ip && ip4.src == 10.0.0.10 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat(172.168.0.30);)
+  table=2 (lr_out_snat        ), priority=161  , match=(ip && ip4.src == 10.0.0.3 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat(172.168.0.20);)
+])
+
+# Associate load balancers with lr0
+
+check ovn-nbctl lb-add lb0 172.168.0.100:8082 "10.0.0.50:82,10.0.0.60:82"
+
+# No L4
+check ovn-nbctl lb-add lb1 172.168.0.200 "10.0.0.80,10.0.0.81"
+check ovn-nbctl lb-add lb2 172.168.0.210:60 "10.0.0.50:6062,10.0.0.60:6062" udp
+
+check ovn-nbctl lr-lb-add lr0 lb0
+check ovn-nbctl lr-lb-add lr0 lb1
+check ovn-nbctl lr-lb-add lr0 lb2
+check ovn-nbctl --wait=sb sync
+
+ovn-sbctl dump-flows lr0 > lr0flows
+AT_CAPTURE_FILE([lr0flows])
+
+AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
+  table=4 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
+  table=4 (lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 172.168.0.10 && inport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 172.168.0.20 && inport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=100  , match=(ip && ip4.dst == 172.168.0.30 && inport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat;)
+])
+
+AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
+  table=5 (lr_in_defrag       ), priority=0    , match=(1), action=(next;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 10.0.0.10 && tcp), action=(reg0 = 10.0.0.10; ct_dnat;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 172.168.0.100 && tcp), action=(reg0 = 172.168.0.100; ct_dnat;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 172.168.0.200), action=(reg0 = 172.168.0.200; ct_dnat;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 172.168.0.210 && udp), action=(reg0 = 172.168.0.210; ct_dnat;)
+])
+
+AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
+  table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
+  table=6 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 172.168.0.20 && inport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_dnat(10.0.0.3);)
+  table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && reg0 == 172.168.0.200 && ct_label.natted == 1 && is_chassis_resident("cr-lr0-public")), action=(next;)
+  table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && reg0 == 172.168.0.200 && is_chassis_resident("cr-lr0-public")), action=(ct_lb(backends=10.0.0.80,10.0.0.81);)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && reg0 == 10.0.0.10 && ct_label.natted == 1 && tcp && is_chassis_resident("cr-lr0-public")), action=(next;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && reg0 == 172.168.0.100 && ct_label.natted == 1 && tcp && is_chassis_resident("cr-lr0-public")), action=(next;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && reg0 == 172.168.0.210 && ct_label.natted == 1 && udp && is_chassis_resident("cr-lr0-public")), action=(next;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 10.0.0.10 && tcp && tcp.dst == 80 && is_chassis_resident("cr-lr0-public")), action=(ct_lb(backends=10.0.0.4:8080);)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 172.168.0.100 && tcp && tcp.dst == 8082 && is_chassis_resident("cr-lr0-public")), action=(ct_lb(backends=10.0.0.50:82,10.0.0.60:82);)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 172.168.0.210 && udp && udp.dst == 60 && is_chassis_resident("cr-lr0-public")), action=(ct_lb(backends=10.0.0.50:6062,10.0.0.60:6062);)
+])
+
+AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
+  table=0 (lr_out_undnat      ), priority=0    , match=(1), action=(next;)
+  table=0 (lr_out_undnat      ), priority=100  , match=(ip && ip4.src == 10.0.0.3 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_dnat;)
+  table=0 (lr_out_undnat      ), priority=120  , match=(ip4 && ((ip4.src == 10.0.0.4 && tcp.src == 8080)) && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_dnat;)
+  table=0 (lr_out_undnat      ), priority=120  , match=(ip4 && ((ip4.src == 10.0.0.50 && tcp.src == 82) || (ip4.src == 10.0.0.60 && tcp.src == 82)) && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_dnat;)
+  table=0 (lr_out_undnat      ), priority=120  , match=(ip4 && ((ip4.src == 10.0.0.50 && udp.src == 6062) || (ip4.src == 10.0.0.60 && udp.src == 6062)) && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_dnat;)
+  table=0 (lr_out_undnat      ), priority=120  , match=(ip4 && ((ip4.src == 10.0.0.80) || (ip4.src == 10.0.0.81)) && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_dnat;)
+  table=0 (lr_out_undnat      ), priority=50   , match=(ip), action=(flags.loopback = 1; ct_dnat;)
+])
+
+AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
+  table=1 (lr_out_post_undnat ), priority=0    , match=(1), action=(next;)
+  table=1 (lr_out_post_undnat ), priority=50   , match=(ip && ct.new), action=(ct_commit { } ; next; )
+])
+
+AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
+  table=2 (lr_out_snat        ), priority=0    , match=(1), action=(next;)
+  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
+  table=2 (lr_out_snat        ), priority=153  , match=(ip && ip4.src == 10.0.0.0/24 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat(172.168.0.10);)
+  table=2 (lr_out_snat        ), priority=161  , match=(ip && ip4.src == 10.0.0.10 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat(172.168.0.30);)
+  table=2 (lr_out_snat        ), priority=161  , match=(ip && ip4.src == 10.0.0.3 && outport == "lr0-public" && is_chassis_resident("cr-lr0-public")), action=(ct_snat(172.168.0.20);)
+])
+
+# Make the logical router a Gateway router
+check ovn-nbctl clear logical_router_port lr0-public gateway_chassis
+check ovn-nbctl set logical_router lr0 options:chassis=gw1
+check ovn-nbctl --wait=sb sync
+
+ovn-sbctl dump-flows lr0 > lr0flows
+AT_CAPTURE_FILE([lr0flows])
+
+
+AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
+  table=4 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
+  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst == 172.168.0.10), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst == 172.168.0.20), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst == 172.168.0.30), action=(ct_snat;)
+])
+
+AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
+  table=5 (lr_in_defrag       ), priority=0    , match=(1), action=(next;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 10.0.0.10 && tcp), action=(reg0 = 10.0.0.10; ct_dnat;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 172.168.0.100 && tcp), action=(reg0 = 172.168.0.100; ct_dnat;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 172.168.0.200), action=(reg0 = 172.168.0.200; ct_dnat;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 172.168.0.210 && udp), action=(reg0 = 172.168.0.210; ct_dnat;)
+])
+
+AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
+  table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
+  table=6 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 172.168.0.20), action=(flags.loopback = 1; ct_dnat(10.0.0.3);)
+  table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && reg0 == 172.168.0.200 && ct_label.natted == 1), action=(next;)
+  table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && reg0 == 172.168.0.200), action=(ct_lb(backends=10.0.0.80,10.0.0.81);)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && reg0 == 10.0.0.10 && ct_label.natted == 1 && tcp), action=(next;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && reg0 == 172.168.0.100 && ct_label.natted == 1 && tcp), action=(next;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && reg0 == 172.168.0.210 && ct_label.natted == 1 && udp), action=(next;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 10.0.0.10 && tcp && tcp.dst == 80), action=(ct_lb(backends=10.0.0.4:8080);)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 172.168.0.100 && tcp && tcp.dst == 8082), action=(ct_lb(backends=10.0.0.50:82,10.0.0.60:82);)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 172.168.0.210 && udp && udp.dst == 60), action=(ct_lb(backends=10.0.0.50:6062,10.0.0.60:6062);)
+])
+
+AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
+  table=0 (lr_out_undnat      ), priority=0    , match=(1), action=(next;)
+  table=0 (lr_out_undnat      ), priority=50   , match=(ip), action=(flags.loopback = 1; ct_dnat;)
+])
+
+AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
+  table=1 (lr_out_post_undnat ), priority=0    , match=(1), action=(next;)
+  table=1 (lr_out_post_undnat ), priority=50   , match=(ip && ct.new), action=(ct_commit { } ; next; )
+])
+
+AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
+  table=2 (lr_out_snat        ), priority=0    , match=(1), action=(next;)
+  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
+  table=2 (lr_out_snat        ), priority=25   , match=(ip && ip4.src == 10.0.0.0/24), action=(ct_snat(172.168.0.10);)
+  table=2 (lr_out_snat        ), priority=33   , match=(ip && ip4.src == 10.0.0.10), action=(ct_snat(172.168.0.30);)
+  table=2 (lr_out_snat        ), priority=33   , match=(ip && ip4.src == 10.0.0.3), action=(ct_snat(172.168.0.20);)
+])
+
+# Set the lb_force_snat_ip option on the logical router.
+check ovn-nbctl --wait=sb set logical_router lr0 options:lb_force_snat_ip="router_ip"
+check ovn-nbctl --wait=sb sync
+
+ovn-sbctl dump-flows lr0 > lr0flows
+AT_CAPTURE_FILE([lr0flows])
+
+AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
+  table=4 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
+  table=4 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-public" && ip4.dst == 172.168.0.10), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-sw0" && ip4.dst == 10.0.0.1), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst == 172.168.0.10), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst == 172.168.0.20), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst == 172.168.0.30), action=(ct_snat;)
+])
+
+AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
+  table=5 (lr_in_defrag       ), priority=0    , match=(1), action=(next;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 10.0.0.10 && tcp), action=(reg0 = 10.0.0.10; ct_dnat;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 172.168.0.100 && tcp), action=(reg0 = 172.168.0.100; ct_dnat;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 172.168.0.200), action=(reg0 = 172.168.0.200; ct_dnat;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 172.168.0.210 && udp), action=(reg0 = 172.168.0.210; ct_dnat;)
+])
+
+AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
+  table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
+  table=6 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 172.168.0.20), action=(flags.loopback = 1; ct_dnat(10.0.0.3);)
+  table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && reg0 == 172.168.0.200 && ct_label.natted == 1), action=(flags.force_snat_for_lb = 1; next;)
+  table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && reg0 == 172.168.0.200), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.80,10.0.0.81);)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && reg0 == 10.0.0.10 && ct_label.natted == 1 && tcp), action=(flags.force_snat_for_lb = 1; next;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && reg0 == 172.168.0.100 && ct_label.natted == 1 && tcp), action=(flags.force_snat_for_lb = 1; next;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && reg0 == 172.168.0.210 && ct_label.natted == 1 && udp), action=(flags.force_snat_for_lb = 1; next;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 10.0.0.10 && tcp && tcp.dst == 80), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.4:8080);)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 172.168.0.100 && tcp && tcp.dst == 8082), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.50:82,10.0.0.60:82);)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 172.168.0.210 && udp && udp.dst == 60), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.50:6062,10.0.0.60:6062);)
+])
+
+AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
+  table=0 (lr_out_undnat      ), priority=0    , match=(1), action=(next;)
+  table=0 (lr_out_undnat      ), priority=50   , match=(ip), action=(flags.loopback = 1; ct_dnat;)
+])
+
+AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
+  table=1 (lr_out_post_undnat ), priority=0    , match=(1), action=(next;)
+  table=1 (lr_out_post_undnat ), priority=50   , match=(ip && ct.new), action=(ct_commit { } ; next; )
+])
+
+AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
+  table=2 (lr_out_snat        ), priority=0    , match=(1), action=(next;)
+  table=2 (lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
+  table=2 (lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
+  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
+  table=2 (lr_out_snat        ), priority=25   , match=(ip && ip4.src == 10.0.0.0/24), action=(ct_snat(172.168.0.10);)
+  table=2 (lr_out_snat        ), priority=33   , match=(ip && ip4.src == 10.0.0.10), action=(ct_snat(172.168.0.30);)
+  table=2 (lr_out_snat        ), priority=33   , match=(ip && ip4.src == 10.0.0.3), action=(ct_snat(172.168.0.20);)
+])
+
+# Add a LB VIP same as router ip.
+check ovn-nbctl lb-add lb0 172.168.0.10:9082 "10.0.0.50:82,10.0.0.60:82"
+check ovn-nbctl --wait=sb sync
+
+ovn-sbctl dump-flows lr0 > lr0flows
+AT_CAPTURE_FILE([lr0flows])
+
+AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
+  table=4 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
+  table=4 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-public" && ip4.dst == 172.168.0.10), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-sw0" && ip4.dst == 10.0.0.1), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=120  , match=(ip4 && ip4.dst == 172.168.0.10 && tcp && tcp.dst == 9082), action=(next;)
+  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst == 172.168.0.10), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst == 172.168.0.20), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst == 172.168.0.30), action=(ct_snat;)
+])
+
+AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
+  table=5 (lr_in_defrag       ), priority=0    , match=(1), action=(next;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 10.0.0.10 && tcp), action=(reg0 = 10.0.0.10; ct_dnat;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 172.168.0.10 && tcp), action=(reg0 = 172.168.0.10; ct_dnat;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 172.168.0.100 && tcp), action=(reg0 = 172.168.0.100; ct_dnat;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 172.168.0.200), action=(reg0 = 172.168.0.200; ct_dnat;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 172.168.0.210 && udp), action=(reg0 = 172.168.0.210; ct_dnat;)
+])
+
+AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
+  table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
+  table=6 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 172.168.0.20), action=(flags.loopback = 1; ct_dnat(10.0.0.3);)
+  table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && reg0 == 172.168.0.200 && ct_label.natted == 1), action=(flags.force_snat_for_lb = 1; next;)
+  table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && reg0 == 172.168.0.200), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.80,10.0.0.81);)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && reg0 == 10.0.0.10 && ct_label.natted == 1 && tcp), action=(flags.force_snat_for_lb = 1; next;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && reg0 == 172.168.0.10 && ct_label.natted == 1 && tcp), action=(flags.force_snat_for_lb = 1; next;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && reg0 == 172.168.0.100 && ct_label.natted == 1 && tcp), action=(flags.force_snat_for_lb = 1; next;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && reg0 == 172.168.0.210 && ct_label.natted == 1 && udp), action=(flags.force_snat_for_lb = 1; next;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 10.0.0.10 && tcp && tcp.dst == 80), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.4:8080);)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 172.168.0.10 && tcp && tcp.dst == 9082), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.50:82,10.0.0.60:82);)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 172.168.0.100 && tcp && tcp.dst == 8082), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.50:82,10.0.0.60:82);)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 172.168.0.210 && udp && udp.dst == 60), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.50:6062,10.0.0.60:6062);)
+])
+
+AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
+  table=0 (lr_out_undnat      ), priority=0    , match=(1), action=(next;)
+  table=0 (lr_out_undnat      ), priority=50   , match=(ip), action=(flags.loopback = 1; ct_dnat;)
+])
+
+AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
+  table=1 (lr_out_post_undnat ), priority=0    , match=(1), action=(next;)
+  table=1 (lr_out_post_undnat ), priority=50   , match=(ip && ct.new), action=(ct_commit { } ; next; )
+])
+
+AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
+  table=2 (lr_out_snat        ), priority=0    , match=(1), action=(next;)
+  table=2 (lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
+  table=2 (lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
+  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
+  table=2 (lr_out_snat        ), priority=25   , match=(ip && ip4.src == 10.0.0.0/24), action=(ct_snat(172.168.0.10);)
+  table=2 (lr_out_snat        ), priority=33   , match=(ip && ip4.src == 10.0.0.10), action=(ct_snat(172.168.0.30);)
+  table=2 (lr_out_snat        ), priority=33   , match=(ip && ip4.src == 10.0.0.3), action=(ct_snat(172.168.0.20);)
+])
+
+# Add IPv6 router port and LB.
+check ovn-nbctl lrp-del lr0-sw0
+check ovn-nbctl lrp-del lr0-public
+check ovn-nbctl lrp-add lr0 lr0-sw0 00:00:00:00:ff:01 10.0.0.1/24 aef0::1
+check ovn-nbctl lrp-add lr0 lr0-public 00:00:00:00:ff:02 172.168.0.10/24 def0::10
+
+lb1_uuid=$(fetch_column nb:Load_Balancer _uuid name=lb1)
+ovn-nbctl set load_balancer $lb1_uuid vips:'"[[def0::2]]:8000"'='"@<:@aef0::2@:>@:80,@<:@aef0::3@:>@:80"'
+
+ovn-nbctl list load_Balancer
+check ovn-nbctl --wait=sb sync
+
+ovn-sbctl dump-flows lr0 > lr0flows
+AT_CAPTURE_FILE([lr0flows])
+
+AT_CHECK([grep "lr_in_unsnat" lr0flows | sort], [0], [dnl
+  table=4 (lr_in_unsnat       ), priority=0    , match=(1), action=(next;)
+  table=4 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-public" && ip4.dst == 172.168.0.10), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-public" && ip6.dst == def0::10), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-sw0" && ip4.dst == 10.0.0.1), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=110  , match=(inport == "lr0-sw0" && ip6.dst == aef0::1), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=120  , match=(ip4 && ip4.dst == 172.168.0.10 && tcp && tcp.dst == 9082), action=(next;)
+  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst == 172.168.0.10), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst == 172.168.0.20), action=(ct_snat;)
+  table=4 (lr_in_unsnat       ), priority=90   , match=(ip && ip4.dst == 172.168.0.30), action=(ct_snat;)
+])
+
+AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], [dnl
+  table=5 (lr_in_defrag       ), priority=0    , match=(1), action=(next;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 10.0.0.10 && tcp), action=(reg0 = 10.0.0.10; ct_dnat;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 172.168.0.10 && tcp), action=(reg0 = 172.168.0.10; ct_dnat;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 172.168.0.100 && tcp), action=(reg0 = 172.168.0.100; ct_dnat;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 172.168.0.200), action=(reg0 = 172.168.0.200; ct_dnat;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip4.dst == 172.168.0.210 && udp), action=(reg0 = 172.168.0.210; ct_dnat;)
+  table=5 (lr_in_defrag       ), priority=100  , match=(ip && ip6.dst == def0::2 && tcp), action=(xxreg0 = def0::2; ct_dnat;)
+])
+
+AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
+  table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
+  table=6 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 172.168.0.20), action=(flags.loopback = 1; ct_dnat(10.0.0.3);)
+  table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && reg0 == 172.168.0.200 && ct_label.natted == 1), action=(flags.force_snat_for_lb = 1; next;)
+  table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && reg0 == 172.168.0.200), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.80,10.0.0.81);)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && reg0 == 10.0.0.10 && ct_label.natted == 1 && tcp), action=(flags.force_snat_for_lb = 1; next;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && reg0 == 172.168.0.10 && ct_label.natted == 1 && tcp), action=(flags.force_snat_for_lb = 1; next;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && reg0 == 172.168.0.100 && ct_label.natted == 1 && tcp), action=(flags.force_snat_for_lb = 1; next;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && reg0 == 172.168.0.210 && ct_label.natted == 1 && udp), action=(flags.force_snat_for_lb = 1; next;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip6 && xxreg0 == def0::2 && ct_label.natted == 1 && tcp), action=(flags.force_snat_for_lb = 1; next;)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 10.0.0.10 && tcp && tcp.dst == 80), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.4:8080);)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 172.168.0.10 && tcp && tcp.dst == 9082), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.50:82,10.0.0.60:82);)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 172.168.0.100 && tcp && tcp.dst == 8082), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.50:82,10.0.0.60:82);)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 172.168.0.210 && udp && udp.dst == 60), action=(flags.force_snat_for_lb = 1; ct_lb(backends=10.0.0.50:6062,10.0.0.60:6062);)
+  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip6 && xxreg0 == def0::2 && tcp && tcp.dst == 8000), action=(flags.force_snat_for_lb = 1; ct_lb(backends=[[aef0::2]]:80,[[aef0::3]]:80);)
+])
+
+AT_CHECK([grep "lr_out_undnat" lr0flows | sort], [0], [dnl
+  table=0 (lr_out_undnat      ), priority=0    , match=(1), action=(next;)
+  table=0 (lr_out_undnat      ), priority=50   , match=(ip), action=(flags.loopback = 1; ct_dnat;)
+])
+
+AT_CHECK([grep "lr_out_post_undnat" lr0flows | sort], [0], [dnl
+  table=1 (lr_out_post_undnat ), priority=0    , match=(1), action=(next;)
+  table=1 (lr_out_post_undnat ), priority=50   , match=(ip && ct.new), action=(ct_commit { } ; next; )
+])
+
+AT_CHECK([grep "lr_out_snat" lr0flows | sort], [0], [dnl
+  table=2 (lr_out_snat        ), priority=0    , match=(1), action=(next;)
+  table=2 (lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
+  table=2 (lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
+  table=2 (lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip6 && outport == "lr0-public"), action=(ct_snat(def0::10);)
+  table=2 (lr_out_snat        ), priority=110  , match=(flags.force_snat_for_lb == 1 && ip6 && outport == "lr0-sw0"), action=(ct_snat(aef0::1);)
+  table=2 (lr_out_snat        ), priority=120  , match=(nd_ns), action=(next;)
+  table=2 (lr_out_snat        ), priority=25   , match=(ip && ip4.src == 10.0.0.0/24), action=(ct_snat(172.168.0.10);)
+  table=2 (lr_out_snat        ), priority=33   , match=(ip && ip4.src == 10.0.0.10), action=(ct_snat(172.168.0.30);)
+  table=2 (lr_out_snat        ), priority=33   , match=(ip && ip4.src == 10.0.0.3), action=(ct_snat(172.168.0.20);)
+])
+
+AT_CLEANUP
+])
diff --git a/tests/ovn.at b/tests/ovn.at
index 8fd200ccaed4..ddbe7ffa1570 100644
--- a/tests/ovn.at
+++ b/tests/ovn.at
@@ -20524,7 +20524,7 @@ AT_CAPTURE_FILE([sbflows])
 AT_CHECK([for regex in ct_snat ct_dnat ip4.dst= ip4.src=; do
   grep -c "$regex" sbflows;
 done], [0], [0
-0
+1
 2
 2
 ])
@@ -20688,7 +20688,7 @@ AT_CAPTURE_FILE([sbflows2])
 OVS_WAIT_FOR_OUTPUT(
   [ovn-sbctl dump-flows > sbflows2
    ovn-sbctl dump-flows lr0 | grep ct_lb | grep priority=120 | sed 's/table=..//'], 0,
-  [  (lr_in_dnat         ), priority=120  , match=(ct.new && ip && ip4.dst == 10.0.0.10 && tcp && tcp.dst == 80 && is_chassis_resident("cr-lr0-public")), action=(ct_lb(backends=10.0.0.3:80,20.0.0.3:80; hash_fields="ip_dst,ip_src,tcp_dst,tcp_src");)
+  [  (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 10.0.0.10 && tcp && tcp.dst == 80 && is_chassis_resident("cr-lr0-public")), action=(ct_lb(backends=10.0.0.3:80,20.0.0.3:80; hash_fields="ip_dst,ip_src,tcp_dst,tcp_src");)
 ])
 
 # get the svc monitor mac.
@@ -20729,8 +20729,8 @@ AT_CHECK(
 AT_CAPTURE_FILE([sbflows4])
 ovn-sbctl dump-flows lr0 > sbflows4
 AT_CHECK([grep lr_in_dnat sbflows4 | grep priority=120 | sed 's/table=..//' | sort], [0], [dnl
-  (lr_in_dnat         ), priority=120  , match=(ct.est && ip && ip4.dst == 10.0.0.10 && tcp && tcp.dst == 80 && is_chassis_resident("cr-lr0-public")), action=(ct_dnat;)
-  (lr_in_dnat         ), priority=120  , match=(ct.new && ip && ip4.dst == 10.0.0.10 && tcp && tcp.dst == 80 && is_chassis_resident("cr-lr0-public")), action=(drop;)
+  (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && reg0 == 10.0.0.10 && ct_label.natted == 1 && tcp && is_chassis_resident("cr-lr0-public")), action=(next;)
+  (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && reg0 == 10.0.0.10 && tcp && tcp.dst == 80 && is_chassis_resident("cr-lr0-public")), action=(drop;)
 ])
 
 # Delete sw0-p1
diff --git a/tests/system-ovn.at b/tests/system-ovn.at
index a5f10ce38af8..c1875ac0443f 100644
--- a/tests/system-ovn.at
+++ b/tests/system-ovn.at
@@ -116,6 +116,7 @@ NS_CHECK_EXEC([alice1], [ping -q -c 3 -i 0.3 -w 2 30.0.0.2 | FORMAT_PING], \
 # Check conntrack entries.
 AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(172.16.1.2) | \
 sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
+icmp,orig=(src=172.16.1.2,dst=192.168.1.2,id=<cleared>,type=8,code=0),reply=(src=192.168.1.2,dst=172.16.1.2,id=<cleared>,type=0,code=0),zone=<cleared>
 icmp,orig=(src=172.16.1.2,dst=30.0.0.2,id=<cleared>,type=8,code=0),reply=(src=192.168.1.2,dst=172.16.1.2,id=<cleared>,type=0,code=0),zone=<cleared>
 ])
 
@@ -298,6 +299,7 @@ NS_CHECK_EXEC([alice1], [ping6 -q -c 3 -i 0.3 -w 2 fd30::2 | FORMAT_PING], \
 # Check conntrack entries.
 AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(fd21::2) | \
 sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
+icmpv6,orig=(src=fd21::2,dst=fd11::2,id=<cleared>,type=128,code=0),reply=(src=fd11::2,dst=fd21::2,id=<cleared>,type=129,code=0),zone=<cleared>
 icmpv6,orig=(src=fd21::2,dst=fd30::2,id=<cleared>,type=128,code=0),reply=(src=fd11::2,dst=fd21::2,id=<cleared>,type=129,code=0),zone=<cleared>
 ])
 
@@ -2197,11 +2199,12 @@ tcp,orig=(src=172.16.1.2,dst=30.0.0.2,sport=<cleared>,dport=<cleared>),reply=(sr
 ])
 
 check_est_flows () {
-    n=$(ovs-ofctl dump-flows br-int table=15 | grep \
-"priority=120,ct_state=+est+trk,tcp,metadata=0x2,nw_dst=30.0.0.2,tp_dst=8000" \
-| grep nat | sed -n 's/.*n_packets=\([[0-9]]\{1,\}\).*/\1/p')
+    n=$(ovs-ofctl dump-flows br-int table=13 | grep \
+"priority=100,tcp,metadata=0x2,nw_dst=30.0.0.2" | grep nat |
+sed -n 's/.*n_packets=\([[0-9]]\{1,\}\).*/\1/p')
 
     echo "n_packets=$n"
+    test -n "$n"
     test "$n" != 0
 }
 
@@ -2222,7 +2225,7 @@ ovn-nbctl set load_balancer $uuid vips:'"30.0.0.2:8000"'='"192.168.1.2:80,192.16
 
 ovn-nbctl list load_balancer
 ovn-sbctl dump-flows R2
-OVS_WAIT_UNTIL([ovs-ofctl -O OpenFlow13 dump-flows br-int table=41 | \
+OVS_WAIT_UNTIL([ovs-ofctl -O OpenFlow13 dump-flows br-int table=42 | \
 grep 'nat(src=20.0.0.2)'])
 
 dnl Test load-balancing that includes L4 ports in NAT.
@@ -2260,7 +2263,7 @@ ovn-nbctl set load_balancer $uuid vips:'"30.0.0.2:8000"'='"192.168.1.2:80,192.16
 
 ovn-nbctl list load_balancer
 ovn-sbctl dump-flows R2
-OVS_WAIT_UNTIL([ovs-ofctl -O OpenFlow13 dump-flows br-int table=41 | \
+OVS_WAIT_UNTIL([ovs-ofctl -O OpenFlow13 dump-flows br-int table=42 | \
 grep 'nat(src=20.0.0.2)'])
 
 rm -f wget*.log
@@ -3859,33 +3862,42 @@ NS_CHECK_EXEC([foo1], [ping -q -c 3 -i 0.3 -w 2 192.168.2.2 | FORMAT_PING], \
 3 packets transmitted, 3 received, 0% packet loss, time 0ms
 ])
 
-# We verify that no NAT happened via 'dump-conntrack' command.
-# We verify that no NAT happened via 'dump-conntrack' command.
+# We verify that the connection is tracked but not NATted. This is due to the
+# unDNAT table in the egress router pipeline.
 AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(192.168.2.2) | \
-sed -e 's/zone=[[0-9]]*/zone=<cleared>/' | wc -l], [0], [0
+sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
+icmp,orig=(src=192.168.1.2,dst=192.168.2.2,id=<cleared>,type=8,code=0),reply=(src=192.168.2.2,dst=192.168.1.2,id=<cleared>,type=0,code=0),zone=<cleared>
 ])
 
+AT_CHECK([ovs-appctl dpctl/flush-conntrack])
 # East-West No NAT: 'foo2' pings 'bar1' using 192.168.2.2.
 NS_CHECK_EXEC([foo2], [ping -q -c 3 -i 0.3 -w 2 192.168.2.2 | FORMAT_PING], \
 [0], [dnl
 3 packets transmitted, 3 received, 0% packet loss, time 0ms
 ])
 
-# We verify that no NAT happened via 'dump-conntrack' command.
-# We verify that no NAT happened via 'dump-conntrack' command.
+# We verify that the connection is tracked but not NATted. This is due to the
+# unDNAT table in the egress router pipeline.
 AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(192.168.2.2) | \
-sed -e 's/zone=[[0-9]]*/zone=<cleared>/' | wc -l], [0], [0
+sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
+icmp,orig=(src=192.168.1.3,dst=192.168.2.2,id=<cleared>,type=8,code=0),reply=(src=192.168.2.2,dst=192.168.1.3,id=<cleared>,type=0,code=0),zone=<cleared>
 ])
 
+AT_CHECK([ovs-appctl dpctl/flush-conntrack])
 # East-West No NAT: 'bar1' pings 'foo2' using 192.168.1.3.
 NS_CHECK_EXEC([bar1], [ping -q -c 3 -i 0.3 -w 2 192.168.1.3 | FORMAT_PING], \
 [0], [dnl
 3 packets transmitted, 3 received, 0% packet loss, time 0ms
 ])
 
-# We verify that no NAT happened via 'dump-conntrack' command.
+# We verify that the connection is tracked but not NATted. This is due to the
+# unDNAT table in the egress router pipeline.
 AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(192.168.2.2) | \
-sed -e 's/zone=[[0-9]]*/zone=<cleared>/' | wc -l], [0], [0
+sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
+icmp,orig=(src=192.168.2.2,dst=192.168.1.3,id=<cleared>,type=8,code=0),reply=(src=192.168.1.3,dst=192.168.2.2,id=<cleared>,type=0,code=0),zone=<cleared>
 ])
 
+AT_CHECK([ovs-appctl dpctl/flush-conntrack])
 # East-West NAT: 'foo1' pings 'bar1' using 172.16.1.4.
 NS_CHECK_EXEC([foo1], [ping -q -c 3 -i 0.3 -w 2 172.16.1.4 | FORMAT_PING], \
 [0], [dnl
@@ -3898,6 +3910,7 @@ AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(172.16.1.4) | \
 sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
 icmp,orig=(src=172.16.1.1,dst=172.16.1.4,id=<cleared>,type=8,code=0),reply=(src=192.168.2.2,dst=172.16.1.1,id=<cleared>,type=0,code=0),zone=<cleared>
 icmp,orig=(src=192.168.1.2,dst=172.16.1.4,id=<cleared>,type=8,code=0),reply=(src=172.16.1.4,dst=172.16.1.1,id=<cleared>,type=0,code=0),zone=<cleared>
+icmp,orig=(src=192.168.1.2,dst=172.16.1.4,id=<cleared>,type=8,code=0),reply=(src=172.16.1.4,dst=192.168.1.2,id=<cleared>,type=0,code=0),zone=<cleared>
 ])
 
 AT_CHECK([ovs-appctl dpctl/flush-conntrack])
@@ -4043,33 +4056,42 @@ NS_CHECK_EXEC([foo1], [ping -q -c 3 -i 0.3 -w 2 fd12::2 | FORMAT_PING], \
 3 packets transmitted, 3 received, 0% packet loss, time 0ms
 ])
 
-# We verify that no NAT happened via 'dump-conntrack' command.
+# We verify that the connection is tracked but not NATted. This is due to the
+# unDNAT table in the egress router pipeline.
 AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(fd12::2) | \
-sed -e 's/zone=[[0-9]]*/zone=<cleared>/' | wc -l], [0], [0
+sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
+icmpv6,orig=(src=fd11::2,dst=fd12::2,id=<cleared>,type=128,code=0),reply=(src=fd12::2,dst=fd11::2,id=<cleared>,type=129,code=0),zone=<cleared>
 ])
 
+AT_CHECK([ovs-appctl dpctl/flush-conntrack])
 # East-West No NAT: 'foo2' pings 'bar1' using fd12::2.
 NS_CHECK_EXEC([foo2], [ping -q -c 3 -i 0.3 -w 2 fd12::2 | FORMAT_PING], \
 [0], [dnl
 3 packets transmitted, 3 received, 0% packet loss, time 0ms
 ])
 
-# We verify that no NAT happened via 'dump-conntrack' command.
+# We verify that the connection is tracked but not NATted. This is due to the
+# unDNAT table in the egress router pipeline.
 AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(fd12::2) | \
-sed -e 's/zone=[[0-9]]*/zone=<cleared>/' | wc -l], [0], [0
+sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
+icmpv6,orig=(src=fd11::3,dst=fd12::2,id=<cleared>,type=128,code=0),reply=(src=fd12::2,dst=fd11::3,id=<cleared>,type=129,code=0),zone=<cleared>
 ])
 
+AT_CHECK([ovs-appctl dpctl/flush-conntrack])
 # East-West No NAT: 'bar1' pings 'foo2' using fd11::3.
 NS_CHECK_EXEC([bar1], [ping -q -c 3 -i 0.3 -w 2 fd11::3 | FORMAT_PING], \
 [0], [dnl
 3 packets transmitted, 3 received, 0% packet loss, time 0ms
 ])
 
-# We verify that no NAT happened via 'dump-conntrack' command.
+# We verify that the connection is tracked but not NATted. This is due to the
+# unDNAT table in the egress router pipeline.
 AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(fd12::2) | \
-sed -e 's/zone=[[0-9]]*/zone=<cleared>/' | wc -l], [0], [0
+sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
+icmpv6,orig=(src=fd12::2,dst=fd11::3,id=<cleared>,type=128,code=0),reply=(src=fd11::3,dst=fd12::2,id=<cleared>,type=129,code=0),zone=<cleared>
 ])
 
+AT_CHECK([ovs-appctl dpctl/flush-conntrack])
 # East-West NAT: 'foo1' pings 'bar1' using fd20::4.
 NS_CHECK_EXEC([foo1], [ping -q -c 3 -i 0.3 -w 2 fd20::4 | FORMAT_PING], \
 [0], [dnl
@@ -4080,6 +4102,7 @@ NS_CHECK_EXEC([foo1], [ping -q -c 3 -i 0.3 -w 2 fd20::4 | FORMAT_PING], \
 # Then DNAT of 'bar1' address happens (listed first below).
 AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(fd20::4) | \
 sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
+icmpv6,orig=(src=fd11::2,dst=fd20::4,id=<cleared>,type=128,code=0),reply=(src=fd20::4,dst=fd11::2,id=<cleared>,type=129,code=0),zone=<cleared>
 icmpv6,orig=(src=fd11::2,dst=fd20::4,id=<cleared>,type=128,code=0),reply=(src=fd20::4,dst=fd20::1,id=<cleared>,type=129,code=0),zone=<cleared>
 icmpv6,orig=(src=fd20::1,dst=fd20::4,id=<cleared>,type=128,code=0),reply=(src=fd12::2,dst=fd20::1,id=<cleared>,type=129,code=0),zone=<cleared>
 ])
@@ -6010,6 +6033,7 @@ NS_CHECK_EXEC([sw01-x], [ping -q -c 3 -i 0.3 -w 2 172.16.1.100 | FORMAT_PING], \
 AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(172.16.1.100) | \
 sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
 icmp,orig=(src=192.168.1.2,dst=172.16.1.100,id=<cleared>,type=8,code=0),reply=(src=172.16.1.100,dst=172.16.1.20,id=<cleared>,type=0,code=0),zone=<cleared>
+icmp,orig=(src=192.168.1.2,dst=172.16.1.100,id=<cleared>,type=8,code=0),reply=(src=172.16.1.100,dst=192.168.1.2,id=<cleared>,type=0,code=0),zone=<cleared>
 ])
 
 OVS_APP_EXIT_AND_WAIT([ovn-controller])
-- 
2.27.0