[ovs-dev] [PATCH RFC ovn 1/1] RFC: Logical flow generation in ovn-controller

Mark Michelson mmichels at redhat.com
Mon Jul 12 15:08:32 UTC 2021


Full disclosure: I have not looked at all the details in this patch, 
since it is quite large. However, I felt I should comment on the idea.

The memory savings in ovn-northd and the southbound database are quite 
nice. That's to be expected, since the southbound database holds fewer 
logical flows and ovn-northd no longer has to calculate them. I think 
it's telling, though, that the scale tests didn't show any noticeable 
improvement in performance.

I think one thing that could help would be to skip logical flow 
creation altogether in ovn-controller. It doesn't make much sense to 
take the source data, translate it into logical flow syntax, only to 
run it through the expression parser and translate it into OpenFlow. 
You could presumably create OpenFlow matches and actions directly, 
saving lots of processing. Of course, that turns this 
semi-backwards-incompatible change into a wholly 
backwards-incompatible one :)
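
To make that concrete, here is a rough, untested sketch (not part of 
the patch) of what installing a flow directly from southbound data 
could look like, reusing the match/ofpact helpers and ofctrl_add_flow() 
that ovn-controller already has. The function name, the table offset, 
the priority, and the use of put_load()/put_resubmit() (today static 
helpers in physical.c) are assumptions for illustration only:

/* Hypothetical: install "eth.dst == mac => outport = pb; output;" for
 * one datapath without generating and re-parsing a logical flow. */
static void
add_l2_unicast_flow_directly(struct ovn_desired_flow_table *flow_table,
                             const struct sbrec_datapath_binding *dp,
                             const struct sbrec_port_binding *pb,
                             struct eth_addr mac,
                             const struct uuid *flow_uuid)
{
    struct match match;
    match_init_catchall(&match);
    /* Scope the flow to the datapath and match on the destination MAC. */
    match_set_metadata(&match, htonll(dp->tunnel_key));
    match_set_dl_dst(&match, mac);

    uint64_t stub[128 / 8];
    struct ofpbuf ofpacts = OFPBUF_STUB_INITIALIZER(stub);
    /* "outport = pb; output;" becomes: load the port tunnel key into the
     * logical outport register, then resubmit to the output table. */
    put_load(pb->tunnel_key, MFF_LOG_OUTPORT, 0, 32, &ofpacts);
    put_resubmit(OFTABLE_REMOTE_OUTPUT, &ofpacts);

    ofctrl_add_flow(flow_table,
                    OFTABLE_LOG_INGRESS_PIPELINE + 27, /* e.g. L2 lookup */
                    50, flow_uuid->parts[0], &match, &ofpacts, flow_uuid);
    ofpbuf_uninit(&ofpacts);
}

The point isn't this particular flow, but that the expression parser 
and ovnacts encoding steps drop out entirely.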



I'm concerned about a few things in the patch as presented.

First, moving logical flow generation from ovn-northd to ovn-controller 
seems like it's just shifting the work from one place to another. And in 
the case where datapaths have ports bound on many nodes, it means a lot 
of the same work is being done on multiple ovn-controller instances. 
Incremental processing (I-P) in ovn-controller could reduce the amount 
of work ovn-controller performs per iteration, but it also introduces 
error potential.

Second, is there a danger in moving logical flows out of the southbound 
database? You modified `ovn-sbctl lflow-list` to do some magic to list 
the logical flows, but are there any CMSs that access the Logical_Flow 
table directly in the SB DB for any reason?

Third, does ovn-trace still work?

Overall, I'm fine with the idea of re-architecting things in OVN, but I 
think it requires a bit more of a plan than this. A change like this is 
doing its best to hide that anything has changed, when behind the scenes 
things have changed a lot. I think a true re-architecting will require 
user-facing changes, too. If we still used major-minor versioning for 
OVN, this would be the sort of thing that would result in a major 
version bump.


On 6/25/21 7:31 PM, numans at ovn.org wrote:
> From: Numan Siddique <numans at ovn.org>
> 
> This is an RFC patch to move the logical flow generation from
> ovn-northd to ovn-controller.
> 
> This patch doesn't move all the generation to ovn-controller.
> ovn-northd still does the flow generation for
>    - ACLs/Port groups.
>    - DHCP options
>    - Multicast groups
>    - And a few others.
> 
> Other than ACLs and Port groups, it would be possible to move
> flow generation for the above-mentioned things.  But before doing
> all that, it is worth evaluating whether the proposed RFC makes sense.
> 
> The main motivation for this RFC effort is to
>    - Address the scale issues seen.  For large-scale deployments,
>      ovn-northd takes a lot of CPU (ovn-northd-ddlog should help here)
>      and memory for the computation, and so do the Southbound
>      ovsdb-servers.
> 
>    - A very large southbound DB with many logical flows affects raft
>      consensus and requires increasing the raft election timers.
> 
>    - Logical flows contribute the majority of the overall southbound
>      DB size.
> 
> This RFC demonstrates that it is possible for each ovn-controller
> to generate logical flows.
> 
> These are some of the findings with my general and scale testing.
> 
> Below are the test findings with a huge pre-existing Northbound
> database (13 MB) with datapath groups enabled:
>    - Southbound DB size:
>           * with ovn-northd-master - 35 MB
>           * with ovn-northd-proposed-rfc - 12 MB
> 
>    - Number of logical flows:
>           * with ovn-northd-master - 78581
>           * with ovn-northd-proposed-rfc - 7933
> 
>    - RSS Memory consumption of
>          * ovn-northd-master -       441368 KiB
>          * ovn-northd-proposed-rfc - 115540 KiB
> 
>          * ovn-controller-master -       1267716 KiB
>          * ovn-controller-proposed-rfc - 915876 KiB
> 
>          * SB ovsdb-server-with-ovn-master -       612296 KiB
>          * SB ovsdb-server-with-proposed-rfc-ovn - 134680 KiB
> 
> With scale testing of 500 fake multinode nodes, each node claiming
> a few port bindings, the end result is about the same.  No significant
> improvements were seen with the proposed RFC patch; the results are
> essentially identical.
> 
> I think more scale testing needs to be done to determine whether
> the CPU and memory usage reduction in ovn-northd and the
> ovsdb-servers will have a major impact.  Testing with
> real Kubernetes/OpenStack deployments would help.
> 
> A few observations:
>    -  It is possible to move the flow generation to each ovn-controller.
> 
>    -  Each ovn-controller only generates the logical flows if required,
>       i.e. if the datapath is in 'local_datapaths'.
> 
>    -  This RFC patch does complicate the ovn-controller code, which
>       already has many complicated bits.
> 
>    -  I was expecting the scale test results to improve and the
>       end-to-end time of pod/VM creation to be quicker.  But that is
>       not the case, which is a disappointment.
> 
> Submitting this RFC patch to get feedback and have a conversation
> about whether it is worth the effort.
> 
> Signed-off-by: Numan Siddique <numans at ovn.org>
> ---
>   controller/automake.mk      |    4 +-
>   controller/binding.c        |  379 +---
>   controller/binding.h        |   13 -
>   controller/lflow-generate.c |  179 ++
>   controller/lflow-generate.h |   49 +
>   controller/lflow.c          |  421 +++--
>   controller/lflow.h          |    8 +
>   controller/lport.c          |   16 -
>   controller/lport.h          |    4 -
>   controller/ovn-controller.c |  674 ++++++-
>   controller/ovn-controller.h |   34 +-
>   controller/patch.c          |    1 +
>   controller/physical.c       |   58 +-
>   controller/pinctrl.c        |   18 +-
>   lib/automake.mk             |    6 +-
>   lib/lb.c                    |   27 +
>   lib/lb.h                    |    2 +
>   lib/ldata.c                 |  895 +++++++++
>   lib/ldata.h                 |  251 +++
>   lib/lflow.c                 | 3514 +++++++++++++++++++++++++++++++++++
>   lib/lflow.h                 |  333 ++++
>   lib/ovn-util.c              |   83 +
>   lib/ovn-util.h              |   32 +
>   northd/ovn-northd.c         | 3359 +++------------------------------
>   ovn-sb.ovsschema            |   16 +-
>   ovn-sb.xml                  |   16 +
>   utilities/ovn-dbctl.c       |    7 +-
>   utilities/ovn-dbctl.h       |    3 +-
>   utilities/ovn-sbctl.c       |  256 +++
>   29 files changed, 6980 insertions(+), 3678 deletions(-)
>   create mode 100644 controller/lflow-generate.c
>   create mode 100644 controller/lflow-generate.h
>   create mode 100644 lib/ldata.c
>   create mode 100644 lib/ldata.h
>   create mode 100644 lib/lflow.c
>   create mode 100644 lib/lflow.h
> 
> diff --git a/controller/automake.mk b/controller/automake.mk
> index 2f6c508907..7b410c93ff 100644
> --- a/controller/automake.mk
> +++ b/controller/automake.mk
> @@ -33,7 +33,9 @@ controller_ovn_controller_SOURCES = \
>   	controller/physical.c \
>   	controller/physical.h \
>   	controller/mac-learn.c \
> -	controller/mac-learn.h
> +	controller/mac-learn.h \
> +	controller/lflow-generate.c \
> +	controller/lflow-generate.h
>   
>   controller_ovn_controller_LDADD = lib/libovn.la $(OVS_LIBDIR)/libopenvswitch.la
>   man_MANS += controller/ovn-controller.8
> diff --git a/controller/binding.c b/controller/binding.c
> index 7fde0fdbb9..0fd1cc4a47 100644
> --- a/controller/binding.c
> +++ b/controller/binding.c
> @@ -14,13 +14,8 @@
>    */
>   
>   #include <config.h>
> -#include "binding.h"
> -#include "ha-chassis.h"
> -#include "if-status.h"
> -#include "lflow.h"
> -#include "lport.h"
> -#include "patch.h"
>   
> +/* OVS includes. */
>   #include "lib/bitmap.h"
>   #include "openvswitch/poll-loop.h"
>   #include "lib/sset.h"
> @@ -29,9 +24,20 @@
>   #include "lib/vswitch-idl.h"
>   #include "openvswitch/hmap.h"
>   #include "openvswitch/vlog.h"
> -#include "lib/chassis-index.h"
>   #include "lib/ovn-sb-idl.h"
> +
> +/* OVN includes. */
> +#include "binding.h"
> +#include "ha-chassis.h"
> +#include "if-status.h"
> +#include "lflow.h"
> +#include "lib/chassis-index.h"
> +#include "lib/ldata.h"
> +#include "lib/ovn-util.h"
> +#include "lport.h"
>   #include "ovn-controller.h"
> +#include "patch.h"
> +
>   
>   VLOG_DEFINE_THIS_MODULE(binding);
>   
> @@ -76,105 +82,26 @@ binding_register_ovs_idl(struct ovsdb_idl *ovs_idl)
>       ovsdb_idl_add_column(ovs_idl, &ovsrec_qos_col_type);
>   }
>   
> -static struct tracked_binding_datapath *tracked_binding_datapath_create(
> -    const struct sbrec_datapath_binding *,
> -    bool is_new, struct hmap *tracked_dps);
> -static struct tracked_binding_datapath *tracked_binding_datapath_find(
> -    struct hmap *, const struct sbrec_datapath_binding *);
> -static void tracked_binding_datapath_lport_add(
> -    const struct sbrec_port_binding *, struct hmap *tracked_datapaths);
>   static void update_lport_tracking(const struct sbrec_port_binding *pb,
>                                     struct hmap *tracked_dp_bindings);
>   
> -static void
> -add_local_datapath__(struct ovsdb_idl_index *sbrec_datapath_binding_by_key,
> -                     struct ovsdb_idl_index *sbrec_port_binding_by_datapath,
> -                     struct ovsdb_idl_index *sbrec_port_binding_by_name,
> -                     const struct sbrec_datapath_binding *datapath,
> -                     bool has_local_l3gateway, int depth,
> -                     struct hmap *local_datapaths,
> -                     struct hmap *tracked_datapaths)
> -{
> -    uint32_t dp_key = datapath->tunnel_key;
> -    struct local_datapath *ld = get_local_datapath(local_datapaths, dp_key);
> -    if (ld) {
> -        if (has_local_l3gateway) {
> -            ld->has_local_l3gateway = true;
> -        }
> -        return;
> -    }
> -
> -    ld = xzalloc(sizeof *ld);
> -    hmap_insert(local_datapaths, &ld->hmap_node, dp_key);
> -    ld->datapath = datapath;
> -    ld->localnet_port = NULL;
> -    ld->has_local_l3gateway = has_local_l3gateway;
> -
> -    if (tracked_datapaths) {
> -        struct tracked_binding_datapath *tdp =
> -            tracked_binding_datapath_find(tracked_datapaths, datapath);
> -        if (!tdp) {
> -            tracked_binding_datapath_create(datapath, true, tracked_datapaths);
> -        } else {
> -            /* Its possible that there is already an entry in tracked datapaths
> -             * for this 'datapath'. tracked_binding_datapath_lport_add() may
> -             * have created it. Since the 'datapath' is added to the
> -             * local datapaths, set 'tdp->is_new' to true so that the flows
> -             * for this datapath are programmed properly.
> -             * */
> -            tdp->is_new = true;
> -        }
> -    }
> +struct local_datpath_added_aux {
> +    bool has_local_l3gateway;
> +    struct hmap *tracked_datapaths;
> +};
>   
> -    if (depth >= 100) {
> -        static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
> -        VLOG_WARN_RL(&rl, "datapaths nested too deep");
> -        return;
> +/* This function is called by local_datapath_add() if a new local_datapath
> + * is created. */
> +static void
> +local_datapath_added(struct local_datapath *ld, void *aux)
> +{
> +    struct local_datpath_added_aux *aux_ = aux;
> +    if (aux_->tracked_datapaths) {
> +        tracked_datapath_add(ld->datapath, TRACKED_RESOURCE_NEW,
> +                             aux_->tracked_datapaths);
>       }
>   
> -    struct sbrec_port_binding *target =
> -        sbrec_port_binding_index_init_row(sbrec_port_binding_by_datapath);
> -    sbrec_port_binding_index_set_datapath(target, datapath);
> -
> -    const struct sbrec_port_binding *pb;
> -    SBREC_PORT_BINDING_FOR_EACH_EQUAL (pb, target,
> -                                       sbrec_port_binding_by_datapath) {
> -        if (!strcmp(pb->type, "patch") || !strcmp(pb->type, "l3gateway")) {
> -            const char *peer_name = smap_get(&pb->options, "peer");
> -            if (peer_name) {
> -                const struct sbrec_port_binding *peer;
> -
> -                peer = lport_lookup_by_name(sbrec_port_binding_by_name,
> -                                            peer_name);
> -
> -                if (peer && peer->datapath) {
> -                    if (!strcmp(pb->type, "patch")) {
> -                        /* Add the datapath to local datapath only for patch
> -                         * ports. For l3gateway ports, since gateway router
> -                         * resides on one chassis, we don't need to add.
> -                         * Otherwise, all other chassis might create patch
> -                         * ports between br-int and the provider bridge. */
> -                        add_local_datapath__(sbrec_datapath_binding_by_key,
> -                                             sbrec_port_binding_by_datapath,
> -                                             sbrec_port_binding_by_name,
> -                                             peer->datapath, false,
> -                                             depth + 1, local_datapaths,
> -                                             tracked_datapaths);
> -                    }
> -                    ld->n_peer_ports++;
> -                    if (ld->n_peer_ports > ld->n_allocated_peer_ports) {
> -                        ld->peer_ports =
> -                            x2nrealloc(ld->peer_ports,
> -                                       &ld->n_allocated_peer_ports,
> -                                       sizeof *ld->peer_ports);
> -                    }
> -                    ld->peer_ports[ld->n_peer_ports - 1].local = pb;
> -                    ld->peer_ports[ld->n_peer_ports - 1].remote = peer;
> -                }
> -            }
> -        }
> -    }
> -    sbrec_port_binding_index_destroy_row(target);
> +    ld->has_local_l3gateway = aux_->has_local_l3gateway;
>   }
>   
>   static void
> @@ -185,11 +112,17 @@ add_local_datapath(struct ovsdb_idl_index *sbrec_datapath_binding_by_key,
>                      bool has_local_l3gateway, struct hmap *local_datapaths,
>                      struct hmap *tracked_datapaths)
>   {
> -    add_local_datapath__(sbrec_datapath_binding_by_key,
> -                         sbrec_port_binding_by_datapath,
> -                         sbrec_port_binding_by_name,
> -                         datapath, has_local_l3gateway, 0, local_datapaths,
> -                         tracked_datapaths);
> +    struct local_datpath_added_aux aux = {
> +        .has_local_l3gateway = has_local_l3gateway,
> +        .tracked_datapaths = tracked_datapaths,
> +    };
> +
> +    local_datapath_add(local_datapaths, datapath,
> +                       sbrec_datapath_binding_by_key,
> +                       sbrec_port_binding_by_datapath,
> +                       sbrec_port_binding_by_name,
> +                       local_datapath_added,
> +                       &aux);
>   }
>   
>   static void
> @@ -546,7 +479,8 @@ update_local_lport_ids(const struct sbrec_port_binding *pb,
>   
>           if (b_ctx->tracked_dp_bindings) {
>               /* Add the 'pb' to the tracked_datapaths. */
> -            tracked_binding_datapath_lport_add(pb, b_ctx->tracked_dp_bindings);
> +            tracked_datapath_lport_add(pb, TRACKED_RESOURCE_NEW,
> +                                       b_ctx->tracked_dp_bindings);
>           }
>       }
>   }
> @@ -566,27 +500,11 @@ remove_local_lport_ids(const struct sbrec_port_binding *pb,
>   
>           if (b_ctx->tracked_dp_bindings) {
>               /* Add the 'pb' to the tracked_datapaths. */
> -            tracked_binding_datapath_lport_add(pb, b_ctx->tracked_dp_bindings);
> -        }
> -    }
> -}
> -
> -/* Corresponds to each Port_Binding.type. */
> -enum en_lport_type {
> -    LP_UNKNOWN,
> -    LP_VIF,
> -    LP_CONTAINER,
> -    LP_PATCH,
> -    LP_L3GATEWAY,
> -    LP_LOCALNET,
> -    LP_LOCALPORT,
> -    LP_L2GATEWAY,
> -    LP_VTEP,
> -    LP_CHASSISREDIRECT,
> -    LP_VIRTUAL,
> -    LP_EXTERNAL,
> -    LP_REMOTE
> -};
> +            tracked_datapath_lport_add(pb, TRACKED_RESOURCE_REMOVED,
> +                                       b_ctx->tracked_dp_bindings);
> +        }
> +    }
> +}
>   
>   /* Local bindings. binding.c module binds the logical port (represented by
>    * Port_Binding rows) and sets the 'chassis' column when it sees the
> @@ -865,113 +783,6 @@ binding_dump_local_bindings(struct local_binding_data *lbinding_data,
>       free(nodes);
>   }
>   
> -static bool
> -is_lport_vif(const struct sbrec_port_binding *pb)
> -{
> -    return !pb->type[0];
> -}
> -
> -static struct tracked_binding_datapath *
> -tracked_binding_datapath_create(const struct sbrec_datapath_binding *dp,
> -                                bool is_new,
> -                                struct hmap *tracked_datapaths)
> -{
> -    struct tracked_binding_datapath *t_dp = xzalloc(sizeof *t_dp);
> -    t_dp->dp = dp;
> -    t_dp->is_new = is_new;
> -    shash_init(&t_dp->lports);
> -    hmap_insert(tracked_datapaths, &t_dp->node, uuid_hash(&dp->header_.uuid));
> -    return t_dp;
> -}
> -
> -static struct tracked_binding_datapath *
> -tracked_binding_datapath_find(struct hmap *tracked_datapaths,
> -                              const struct sbrec_datapath_binding *dp)
> -{
> -    struct tracked_binding_datapath *t_dp;
> -    size_t hash = uuid_hash(&dp->header_.uuid);
> -    HMAP_FOR_EACH_WITH_HASH (t_dp, node, hash, tracked_datapaths) {
> -        if (uuid_equals(&t_dp->dp->header_.uuid, &dp->header_.uuid)) {
> -            return t_dp;
> -        }
> -    }
> -
> -    return NULL;
> -}
> -
> -static void
> -tracked_binding_datapath_lport_add(const struct sbrec_port_binding *pb,
> -                                   struct hmap *tracked_datapaths)
> -{
> -    if (!tracked_datapaths) {
> -        return;
> -    }
> -
> -    struct tracked_binding_datapath *tracked_dp =
> -        tracked_binding_datapath_find(tracked_datapaths, pb->datapath);
> -    if (!tracked_dp) {
> -        tracked_dp = tracked_binding_datapath_create(pb->datapath, false,
> -                                                     tracked_datapaths);
> -    }
> -
> -    /* Check if the lport is already present or not.
> -     * If it is already present, then just update the 'pb' field. */
> -    struct tracked_binding_lport *lport =
> -        shash_find_data(&tracked_dp->lports, pb->logical_port);
> -
> -    if (!lport) {
> -        lport = xmalloc(sizeof *lport);
> -        shash_add(&tracked_dp->lports, pb->logical_port, lport);
> -    }
> -
> -    lport->pb = pb;
> -}
> -
> -void
> -binding_tracked_dp_destroy(struct hmap *tracked_datapaths)
> -{
> -    struct tracked_binding_datapath *t_dp;
> -    HMAP_FOR_EACH_POP (t_dp, node, tracked_datapaths) {
> -        shash_destroy_free_data(&t_dp->lports);
> -        free(t_dp);
> -    }
> -
> -    hmap_destroy(tracked_datapaths);
> -}
> -
> -static enum en_lport_type
> -get_lport_type(const struct sbrec_port_binding *pb)
> -{
> -    if (is_lport_vif(pb)) {
> -        if (pb->parent_port && pb->parent_port[0]) {
> -            return LP_CONTAINER;
> -        }
> -        return LP_VIF;
> -    } else if (!strcmp(pb->type, "patch")) {
> -        return LP_PATCH;
> -    } else if (!strcmp(pb->type, "chassisredirect")) {
> -        return LP_CHASSISREDIRECT;
> -    } else if (!strcmp(pb->type, "l3gateway")) {
> -        return LP_L3GATEWAY;
> -    } else if (!strcmp(pb->type, "localnet")) {
> -        return LP_LOCALNET;
> -    } else if (!strcmp(pb->type, "localport")) {
> -        return LP_LOCALPORT;
> -    } else if (!strcmp(pb->type, "l2gateway")) {
> -        return LP_L2GATEWAY;
> -    } else if (!strcmp(pb->type, "virtual")) {
> -        return LP_VIRTUAL;
> -    } else if (!strcmp(pb->type, "external")) {
> -        return LP_EXTERNAL;
> -    } else if (!strcmp(pb->type, "remote")) {
> -        return LP_REMOTE;
> -    } else if (!strcmp(pb->type, "vtep")) {
> -        return LP_VTEP;
> -    }
> -
> -    return LP_UNKNOWN;
> -}
> -
>   static char *
>   get_lport_type_str(enum en_lport_type lport_type)
>   {
> @@ -1797,61 +1608,17 @@ add_local_datapath_peer_port(const struct sbrec_port_binding *pb,
>                                struct binding_ctx_out *b_ctx_out,
>                                struct local_datapath *ld)
>   {
> -    const struct sbrec_port_binding *peer;
> -    peer = get_peer_lport(pb, b_ctx_in);
> -
> -    if (!peer) {
> -        return;
> -    }
> -
> -    bool present = false;
> -    for (size_t i = 0; i < ld->n_peer_ports; i++) {
> -        if (ld->peer_ports[i].local == pb) {
> -            present = true;
> -            break;
> -        }
> -    }
> -
> -    if (!present) {
> -        ld->n_peer_ports++;
> -        if (ld->n_peer_ports > ld->n_allocated_peer_ports) {
> -            ld->peer_ports =
> -                x2nrealloc(ld->peer_ports,
> -                           &ld->n_allocated_peer_ports,
> -                           sizeof *ld->peer_ports);
> -        }
> -        ld->peer_ports[ld->n_peer_ports - 1].local = pb;
> -        ld->peer_ports[ld->n_peer_ports - 1].remote = peer;
> -    }
> -
> -    struct local_datapath *peer_ld =
> -        get_local_datapath(b_ctx_out->local_datapaths,
> -                           peer->datapath->tunnel_key);
> -    if (!peer_ld) {
> -        add_local_datapath__(b_ctx_in->sbrec_datapath_binding_by_key,
> -                             b_ctx_in->sbrec_port_binding_by_datapath,
> -                             b_ctx_in->sbrec_port_binding_by_name,
> -                             peer->datapath, false,
> -                             1, b_ctx_out->local_datapaths,
> -                             b_ctx_out->tracked_dp_bindings);
> -        return;
> -    }
> -
> -    for (size_t i = 0; i < peer_ld->n_peer_ports; i++) {
> -        if (peer_ld->peer_ports[i].local == peer) {
> -            return;
> -        }
> -    }
> +    struct local_datpath_added_aux aux = {
> +        .has_local_l3gateway = false,
> +        .tracked_datapaths = b_ctx_out->tracked_dp_bindings,
> +    };
>   
> -    peer_ld->n_peer_ports++;
> -    if (peer_ld->n_peer_ports > peer_ld->n_allocated_peer_ports) {
> -        peer_ld->peer_ports =
> -            x2nrealloc(peer_ld->peer_ports,
> -                        &peer_ld->n_allocated_peer_ports,
> -                        sizeof *peer_ld->peer_ports);
> -    }
> -    peer_ld->peer_ports[peer_ld->n_peer_ports - 1].local = peer;
> -    peer_ld->peer_ports[peer_ld->n_peer_ports - 1].remote = pb;
> +    local_datapath_add_or_update_peer_port(
> +        pb, b_ctx_in->sbrec_datapath_binding_by_key,
> +        b_ctx_in->sbrec_port_binding_by_datapath,
> +        b_ctx_in->sbrec_port_binding_by_name,
> +        ld, b_ctx_out->local_datapaths,
> +        local_datapath_added, &aux);
>   }
>   
>   static void
> @@ -1859,34 +1626,7 @@ remove_local_datapath_peer_port(const struct sbrec_port_binding *pb,
>                                   struct local_datapath *ld,
>                                   struct hmap *local_datapaths)
>   {
> -    size_t i = 0;
> -    for (i = 0; i < ld->n_peer_ports; i++) {
> -        if (ld->peer_ports[i].local == pb) {
> -            break;
> -        }
> -    }
> -
> -    if (i == ld->n_peer_ports) {
> -        return;
> -    }
> -
> -    const struct sbrec_port_binding *peer = ld->peer_ports[i].remote;
> -
> -    /* Possible improvement: We can shrink the allocated peer ports
> -     * if (ld->n_peer_ports < ld->n_allocated_peer_ports / 2).
> -     */
> -    ld->peer_ports[i].local = ld->peer_ports[ld->n_peer_ports - 1].local;
> -    ld->peer_ports[i].remote = ld->peer_ports[ld->n_peer_ports - 1].remote;
> -    ld->n_peer_ports--;
> -
> -    struct local_datapath *peer_ld =
> -        get_local_datapath(local_datapaths, peer->datapath->tunnel_key);
> -    if (peer_ld) {
> -        /* Remove the peer port from the peer datapath. The peer
> -         * datapath also tries to remove its peer lport, but that would
> -         * be no-op. */
> -        remove_local_datapath_peer_port(peer, peer_ld, local_datapaths);
> -    }
> +   local_datapath_remove_peer_port(pb, ld, local_datapaths);
>   }
>   
>   static void
> @@ -1923,7 +1663,8 @@ update_lport_tracking(const struct sbrec_port_binding *pb,
>           return;
>       }
>   
> -    tracked_binding_datapath_lport_add(pb, tracked_dp_bindings);
> +    tracked_datapath_lport_add(pb, TRACKED_RESOURCE_NEW,
> +                               tracked_dp_bindings);
>   }
>   
>   /* Considers the ovs iface 'iface_rec' for claiming.
> @@ -2491,6 +2232,10 @@ delete_done:
>               get_local_datapath(b_ctx_out->local_datapaths,
>                                  pb->datapath->tunnel_key);
>   
> +        if (ld) {
> +            local_datapath_add_lport(ld, pb->logical_port, pb);
> +        }
> +
>           switch (lport_type) {
>           case LP_VIF:
>           case LP_CONTAINER:
> diff --git a/controller/binding.h b/controller/binding.h
> index 8f32894769..e88d335374 100644
> --- a/controller/binding.h
> +++ b/controller/binding.h
> @@ -107,19 +107,6 @@ void local_binding_set_up(struct shash *local_bindings, const char *pb_name,
>   void local_binding_set_down(struct shash *local_bindings, const char *pb_name,
>                               bool sb_readonly, bool ovs_readonly);
>   
> -/* Represents a tracked binding logical port. */
> -struct tracked_binding_lport {
> -    const struct sbrec_port_binding *pb;
> -};
> -
> -/* Represent a tracked binding datapath. */
> -struct tracked_binding_datapath {
> -    struct hmap_node node;
> -    const struct sbrec_datapath_binding *dp;
> -    bool is_new;
> -    struct shash lports; /* shash of struct tracked_binding_lport. */
> -};
> -
>   void binding_register_ovs_idl(struct ovsdb_idl *);
>   void binding_run(struct binding_ctx_in *, struct binding_ctx_out *);
>   bool binding_cleanup(struct ovsdb_idl_txn *ovnsb_idl_txn,
> diff --git a/controller/lflow-generate.c b/controller/lflow-generate.c
> new file mode 100644
> index 0000000000..77d81ea330
> --- /dev/null
> +++ b/controller/lflow-generate.c
> @@ -0,0 +1,179 @@
> +/*
> + * Copyright (c) 2021 Red Hat, Inc.
> + *
> + * Licensed under the Apache License, Version 2.0 (the "License");
> + * you may not use this file except in compliance with the License.
> + * You may obtain a copy of the License at:
> + *
> + *     http://www.apache.org/licenses/LICENSE-2.0
> + *
> + * Unless required by applicable law or agreed to in writing, software
> + * distributed under the License is distributed on an "AS IS" BASIS,
> + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> + * See the License for the specific language governing permissions and
> + * limitations under the License.
> + */
> +
> +#include <config.h>
> +
> +/* OVS includes. */
> +#include "lib/hmapx.h"
> +#include "lib/util.h"
> +#include "openvswitch/vlog.h"
> +
> +/* OVN includes. */
> +#include "ldata.h"
> +#include "lflow-generate.h"
> +#include "lib/lflow.h"
> +#include "lib/lb.h"
> +#include "lib/ovn-sb-idl.h"
> +#include "lib/ovn-util.h"
> +
> +VLOG_DEFINE_THIS_MODULE(lflow_gen);
> +
> +static void generate_lflows_for_lport__(struct local_lport *dp_lport);
> +
> +void
> +lflow_generate_run(struct hmap *local_datapaths, struct hmap *local_lbs)
> +{
> +    struct local_datapath *ldp;
> +    HMAP_FOR_EACH (ldp, hmap_node, local_datapaths) {
> +        ovn_ctrl_lflows_build_dp_lflows(ldp->active_lflows, ldp);
> +
> +        struct shash_node *node;
> +        SHASH_FOR_EACH (node, &ldp->lports) {
> +            generate_lflows_for_lport__(node->data);
> +        }
> +    }
> +
> +    struct local_load_balancer *local_lb;
> +    HMAP_FOR_EACH (local_lb, hmap_node, local_lbs) {
> +        lflow_generate_load_balancer_lflows(local_lb);
> +    }
> +}
> +
> +void
> +lflow_generate_datapath_flows(struct local_datapath *ldp,
> +                              bool build_lport_flows)
> +{
> +    local_datapath_switch_lflow_map(ldp);
> +    ovn_ctrl_lflows_build_dp_lflows(ldp->active_lflows, ldp);
> +
> +    if (build_lport_flows) {
> +        struct shash_node *node;
> +        SHASH_FOR_EACH (node, &ldp->lports) {
> +            generate_lflows_for_lport__(node->data);
> +        }
> +    }
> +}
> +
> +void
> +lflow_generate_lport_flows(const struct sbrec_port_binding *pb,
> +                           struct local_datapath *ldp)
> +{
> +    struct local_lport *lport =
> +        local_datapath_get_lport(ldp, pb->logical_port);
> +    if (lport) {
> +        generate_lflows_for_lport__(lport);
> +    } else {
> +        lport = local_datapath_add_lport(ldp, pb->logical_port, pb);
> +        local_lport_update_cache(lport);
> +        ovn_ctrl_build_lport_lflows(lport->active_lflows, lport);
> +    }
> +}
> +
> +void
> +lflow_delete_generated_lport_lflows(const struct sbrec_port_binding *pb,
> +                                    struct local_datapath *ldp)
> +{
> +    struct local_lport *lport =
> +        local_datapath_get_lport(ldp, pb->logical_port);
> +    if (lport) {
> +        local_lport_switch_lflow_map(lport);
> +    }
> +}
> +
> +void
> +lflow_delete_generated_lflows(struct hmap *local_datapaths,
> +                              struct hmap *local_lbs)
> +{
> +    struct local_datapath *ldp;
> +    HMAP_FOR_EACH (ldp, hmap_node, local_datapaths) {
> +        ovn_ctrl_lflows_clear(&ldp->ctrl_lflows[0]);
> +        ovn_ctrl_lflows_clear(&ldp->ctrl_lflows[1]);
> +
> +        struct local_lport *lport;
> +        struct shash_node *node;
> +        SHASH_FOR_EACH (node, &ldp->lports) {
> +            lport = node->data;
> +            ovn_ctrl_lflows_clear(&lport->ctrl_lflows[0]);
> +            ovn_ctrl_lflows_clear(&lport->ctrl_lflows[1]);
> +        }
> +    }
> +
> +    struct local_load_balancer *local_lb;
> +    HMAP_FOR_EACH (local_lb, hmap_node, local_lbs) {
> +        ovn_ctrl_lflows_clear(&local_lb->lswitch_lflows[0]);
> +        ovn_ctrl_lflows_clear(&local_lb->lswitch_lflows[1]);
> +        ovn_ctrl_lflows_clear(&local_lb->lrouter_lflows[0]);
> +        ovn_ctrl_lflows_clear(&local_lb->lrouter_lflows[1]);
> +    }
> +}
> +
> +
> +/* Returns true if the local datapath 'ldp' needs logical flow
> + * generation.  False otherwise.
> + */
> +bool
> +lflow_datapath_needs_generation(struct local_datapath *ldp)
> +{
> +    ovs_assert(ldp->datapath);
> +
> +    /* Right now we check if the datapath options have changed
> +     * from the locally stored value. */
> +    return !smap_equal(&ldp->dp_options, &ldp->datapath->options);
> +}
> +
> +bool
> +lflow_lport_needs_generation(struct local_datapath *ldp,
> +                             const struct sbrec_port_binding *pb)
> +{
> +    struct local_lport *dp_lport = local_datapath_get_lport(
> +        ldp, pb->logical_port);
> +
> +    if (!dp_lport) {
> +        return true;
> +    }
> +
> +    return local_lport_update_cache(dp_lport);
> +}
> +
> +void
> +lflow_generate_load_balancer_lflows(struct local_load_balancer *local_lb)
> +{
> +    ovn_ctrl_build_lb_lflows(local_lb->active_lswitch_lflows,
> +                             local_lb->active_lrouter_lflows,
> +                             local_lb->ovn_lb);
> +}
> +
> +bool
> +lflow_load_balancer_needs_gen(struct local_load_balancer *local_lb)
> +{
> +    local_load_balancer_update(local_lb);
> +    return true;
> +}
> +
> +
> +void
> +lflow_clear_generated_lb_lflows(struct local_load_balancer *local_lb)
> +{
> +    local_load_balancer_switch_lflow_map(local_lb);
> +}
> +
> +static void
> +generate_lflows_for_lport__(struct local_lport *dp_lport)
> +{
> +    local_lport_switch_lflow_map(dp_lport);
> +    local_lport_update_cache(dp_lport);
> +    ovn_ctrl_build_lport_lflows(dp_lport->active_lflows, dp_lport);
> +}
> diff --git a/controller/lflow-generate.h b/controller/lflow-generate.h
> new file mode 100644
> index 0000000000..baaecbf854
> --- /dev/null
> +++ b/controller/lflow-generate.h
> @@ -0,0 +1,49 @@
> +/*
> + * Copyright (c) 2021 Red Hat, Inc.
> + *
> + * Licensed under the Apache License, Version 2.0 (the "License");
> + * you may not use this file except in compliance with the License.
> + * You may obtain a copy of the License at:
> + *
> + *     http://www.apache.org/licenses/LICENSE-2.0
> + *
> + * Unless required by applicable law or agreed to in writing, software
> + * distributed under the License is distributed on an "AS IS" BASIS,
> + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> + * See the License for the specific language governing permissions and
> + * limitations under the License.
> + */
> +
> +#ifndef OVN_LFLOW_GENERATE_H
> +#define OVN_LFLOW_GENERATE_H 1
> +
> +struct hmap;
> +struct sbrec_port_binding_table;
> +struct sbrec_datapath_binding_table;
> +struct sbrec_port_binding;
> +
> +void lflow_generate_run(struct hmap *local_datapaths, struct hmap *local_lbs);
> +void lflow_generate_datapath_flows(struct local_datapath *ldp,
> +                                   bool build_lport_flows);
> +void lflow_generate_lport_flows(const struct sbrec_port_binding *pb,
> +                                struct local_datapath *ldp);
> +
> +void lflow_delete_generated_lport_lflows(const struct sbrec_port_binding *,
> +                                         struct local_datapath *);
> +
> +void lflow_delete_generated_lflows(struct hmap *local_datapaths,
> +                                   struct hmap *local_lbs);
> +
> +bool lflow_datapath_needs_generation(struct local_datapath *ldp);
> +bool lflow_lport_needs_generation(struct local_datapath *ldp,
> +                                  const struct sbrec_port_binding *);
> +
> +void lflow_delete_generated_lport_lflows(const struct sbrec_port_binding *,
> +                                         struct local_datapath *);
> +
> +void lflow_generate_load_balancer_lflows(struct local_load_balancer *local_lb);
> +bool lflow_load_balancer_needs_gen(struct local_load_balancer *local_lb);
> +void lflow_clear_generated_lb_lflows(struct local_load_balancer *local_lb);
> +
> +
> +#endif /* controller/lflow-generate.h */
> diff --git a/controller/lflow.c b/controller/lflow.c
> index 34eca135ae..ff511a11f5 100644
> --- a/controller/lflow.c
> +++ b/controller/lflow.c
> @@ -14,27 +14,33 @@
>    */
>   
>   #include <config.h>
> -#include "lflow.h"
> +
> +/* OVS includes. */
>   #include "coverage.h"
> -#include "ha-chassis.h"
> -#include "lflow-cache.h"
> -#include "lport.h"
> -#include "ofctrl.h"
>   #include "openvswitch/dynamic-string.h"
>   #include "openvswitch/ofp-actions.h"
>   #include "openvswitch/ofpbuf.h"
>   #include "openvswitch/vlog.h"
> -#include "ovn-controller.h"
> -#include "ovn/actions.h"
> -#include "ovn/expr.h"
> +#include "lib/ovn-sb-idl.h"
> +#include "lib/packets.h"
> +#include "lib/simap.h"
> +#include "lib/sset.h"
> +
> +/* OVN includes. */
> +#include "include/ovn/actions.h"
> +#include "include/ovn/expr.h"
> +#include "ha-chassis.h"
> +#include "ldata.h"
> +#include "lflow.h"
> +#include "lflow-cache.h"
>   #include "lib/lb.h"
> +#include "lib/lflow.h"
>   #include "lib/ovn-l7.h"
> -#include "lib/ovn-sb-idl.h"
>   #include "lib/extend-table.h"
> -#include "packets.h"
> +#include "lport.h"
> +#include "ofctrl.h"
> +#include "ovn-controller.h"
>   #include "physical.h"
> -#include "simap.h"
> -#include "sset.h"
>   
>   VLOG_DEFINE_THIS_MODULE(lflow);
>   
> @@ -55,7 +61,7 @@ struct lookup_port_aux {
>       struct ovsdb_idl_index *sbrec_multicast_group_by_name_datapath;
>       struct ovsdb_idl_index *sbrec_port_binding_by_name;
>       const struct sbrec_datapath_binding *dp;
> -    const struct sbrec_logical_flow *lflow;
> +    const struct uuid *lflow_uuid;
>       struct lflow_resource_ref *lfrr;
>   };
>   
> @@ -63,19 +69,19 @@ struct condition_aux {
>       struct ovsdb_idl_index *sbrec_port_binding_by_name;
>       const struct sbrec_chassis *chassis;
>       const struct sset *active_tunnels;
> -    const struct sbrec_logical_flow *lflow;
> +    const struct uuid *lflow_uuid;
>       /* Resource reference to store the port name referenced
>        * in is_chassis_resident() to the logical flow. */
>       struct lflow_resource_ref *lfrr;
>   };
>   
>   static bool
> -consider_logical_flow(const struct sbrec_logical_flow *lflow,
> -                      struct hmap *dhcp_opts, struct hmap *dhcpv6_opts,
> -                      struct hmap *nd_ra_opts,
> -                      struct controller_event_options *controller_event_opts,
> -                      struct lflow_ctx_in *l_ctx_in,
> -                      struct lflow_ctx_out *l_ctx_out);
> +consider_sb_logical_flow(const struct sbrec_logical_flow *,
> +                         struct hmap *dhcp_opts, struct hmap *dhcpv6_opts,
> +                         struct hmap *nd_ra_opts,
> +                         struct controller_event_options *,
> +                         struct lflow_ctx_in *,
> +                        struct lflow_ctx_out *);
>   static void lflow_resource_add(struct lflow_resource_ref *, enum ref_type,
>                                  const char *ref_name, const struct uuid *);
>   static struct ref_lflow_node *ref_lflow_lookup(struct hmap *ref_lflow_table,
> @@ -98,6 +104,10 @@ lookup_port_cb(const void *aux_, const char *port_name, unsigned int *portp)
>   
>       const struct lookup_port_aux *aux = aux_;
>   
> +    if (!aux) {
> +        return false;
> +    }
> +
>       const struct sbrec_port_binding *pb
>           = lport_lookup_by_name(aux->sbrec_port_binding_by_name, port_name);
>       if (pb && pb->datapath == aux->dp) {
> @@ -105,15 +115,17 @@ lookup_port_cb(const void *aux_, const char *port_name, unsigned int *portp)
>           return true;
>       }
>   
> -    /* Store the key (DP + name) that used to lookup the multicast group to
> -     * lflow reference, so that in the future when the multicast group's
> -     * existance (found/not found) changes, the logical flow that references
> -     * this multicast group can be reprocessed. */
> -    struct ds mg_key = DS_EMPTY_INITIALIZER;
> -    get_mc_group_key(port_name, aux->dp->tunnel_key, &mg_key);
> -    lflow_resource_add(aux->lfrr, REF_TYPE_MC_GROUP, ds_cstr(&mg_key),
> -                       &aux->lflow->header_.uuid);
> -    ds_destroy(&mg_key);
> +    if (aux->lfrr) {
> +        /* Store the key (DP + name) that used to lookup the multicast group to
> +        * lflow reference, so that in the future when the multicast group's
> +        * existance (found/not found) changes, the logical flow that references
> +        * this multicast group can be reprocessed. */
> +        struct ds mg_key = DS_EMPTY_INITIALIZER;
> +        get_mc_group_key(port_name, aux->dp->tunnel_key, &mg_key);
> +        lflow_resource_add(aux->lfrr, REF_TYPE_MC_GROUP, ds_cstr(&mg_key),
> +                           aux->lflow_uuid);
> +        ds_destroy(&mg_key);
> +    }
>   
>       const struct sbrec_multicast_group *mg = mcgroup_lookup_by_dp_name(
>           aux->sbrec_multicast_group_by_name_datapath, aux->dp, port_name);
> @@ -131,6 +143,10 @@ tunnel_ofport_cb(const void *aux_, const char *port_name, ofp_port_t *ofport)
>   {
>       const struct lookup_port_aux *aux = aux_;
>   
> +    if (!aux) {
> +        return false;
> +    }
> +
>       const struct sbrec_port_binding *pb
>           = lport_lookup_by_name(aux->sbrec_port_binding_by_name, port_name);
>       if (!pb || (pb->datapath != aux->dp) || !pb->chassis) {
> @@ -155,12 +171,14 @@ is_chassis_resident_cb(const void *c_aux_, const char *port_name)
>           return false;
>       }
>   
> -    /* Store the port_name to lflow reference. */
> -    int64_t dp_id = pb->datapath->tunnel_key;
> -    char buf[16];
> -    get_unique_lport_key(dp_id, pb->tunnel_key, buf, sizeof(buf));
> -    lflow_resource_add(c_aux->lfrr, REF_TYPE_PORTBINDING, buf,
> -                       &c_aux->lflow->header_.uuid);
> +    if (c_aux->lfrr) {
> +        /* Store the port_name to lflow reference. */
> +        int64_t dp_id = pb->datapath->tunnel_key;
> +        char buf[16];
> +        get_unique_lport_key(dp_id, pb->tunnel_key, buf, sizeof(buf));
> +        lflow_resource_add(c_aux->lfrr, REF_TYPE_PORTBINDING, buf,
> +                           c_aux->lflow_uuid);
> +    }
>   
>       if (strcmp(pb->type, "chassisredirect")) {
>           /* for non-chassisredirect ports */
> @@ -355,9 +373,9 @@ add_logical_flows(struct lflow_ctx_in *l_ctx_in,
>       controller_event_opts_init(&controller_event_opts);
>   
>       SBREC_LOGICAL_FLOW_TABLE_FOR_EACH (lflow, l_ctx_in->logical_flow_table) {
> -        if (!consider_logical_flow(lflow, &dhcp_opts, &dhcpv6_opts,
> -                                   &nd_ra_opts, &controller_event_opts,
> -                                   l_ctx_in, l_ctx_out)) {
> +        if (!consider_sb_logical_flow(lflow, &dhcp_opts, &dhcpv6_opts,
> +                                      &nd_ra_opts, &controller_event_opts,
> +                                      l_ctx_in, l_ctx_out)) {
>               static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 5);
>               VLOG_ERR_RL(&rl, "Conjunction id overflow when processing lflow "
>                           UUID_FMT, UUID_ARGS(&lflow->header_.uuid));
> @@ -430,9 +448,9 @@ lflow_handle_changed_flows(struct lflow_ctx_in *l_ctx_in,
>           if (lflow) {
>               VLOG_DBG("re-add lflow "UUID_FMT,
>                        UUID_ARGS(&lflow->header_.uuid));
> -            if (!consider_logical_flow(lflow, &dhcp_opts, &dhcpv6_opts,
> -                                       &nd_ra_opts, &controller_event_opts,
> -                                       l_ctx_in, l_ctx_out)) {
> +            if (!consider_sb_logical_flow(lflow, &dhcp_opts, &dhcpv6_opts,
> +                                          &nd_ra_opts, &controller_event_opts,
> +                                          l_ctx_in, l_ctx_out)) {
>                   ret = false;
>                   break;
>               }
> @@ -530,9 +548,9 @@ lflow_handle_changed_ref(enum ref_type ref_type, const char *ref_name,
>               continue;
>           }
>   
> -        if (!consider_logical_flow(lflow, &dhcp_opts, &dhcpv6_opts,
> -                                   &nd_ra_opts, &controller_event_opts,
> -                                   l_ctx_in, l_ctx_out)) {
> +        if (!consider_sb_logical_flow(lflow, &dhcp_opts, &dhcpv6_opts,
> +                                      &nd_ra_opts, &controller_event_opts,
> +                                      l_ctx_in, l_ctx_out)) {
>               ret = false;
>               l_ctx_out->conj_id_overflow = true;
>               break;
> @@ -570,48 +588,15 @@ update_conj_id_ofs(uint32_t *conj_id_ofs, uint32_t n_conjs)
>   }
>   
>   static void
> -add_matches_to_flow_table(const struct sbrec_logical_flow *lflow,
> -                          const struct sbrec_datapath_binding *dp,
> -                          struct hmap *matches, uint8_t ptable,
> -                          uint8_t output_ptable, struct ofpbuf *ovnacts,
> -                          bool ingress, struct lflow_ctx_in *l_ctx_in,
> -                          struct lflow_ctx_out *l_ctx_out)
> +add_matches_to_flow_table__(struct hmap *matches, uint8_t ptable, bool ingress,
> +                            const struct sbrec_datapath_binding *dp,
> +                            const struct uuid *lflow_uuid,
> +                            uint16_t lflow_priority,
> +                            const struct sset *local_lport_ids,
> +                            struct ofpbuf *ofpacts,
> +                            struct lflow_resource_ref *lfrr,
> +                            struct ovn_desired_flow_table *flow_table)
>   {
> -    struct lookup_port_aux aux = {
> -        .sbrec_multicast_group_by_name_datapath
> -            = l_ctx_in->sbrec_multicast_group_by_name_datapath,
> -        .sbrec_port_binding_by_name = l_ctx_in->sbrec_port_binding_by_name,
> -        .dp = dp,
> -        .lflow = lflow,
> -        .lfrr = l_ctx_out->lfrr,
> -    };
> -
> -    /* Encode OVN logical actions into OpenFlow. */
> -    uint64_t ofpacts_stub[1024 / 8];
> -    struct ofpbuf ofpacts = OFPBUF_STUB_INITIALIZER(ofpacts_stub);
> -    struct ovnact_encode_params ep = {
> -        .lookup_port = lookup_port_cb,
> -        .tunnel_ofport = tunnel_ofport_cb,
> -        .aux = &aux,
> -        .is_switch = datapath_is_switch(dp),
> -        .group_table = l_ctx_out->group_table,
> -        .meter_table = l_ctx_out->meter_table,
> -        .lflow_uuid = lflow->header_.uuid,
> -
> -        .pipeline = ingress ? OVNACT_P_INGRESS : OVNACT_P_EGRESS,
> -        .ingress_ptable = OFTABLE_LOG_INGRESS_PIPELINE,
> -        .egress_ptable = OFTABLE_LOG_EGRESS_PIPELINE,
> -        .output_ptable = output_ptable,
> -        .mac_bind_ptable = OFTABLE_MAC_BINDING,
> -        .mac_lookup_ptable = OFTABLE_MAC_LOOKUP,
> -        .lb_hairpin_ptable = OFTABLE_CHK_LB_HAIRPIN,
> -        .lb_hairpin_reply_ptable = OFTABLE_CHK_LB_HAIRPIN_REPLY,
> -        .ct_snat_vip_ptable = OFTABLE_CT_SNAT_FOR_VIP,
> -        .fdb_ptable = OFTABLE_GET_FDB,
> -        .fdb_lookup_ptable = OFTABLE_LOOKUP_FDB,
> -    };
> -    ovnacts_encode(ovnacts->data, ovnacts->size, &ep, &ofpacts);
> -
>       struct expr_match *m;
>       HMAP_FOR_EACH (m, hmap_node, matches) {
>           match_set_metadata(&m->match, htonll(dp->tunnel_key));
> @@ -623,21 +608,23 @@ add_matches_to_flow_table(const struct sbrec_logical_flow *lflow,
>                   int64_t dp_id = dp->tunnel_key;
>                   char buf[16];
>                   get_unique_lport_key(dp_id, port_id, buf, sizeof(buf));
> -                lflow_resource_add(l_ctx_out->lfrr, REF_TYPE_PORTBINDING, buf,
> -                                   &lflow->header_.uuid);
> -                if (!sset_contains(l_ctx_in->local_lport_ids, buf)) {
> +                if (lfrr) {
> +                    lflow_resource_add(lfrr, REF_TYPE_PORTBINDING, buf,
> +                                       lflow_uuid);
> +                }
> +                if (!sset_contains(local_lport_ids, buf)) {
>                       VLOG_DBG("lflow "UUID_FMT
>                                " port %s in match is not local, skip",
> -                             UUID_ARGS(&lflow->header_.uuid),
> +                             UUID_ARGS(lflow_uuid),
>                                buf);
>                       continue;
>                   }
>               }
>           }
>           if (!m->n) {
> -            ofctrl_add_flow(l_ctx_out->flow_table, ptable, lflow->priority,
> -                            lflow->header_.uuid.parts[0], &m->match, &ofpacts,
> -                            &lflow->header_.uuid);
> +            ofctrl_add_flow(flow_table, ptable, lflow_priority,
> +                            lflow_uuid->parts[0], &m->match, ofpacts,
> +                            lflow_uuid);
>           } else {
>               uint64_t conj_stubs[64 / 8];
>               struct ofpbuf conj;
> @@ -653,13 +640,60 @@ add_matches_to_flow_table(const struct sbrec_logical_flow *lflow,
>                   dst->n_clauses = src->n_clauses;
>               }
>   
> -            ofctrl_add_or_append_flow(l_ctx_out->flow_table, ptable,
> -                                      lflow->priority, 0,
> -                                      &m->match, &conj, &lflow->header_.uuid);
> +            ofctrl_add_or_append_flow(flow_table, ptable,
> +                                      lflow_priority, 0,
> +                                      &m->match, &conj, lflow_uuid);
>               ofpbuf_uninit(&conj);
>           }
>       }
> +}
>   
> +static void
> +add_matches_to_flow_table(const struct ovn_ctrl_lflow *lflow,
> +                          const struct sbrec_datapath_binding *dp,
> +                          struct hmap *matches,
> +                          uint8_t ptable, uint8_t output_ptable,
> +                          struct ofpbuf *ovnacts,
> +                          bool ingress, struct lflow_ctx_in *l_ctx_in,
> +                          struct lflow_ctx_out *l_ctx_out)
> +{
> +    struct lookup_port_aux aux = {
> +        .sbrec_multicast_group_by_name_datapath
> +            = l_ctx_in->sbrec_multicast_group_by_name_datapath,
> +        .sbrec_port_binding_by_name = l_ctx_in->sbrec_port_binding_by_name,
> +        .dp = dp,
> +    };
> +
> +    /* Encode OVN logical actions into OpenFlow. */
> +    uint64_t ofpacts_stub[1024 / 8];
> +    struct ofpbuf ofpacts = OFPBUF_STUB_INITIALIZER(ofpacts_stub);
> +    struct ovnact_encode_params ep = {
> +        .lookup_port = lookup_port_cb,
> +        .tunnel_ofport = tunnel_ofport_cb,
> +        .aux = &aux,
> +        .is_switch = datapath_is_switch(dp),
> +        .group_table = l_ctx_out->group_table,
> +        .meter_table = l_ctx_out->meter_table,
> +        .lflow_uuid = lflow->uuid_,
> +
> +        .pipeline = ingress ? OVNACT_P_INGRESS : OVNACT_P_EGRESS,
> +        .ingress_ptable = OFTABLE_LOG_INGRESS_PIPELINE,
> +        .egress_ptable = OFTABLE_LOG_EGRESS_PIPELINE,
> +        .output_ptable = output_ptable,
> +        .mac_bind_ptable = OFTABLE_MAC_BINDING,
> +        .mac_lookup_ptable = OFTABLE_MAC_LOOKUP,
> +        .lb_hairpin_ptable = OFTABLE_CHK_LB_HAIRPIN,
> +        .lb_hairpin_reply_ptable = OFTABLE_CHK_LB_HAIRPIN_REPLY,
> +        .ct_snat_vip_ptable = OFTABLE_CT_SNAT_FOR_VIP,
> +        .fdb_ptable = OFTABLE_GET_FDB,
> +        .fdb_lookup_ptable = OFTABLE_LOOKUP_FDB,
> +    };
> +    ovnacts_encode(ovnacts->data, ovnacts->size, &ep, &ofpacts);
> +
> +    add_matches_to_flow_table__(matches, ptable, ingress, dp,
> +                                &lflow->uuid_, lflow->priority,
> +                                l_ctx_in->local_lport_ids, &ofpacts,
> +                                l_ctx_out->lfrr, l_ctx_out->flow_table);
>       ofpbuf_uninit(&ofpacts);
>   }
>   
> @@ -669,8 +703,9 @@ add_matches_to_flow_table(const struct sbrec_logical_flow *lflow,
>    * If parsing is successful, '*prereqs' is also consumed.
>    */
>   static struct expr *
> -convert_match_to_expr(const struct sbrec_logical_flow *lflow,
> -                      const struct sbrec_datapath_binding *dp,
> +convert_match_to_expr(char *lflow_match,
> +                      const struct uuid *lflow_uuid,
> +                      int64_t dp_id,
>                         struct expr **prereqs,
>                         const struct shash *addr_sets,
>                         const struct shash *port_groups,
> @@ -681,19 +716,22 @@ convert_match_to_expr(const struct sbrec_logical_flow *lflow,
>       struct sset port_groups_ref = SSET_INITIALIZER(&port_groups_ref);
>       char *error = NULL;
>   
> -    struct expr *e = expr_parse_string(lflow->match, &symtab, addr_sets,
> +    struct expr *e = expr_parse_string(lflow_match, &symtab, addr_sets,
>                                          port_groups, &addr_sets_ref,
> -                                       &port_groups_ref, dp->tunnel_key,
> +                                       &port_groups_ref, dp_id,
>                                          &error);
> -    const char *addr_set_name;
> -    SSET_FOR_EACH (addr_set_name, &addr_sets_ref) {
> -        lflow_resource_add(lfrr, REF_TYPE_ADDRSET, addr_set_name,
> -                           &lflow->header_.uuid);
> -    }
> -    const char *port_group_name;
> -    SSET_FOR_EACH (port_group_name, &port_groups_ref) {
> -        lflow_resource_add(lfrr, REF_TYPE_PORTGROUP, port_group_name,
> -                           &lflow->header_.uuid);
> +
> +    if (lflow_uuid) {
> +        const char *addr_set_name;
> +        SSET_FOR_EACH (addr_set_name, &addr_sets_ref) {
> +            lflow_resource_add(lfrr, REF_TYPE_ADDRSET, addr_set_name,
> +                               lflow_uuid);
> +        }
> +        const char *port_group_name;
> +        SSET_FOR_EACH (port_group_name, &port_groups_ref) {
> +            lflow_resource_add(lfrr, REF_TYPE_PORTGROUP, port_group_name,
> +                               lflow_uuid);
> +        }
>       }
>   
>       if (pg_addr_set_ref) {
> @@ -713,7 +751,7 @@ convert_match_to_expr(const struct sbrec_logical_flow *lflow,
>       if (error) {
>           static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
>           VLOG_WARN_RL(&rl, "error parsing match \"%s\": %s",
> -                    lflow->match, error);
> +                     lflow_match, error);
>           free(error);
>           return NULL;
>       }
> @@ -722,28 +760,30 @@ convert_match_to_expr(const struct sbrec_logical_flow *lflow,
>   }
>   
>   static bool
> -consider_logical_flow__(const struct sbrec_logical_flow *lflow,
> -                        const struct sbrec_datapath_binding *dp,
> +consider_logical_flow__(const struct ovn_ctrl_lflow *lflow,
> +                        bool ingress, uint8_t table_id,
> +                        uint32_t dp_key,
>                           struct hmap *dhcp_opts, struct hmap *dhcpv6_opts,
>                           struct hmap *nd_ra_opts,
>                           struct controller_event_options *controller_event_opts,
>                           struct lflow_ctx_in *l_ctx_in,
>                           struct lflow_ctx_out *l_ctx_out)
>   {
> -    if (!get_local_datapath(l_ctx_in->local_datapaths, dp->tunnel_key)) {
> +    struct local_datapath *ldp = get_local_datapath(l_ctx_in->local_datapaths,
> +                                                    dp_key);
> +    if (!ldp) {
>           VLOG_DBG("lflow "UUID_FMT" is not for local datapath, skip",
> -                 UUID_ARGS(&lflow->header_.uuid));
> +                 UUID_ARGS(&lflow->uuid_));
>           return true;
>       }
>   
> -    /* Determine translation of logical table IDs to physical table IDs. */
> -    bool ingress = !strcmp(lflow->pipeline, "ingress");
> +    const struct sbrec_datapath_binding *dp = ldp->datapath;
>   
>       /* Determine translation of logical table IDs to physical table IDs. */
>       uint8_t first_ptable = (ingress
>                               ? OFTABLE_LOG_INGRESS_PIPELINE
>                               : OFTABLE_LOG_EGRESS_PIPELINE);
> -    uint8_t ptable = first_ptable + lflow->table_id;
> +    uint8_t ptable = first_ptable + table_id;
>       uint8_t output_ptable = (ingress
>                                ? OFTABLE_REMOTE_OUTPUT
>                                : OFTABLE_SAVE_INPORT);
> @@ -762,7 +802,7 @@ consider_logical_flow__(const struct sbrec_logical_flow *lflow,
>   
>           .pipeline = ingress ? OVNACT_P_INGRESS : OVNACT_P_EGRESS,
>           .n_tables = LOG_PIPELINE_LEN,
> -        .cur_ltable = lflow->table_id,
> +        .cur_ltable = table_id,
>       };
>       struct expr *prereqs = NULL;
>       char *error;
> @@ -783,19 +823,19 @@ consider_logical_flow__(const struct sbrec_logical_flow *lflow,
>               = l_ctx_in->sbrec_multicast_group_by_name_datapath,
>           .sbrec_port_binding_by_name = l_ctx_in->sbrec_port_binding_by_name,
>           .dp = dp,
> -        .lflow = lflow,
> +        .lflow_uuid = &lflow->uuid_,
>           .lfrr = l_ctx_out->lfrr,
>       };
>       struct condition_aux cond_aux = {
>           .sbrec_port_binding_by_name = l_ctx_in->sbrec_port_binding_by_name,
>           .chassis = l_ctx_in->chassis,
>           .active_tunnels = l_ctx_in->active_tunnels,
> -        .lflow = lflow,
> +        .lflow_uuid = &lflow->uuid_,
>           .lfrr = l_ctx_out->lfrr,
>       };
>   
>       struct lflow_cache_value *lcv =
> -        lflow_cache_get(l_ctx_out->lflow_cache, &lflow->header_.uuid);
> +        lflow_cache_get(l_ctx_out->lflow_cache, &lflow->uuid_);
>       uint32_t conj_id_ofs =
>           lcv ? lcv->conj_id_ofs : *l_ctx_out->conj_id_ofs;
>       enum lflow_cache_type lcv_type =
> @@ -815,7 +855,9 @@ consider_logical_flow__(const struct sbrec_logical_flow *lflow,
>       switch (lcv_type) {
>       case LCACHE_T_NONE:
>       case LCACHE_T_CONJ_ID:
> -        expr = convert_match_to_expr(lflow, dp, &prereqs, l_ctx_in->addr_sets,
> +        expr = convert_match_to_expr(lflow->match, &lflow->uuid_,
> +                                     dp->tunnel_key, &prereqs,
> +                                     l_ctx_in->addr_sets,
>                                        l_ctx_in->port_groups, l_ctx_out->lfrr,
>                                        &pg_addr_set_ref);
>           if (!expr) {
> @@ -861,7 +903,7 @@ consider_logical_flow__(const struct sbrec_logical_flow *lflow,
>           matches_size = expr_matches_prepare(matches, conj_id_ofs);
>           if (hmap_is_empty(matches)) {
>               VLOG_DBG("lflow "UUID_FMT" matches are empty, skip",
> -                    UUID_ARGS(&lflow->header_.uuid));
> +                    UUID_ARGS(&lflow->uuid_));
>               goto done;
>           }
>           break;
> @@ -885,17 +927,17 @@ consider_logical_flow__(const struct sbrec_logical_flow *lflow,
>           if (lflow_cache_is_enabled(l_ctx_out->lflow_cache)) {
>               if (cached_expr && !is_cr_cond_present) {
>                   lflow_cache_add_matches(l_ctx_out->lflow_cache,
> -                                        &lflow->header_.uuid, matches,
> +                                        &lflow->uuid_, matches,
>                                           matches_size);
>                   matches = NULL;
>               } else if (cached_expr) {
>                   lflow_cache_add_expr(l_ctx_out->lflow_cache,
> -                                     &lflow->header_.uuid, conj_id_ofs,
> +                                     &lflow->uuid_, conj_id_ofs,
>                                        cached_expr, expr_size(cached_expr));
>                   cached_expr = NULL;
>               } else if (n_conjs) {
>                   lflow_cache_add_conj_id(l_ctx_out->lflow_cache,
> -                                        &lflow->header_.uuid, conj_id_ofs);
> +                                        &lflow->uuid_, conj_id_ofs);
>               }
>           }
>           break;
> @@ -920,12 +962,33 @@ done:
>   }
>   
>   static bool
> -consider_logical_flow(const struct sbrec_logical_flow *lflow,
> -                      struct hmap *dhcp_opts, struct hmap *dhcpv6_opts,
> -                      struct hmap *nd_ra_opts,
> -                      struct controller_event_options *controller_event_opts,
> -                      struct lflow_ctx_in *l_ctx_in,
> -                      struct lflow_ctx_out *l_ctx_out)
> +consider_sb_logical_flow__(const struct sbrec_logical_flow *lflow,
> +                           const struct sbrec_datapath_binding *dp,
> +                           struct hmap *dhcp_opts, struct hmap *dhcpv6_opts,
> +                           struct hmap *nd_ra_opts,
> +                           struct controller_event_options *event_opts,
> +                           struct lflow_ctx_in *l_ctx_in,
> +                           struct lflow_ctx_out *l_ctx_out)
> +{
> +    struct ovn_ctrl_lflow ctrl_lflow;
> +    ctrl_lflow.uuid_ = lflow->header_.uuid;
> +    ctrl_lflow.match = lflow->match;
> +    ctrl_lflow.actions = lflow->actions;
> +    ctrl_lflow.priority = lflow->priority;
> +
> +    bool ingress = !strcmp(lflow->pipeline, "ingress");
> +    return consider_logical_flow__(&ctrl_lflow, ingress, lflow->table_id,
> +                                   dp->tunnel_key, dhcp_opts, dhcpv6_opts, nd_ra_opts,
> +                                   event_opts, l_ctx_in, l_ctx_out);
> +}
> +
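> +/* Translates a southbound Logical_Flow row by wrapping it in a
> + * 'struct ovn_ctrl_lflow' and passing it to consider_logical_flow__(),
> + * once for its logical datapath or once per datapath in its datapath
> + * group. */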
> +static bool
> +consider_sb_logical_flow(const struct sbrec_logical_flow *lflow,
> +                         struct hmap *dhcp_opts, struct hmap *dhcpv6_opts,
> +                         struct hmap *nd_ra_opts,
> +                         struct controller_event_options *event_opts,
> +                         struct lflow_ctx_in *l_ctx_in,
> +                         struct lflow_ctx_out *l_ctx_out)
>   {
>       const struct sbrec_logical_dp_group *dp_group = lflow->logical_dp_group;
>       const struct sbrec_datapath_binding *dp = lflow->logical_datapath;
> @@ -938,17 +1001,15 @@ consider_logical_flow(const struct sbrec_logical_flow *lflow,
>       }
>       ovs_assert(!dp_group || !dp);
>   
> -    if (dp && !consider_logical_flow__(lflow, dp,
> -                                       dhcp_opts, dhcpv6_opts, nd_ra_opts,
> -                                       controller_event_opts,
> -                                       l_ctx_in, l_ctx_out)) {
> +    if (dp && !consider_sb_logical_flow__(lflow, dp,
> +                                          dhcp_opts, dhcpv6_opts, nd_ra_opts,
> +                                          event_opts, l_ctx_in, l_ctx_out)) {
>           ret = false;
>       }
>       for (size_t i = 0; dp_group && i < dp_group->n_datapaths; i++) {
> -        if (!consider_logical_flow__(lflow, dp_group->datapaths[i],
> -                                     dhcp_opts,  dhcpv6_opts, nd_ra_opts,
> -                                     controller_event_opts,
> -                                     l_ctx_in, l_ctx_out)) {
> +        if (!consider_sb_logical_flow__(lflow, dp_group->datapaths[i],
> +                                        dhcp_opts,  dhcpv6_opts, nd_ra_opts,
> +                                        event_opts, l_ctx_in, l_ctx_out)) {
>               ret = false;
>           }
>       }
> @@ -1688,9 +1749,9 @@ lflow_add_flows_for_datapath(const struct sbrec_datapath_binding *dp,
>       const struct sbrec_logical_flow *lflow;
>       SBREC_LOGICAL_FLOW_FOR_EACH_EQUAL (
>           lflow, lf_row, l_ctx_in->sbrec_logical_flow_by_logical_datapath) {
> -        if (!consider_logical_flow__(lflow, dp, &dhcp_opts, &dhcpv6_opts,
> -                                     &nd_ra_opts, &controller_event_opts,
> -                                     l_ctx_in, l_ctx_out)) {
> +        if (!consider_sb_logical_flow__(lflow, dp, &dhcp_opts, &dhcpv6_opts,
> +                                        &nd_ra_opts, &controller_event_opts,
> +                                        l_ctx_in, l_ctx_out)) {
>               handled = false;
>               l_ctx_out->conj_id_overflow = true;
>               goto lflow_processing_end;
> @@ -1718,9 +1779,10 @@ lflow_add_flows_for_datapath(const struct sbrec_datapath_binding *dp,
>           sbrec_logical_flow_index_set_logical_dp_group(lf_row, ldpg);
>           SBREC_LOGICAL_FLOW_FOR_EACH_EQUAL (
>               lflow, lf_row, l_ctx_in->sbrec_logical_flow_by_logical_dp_group) {
> -            if (!consider_logical_flow__(lflow, dp, &dhcp_opts, &dhcpv6_opts,
> -                                         &nd_ra_opts, &controller_event_opts,
> -                                         l_ctx_in, l_ctx_out)) {
> +            if (!consider_sb_logical_flow__(lflow, dp, &dhcp_opts,
> +                                            &dhcpv6_opts, &nd_ra_opts,
> +                                            &controller_event_opts,
> +                                            l_ctx_in, l_ctx_out)) {
>                   handled = false;
>                   l_ctx_out->conj_id_overflow = true;
>                   goto lflow_processing_end;
> @@ -1853,3 +1915,72 @@ lflow_handle_changed_fdbs(struct lflow_ctx_in *l_ctx_in,
>   
>       return true;
>   }
> +
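> +/* Translates the generated controller lflows in 'ctrl_lflows' for datapath
> + * 'dp' by running each of them through consider_logical_flow__().  The
> + * DHCP, DHCPv6, ND RA and controller event option tables are built locally
> + * for the duration of the call and destroyed before returning. */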
> +void
> +lflow_process_ctrl_lflows(struct hmap *ctrl_lflows,
> +                          const struct sbrec_datapath_binding *dp,
> +                          struct lflow_ctx_in *l_ctx_in,
> +                          struct lflow_ctx_out *l_ctx_out)
> +{
> +    struct hmap dhcp_opts = HMAP_INITIALIZER(&dhcp_opts);
> +    struct hmap dhcpv6_opts = HMAP_INITIALIZER(&dhcpv6_opts);
> +    const struct sbrec_dhcp_options *dhcp_opt_row;
> +    SBREC_DHCP_OPTIONS_TABLE_FOR_EACH (dhcp_opt_row,
> +                                       l_ctx_in->dhcp_options_table) {
> +        dhcp_opt_add(&dhcp_opts, dhcp_opt_row->name, dhcp_opt_row->code,
> +                     dhcp_opt_row->type);
> +    }
> +
> +    const struct sbrec_dhcpv6_options *dhcpv6_opt_row;
> +    SBREC_DHCPV6_OPTIONS_TABLE_FOR_EACH (dhcpv6_opt_row,
> +                                         l_ctx_in->dhcpv6_options_table) {
> +        dhcp_opt_add(&dhcpv6_opts, dhcpv6_opt_row->name, dhcpv6_opt_row->code,
> +                     dhcpv6_opt_row->type);
> +    }
> +
> +    struct hmap nd_ra_opts = HMAP_INITIALIZER(&nd_ra_opts);
> +    nd_ra_opts_init(&nd_ra_opts);
> +
> +    struct controller_event_options controller_event_opts;
> +    controller_event_opts_init(&controller_event_opts);
> +
> +    struct ovn_ctrl_lflow *ctrl_lflow;
> +
> +    HMAP_FOR_EACH (ctrl_lflow, hmap_node, ctrl_lflows) {
> +        bool ingress = (ovn_stage_get_pipeline(ctrl_lflow->stage) == P_IN);
> +        uint8_t table_id = ovn_stage_get_table(ctrl_lflow->stage);
> +        uint32_t dp_key =
> +            ctrl_lflow->dp_key ? ctrl_lflow->dp_key : dp->tunnel_key;
> +        consider_logical_flow__(ctrl_lflow, ingress, table_id, dp_key,
> +                                &dhcp_opts, &dhcpv6_opts,
> +                                &nd_ra_opts, &controller_event_opts,
> +                                l_ctx_in, l_ctx_out);
> +    }
> +
> +    dhcp_opts_destroy(&dhcp_opts);
> +    dhcp_opts_destroy(&dhcpv6_opts);
> +    nd_ra_opts_destroy(&nd_ra_opts);
> +    controller_event_opts_destroy(&controller_event_opts);
> +}
> +
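> +/* Removes the OpenFlow flows installed for the controller lflows in
> + * 'ctrl_lflows' from 'flow_table' by flood-removing on each lflow's UUID. */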
> +void
> +lflow_remove_ctrl_lflows(struct hmap *ctrl_lflows,
> +                         struct ovn_desired_flow_table *flow_table)
> +{
> +    struct hmap flood_remove_nodes = HMAP_INITIALIZER(&flood_remove_nodes);
> +
> +    struct ovn_ctrl_lflow *ctrl_lflow;
> +    HMAP_FOR_EACH (ctrl_lflow, hmap_node, ctrl_lflows) {
> +        ofctrl_flood_remove_add_node(&flood_remove_nodes, &ctrl_lflow->uuid_);
> +    }
> +
> +    ofctrl_flood_remove_flows(flow_table, &flood_remove_nodes);
> +
> +    struct ofctrl_flood_remove_node *ofrn;
> +    HMAP_FOR_EACH_POP (ofrn, hmap_node, &flood_remove_nodes) {
> +        free(ofrn);
> +    }
> +    hmap_destroy(&flood_remove_nodes);
> +}
> diff --git a/controller/lflow.h b/controller/lflow.h
> index e98edf81df..0c73d3e6d0 100644
> --- a/controller/lflow.h
> +++ b/controller/lflow.h
> @@ -54,6 +54,7 @@ struct sbrec_port_binding;
>   struct simap;
>   struct sset;
>   struct uuid;
> +struct ovn_ctrl_lflow;
>   
>   /* OpenFlow table numbers.
>    *
> @@ -182,4 +183,11 @@ bool lflow_handle_flows_for_lport(const struct sbrec_port_binding *,
>                                     struct lflow_ctx_out *);
>   bool lflow_handle_changed_mc_groups(struct lflow_ctx_in *,
>                                       struct lflow_ctx_out *);
> +
> +void lflow_process_ctrl_lflows(struct hmap *ctrl_lflows,
> +                               const struct sbrec_datapath_binding *,
> +                               struct lflow_ctx_in *,
> +                               struct lflow_ctx_out *);
> +void lflow_remove_ctrl_lflows(struct hmap *ctrl_lflows,
> +                              struct ovn_desired_flow_table *);
>   #endif /* controller/lflow.h */
> diff --git a/controller/lport.c b/controller/lport.c
> index 478fcfd829..962f07ebb4 100644
> --- a/controller/lport.c
> +++ b/controller/lport.c
> @@ -23,22 +23,6 @@
>   #include "lib/ovn-sb-idl.h"
>   VLOG_DEFINE_THIS_MODULE(lport);
>   
> -const struct sbrec_port_binding *
> -lport_lookup_by_name(struct ovsdb_idl_index *sbrec_port_binding_by_name,
> -                     const char *name)
> -{
> -    struct sbrec_port_binding *pb = sbrec_port_binding_index_init_row(
> -        sbrec_port_binding_by_name);
> -    sbrec_port_binding_index_set_logical_port(pb, name);
> -
> -    const struct sbrec_port_binding *retval = sbrec_port_binding_index_find(
> -        sbrec_port_binding_by_name, pb);
> -
> -    sbrec_port_binding_index_destroy_row(pb);
> -
> -    return retval;
> -}
> -
>   const struct sbrec_port_binding *
>   lport_lookup_by_key(struct ovsdb_idl_index *sbrec_datapath_binding_by_key,
>                       struct ovsdb_idl_index *sbrec_port_binding_by_key,
> diff --git a/controller/lport.h b/controller/lport.h
> index 345efc1840..76e9c42460 100644
> --- a/controller/lport.h
> +++ b/controller/lport.h
> @@ -34,10 +34,6 @@ struct sset;
>    * instead we define our own indexes.
>    */
>   
> -const struct sbrec_port_binding *lport_lookup_by_name(
> -    struct ovsdb_idl_index *sbrec_port_binding_by_name,
> -    const char *name);
> -
>   const struct sbrec_port_binding *lport_lookup_by_key(
>       struct ovsdb_idl_index *sbrec_datapath_binding_by_key,
>       struct ovsdb_idl_index *sbrec_port_binding_by_key,
> diff --git a/controller/ovn-controller.c b/controller/ovn-controller.c
> index 3968ef0597..b5478f3ded 100644
> --- a/controller/ovn-controller.c
> +++ b/controller/ovn-controller.c
> @@ -36,6 +36,7 @@
>   #include "if-status.h"
>   #include "ip-mcast.h"
>   #include "openvswitch/hmap.h"
> +#include "ldata.h"
>   #include "lflow.h"
>   #include "lflow-cache.h"
>   #include "lib/vswitch-idl.h"
> @@ -47,9 +48,12 @@
>   #include "openvswitch/vlog.h"
>   #include "ovn/actions.h"
>   #include "ovn/features.h"
> +#include "lflow-generate.h"
>   #include "lib/chassis-index.h"
>   #include "lib/extend-table.h"
>   #include "lib/ip-mcast-index.h"
> +#include "lib/lb.h"
> +#include "lib/lflow.h"
>   #include "lib/mcast-group-index.h"
>   #include "lib/ovn-sb-idl.h"
>   #include "lib/ovn-util.h"
> @@ -124,15 +128,6 @@ struct pending_pkt {
>   /* Registered ofctrl seqno type for nb_cfg propagation. */
>   static size_t ofctrl_seq_type_nb_cfg;
>   
> -struct local_datapath *
> -get_local_datapath(const struct hmap *local_datapaths, uint32_t tunnel_key)
> -{
> -    struct hmap_node *node = hmap_first_with_hash(local_datapaths, tunnel_key);
> -    return (node
> -            ? CONTAINER_OF(node, struct local_datapath, hmap_node)
> -            : NULL);
> -}
> -
>   uint32_t
>   get_tunnel_type(const char *name)
>   {
> @@ -1020,6 +1015,9 @@ struct ed_type_runtime_data {
>       struct sset egress_ifaces;
>       struct smap local_iface_ids;
>   
> +    /* Load balancer data - hmap of 'struct local_load_balancer'. */
> +    struct hmap local_load_balancers;
> +
>       /* Tracked data. See below for more details and comments. */
>       bool tracked;
>       bool local_lports_changed;
> @@ -1033,14 +1031,14 @@ struct ed_type_runtime_data {
>    *
>    *  ------------------------------------------------------------------------
>    * |                      | This is a hmap of                               |
> - * |                      | 'struct tracked_binding_datapath' defined in    |
> + * |                      | 'struct tracked_datapath' defined in            |
>    * |                      | binding.h. Runtime data handlers for OVS        |
>    * |                      | Interface and Port Binding changes store the    |
>    * | @tracked_dp_bindings | changed datapaths (datapaths added/removed from |
>    * |                      | local_datapaths) and changed port bindings      |
> - * |                      | (added/updated/deleted in 'lbinding_data').    |
> + * |                      | (added/updated/deleted in 'lbinding_data').     |
>    * |                      | So any changes to the runtime data -            |
> - * |                      | local_datapaths and lbinding_data is captured  |
> + * |                      | local_datapaths and lbinding_data is captured   |
>    * |                      | here.                                           |
>    *  ------------------------------------------------------------------------
>    * |                      | This is a bool which represents if the runtime  |
> @@ -1067,7 +1065,7 @@ struct ed_type_runtime_data {
>    *
>    *  ---------------------------------------------------------------------
>    * | local_datapaths  | The changes to these runtime data is captured in |
> - * | lbinding_data   | the @tracked_dp_bindings indirectly and hence it |
> + * | lbinding_data   | the @tracked_dp_bindings indirectly and hence it  |
>    * | local_lport_ids  | is not tracked explicitly.                       |
>    *  ---------------------------------------------------------------------
>    * | local_iface_ids  | This is used internally within the runtime data  |
> @@ -1092,7 +1090,7 @@ en_runtime_data_clear_tracked_data(void *data_)
>   {
>       struct ed_type_runtime_data *data = data_;
>   
> -    binding_tracked_dp_destroy(&data->tracked_dp_bindings);
> +    tracked_datapaths_destroy(&data->tracked_dp_bindings);
>       hmap_init(&data->tracked_dp_bindings);
>       data->local_lports_changed = false;
>       data->tracked = false;
> @@ -1111,6 +1109,7 @@ en_runtime_data_init(struct engine_node *node OVS_UNUSED,
>       sset_init(&data->egress_ifaces);
>       smap_init(&data->local_iface_ids);
>       local_binding_data_init(&data->lbinding_data);
> +    hmap_init(&data->local_load_balancers);
>   
>       /* Init the tracked data. */
>       hmap_init(&data->tracked_dp_bindings);
> @@ -1128,15 +1127,9 @@ en_runtime_data_cleanup(void *data)
>       sset_destroy(&rt_data->active_tunnels);
>       sset_destroy(&rt_data->egress_ifaces);
>       smap_destroy(&rt_data->local_iface_ids);
> -    struct local_datapath *cur_node, *next_node;
> -    HMAP_FOR_EACH_SAFE (cur_node, next_node, hmap_node,
> -                        &rt_data->local_datapaths) {
> -        free(cur_node->peer_ports);
> -        hmap_remove(&rt_data->local_datapaths, &cur_node->hmap_node);
> -        free(cur_node);
> -    }
> -    hmap_destroy(&rt_data->local_datapaths);
> +    local_datapaths_destroy(&rt_data->local_datapaths);
>       local_binding_data_destroy(&rt_data->lbinding_data);
> +    local_load_balancers_destroy(&rt_data->local_load_balancers);
>   }
>   
>   static void
> @@ -1240,19 +1233,16 @@ en_runtime_data_run(struct engine_node *node, void *data)
>           /* don't cleanup since there is no data yet */
>           first_run = false;
>       } else {
> -        struct local_datapath *cur_node, *next_node;
> -        HMAP_FOR_EACH_SAFE (cur_node, next_node, hmap_node, local_datapaths) {
> -            free(cur_node->peer_ports);
> -            hmap_remove(local_datapaths, &cur_node->hmap_node);
> -            free(cur_node);
> -        }
> -        hmap_clear(local_datapaths);
> +        local_datapaths_destroy(local_datapaths);
>           local_binding_data_destroy(&rt_data->lbinding_data);
> +        local_load_balancers_destroy(&rt_data->local_load_balancers);
>           sset_destroy(local_lports);
>           sset_destroy(local_lport_ids);
>           sset_destroy(active_tunnels);
>           sset_destroy(&rt_data->egress_ifaces);
>           smap_destroy(&rt_data->local_iface_ids);
> +        hmap_init(local_datapaths);
> +        hmap_init(&rt_data->local_load_balancers);
>           sset_init(local_lports);
>           sset_init(local_lport_ids);
>           sset_init(active_tunnels);
> @@ -1703,12 +1693,12 @@ port_groups_runtime_data_handler(struct engine_node *node, void *data)
>                                                    pg_sb->name);
>           ovs_assert(pg_lports);
>   
> -        struct tracked_binding_datapath *tdp;
> +        struct tracked_datapath *tdp;
>           bool need_update = false;
>           HMAP_FOR_EACH (tdp, node, &rt_data->tracked_dp_bindings) {
>               struct shash_node *shash_node;
>               SHASH_FOR_EACH (shash_node, &tdp->lports) {
> -                struct tracked_binding_lport *lport = shash_node->data;
> +                struct tracked_lport *lport = shash_node->data;
>                   if (sset_contains(pg_lports, lport->pb->logical_port)) {
>                       /* At least one local port-binding change is related to the
>                        * port_group, so the port_group_cs_local needs update. */
> @@ -1862,9 +1852,9 @@ ct_zones_runtime_data_handler(struct engine_node *node, void *data OVS_UNUSED)
>       }
>   
>       struct hmap *tracked_dp_bindings = &rt_data->tracked_dp_bindings;
> -    struct tracked_binding_datapath *tdp;
> +    struct tracked_datapath *tdp;
>       HMAP_FOR_EACH (tdp, node, tracked_dp_bindings) {
> -        if (tdp->is_new) {
> +        if (tdp->tracked_type == TRACKED_RESOURCE_NEW) {
>               /* A new datapath has been added. Fall back to full recompute. */
>               return false;
>           }
> @@ -1936,6 +1926,426 @@ struct ed_type_lflow_output {
>       struct lflow_output_persistent_data pd;
>   };
>   
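> +/* Data for the 'lflow_needs_generation' engine node: tracks the local
> + * datapaths and load balancers whose controller lflows need to be
> + * (re)generated. */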
> +struct ed_type_lflow_needs_generation {
> +    /* Tracked data. */
> +    bool tracked;
> +    struct hmap tracked_datapaths;
> +    struct hmap tracked_lbs;
> +};
> +
> +static void *
> +en_lflow_needs_generation_init(struct engine_node *node OVS_UNUSED,
> +                               struct engine_arg *arg OVS_UNUSED)
> +{
> +    struct ed_type_lflow_needs_generation *data = xzalloc(sizeof *data);
> +    hmap_init(&data->tracked_datapaths);
> +    hmap_init(&data->tracked_lbs);
> +    return data;
> +}
> +
> +static void
> +en_lflow_needs_generation_cleanup(void *data)
> +{
> +    struct ed_type_lflow_needs_generation *lflow_need_gen = data;
> +
> +    tracked_datapaths_destroy(&lflow_need_gen->tracked_datapaths);
> +    tracked_lbs_destroy(&lflow_need_gen->tracked_lbs);
> +}
> +
> +static void
> +en_lflow_needs_generation_clear_tracked_data(void *data)
> +{
> +    struct ed_type_lflow_needs_generation *lflow_need_gen = data;
> +    lflow_need_gen->tracked = false;
> +    tracked_datapaths_destroy(&lflow_need_gen->tracked_datapaths);
> +    hmap_init(&lflow_need_gen->tracked_datapaths);
> +
> +    tracked_lbs_destroy(&lflow_need_gen->tracked_lbs);
> +    hmap_init(&lflow_need_gen->tracked_lbs);
> +}
> +
> +static void
> +en_lflow_needs_generation_run(struct engine_node *node,
> +                              void *data)
> +{
> +    struct ed_type_lflow_needs_generation *lflow_need_gen = data;
> +    lflow_need_gen->tracked = false;
> +
> +    struct sbrec_load_balancer_table *lb_table =
> +        (struct sbrec_load_balancer_table *)EN_OVSDB_GET(
> +            engine_get_input("SB_load_balancer", node));
> +
> +    struct ed_type_runtime_data *rt_data =
> +        engine_get_input_data("runtime_data", node);
> +
> +    const struct sbrec_load_balancer *sb_lb;
> +    SBREC_LOAD_BALANCER_TABLE_FOR_EACH (sb_lb, lb_table) {
> +        local_load_balancer_add(&rt_data->local_load_balancers,
> +                                &rt_data->local_datapaths, sb_lb);
> +    }
> +    engine_set_node_state(node, EN_UPDATED);
> +}
> +
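> +/* Runtime data handler: marks newly added local datapaths (and their load
> + * balancers) and changed lports as needing controller lflow generation. */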
> +static bool
> +lflow_needs_generation_runtime_data_handler(struct engine_node *node,
> +                                            void *data)
> +{
> +    struct ed_type_runtime_data *rt_data =
> +        engine_get_input_data("runtime_data", node);
> +
> +    /* There is no tracked data. Fall back to full recompute of
> +     * lflow_needs_generation. */
> +    if (!rt_data->tracked) {
> +        return false;
> +    }
> +
> +    struct hmap *tracked_dp_bindings = &rt_data->tracked_dp_bindings;
> +    if (hmap_is_empty(tracked_dp_bindings)) {
> +        return true;
> +    }
> +
> +    struct ed_type_lflow_needs_generation *lflow_need_gen = data;
> +    lflow_need_gen->tracked = true;
> +    struct tracked_datapath *tdp;
> +
> +    HMAP_FOR_EACH (tdp, node, tracked_dp_bindings) {
> +        if (tdp->tracked_type == TRACKED_RESOURCE_NEW) {
> +            struct local_datapath *ld =
> +                get_local_datapath(&rt_data->local_datapaths,
> +                                   tdp->dp->tunnel_key);
> +            ovs_assert(ld);
> +            tracked_datapath_add(ld->datapath, TRACKED_RESOURCE_NEW,
> +                                 &lflow_need_gen->tracked_datapaths);
> +            for (size_t i = 0; i < ld->datapath->n_load_balancers; i++) {
> +                const struct sbrec_load_balancer *sb_lb =
> +                    ld->datapath->load_balancers[i];
> +                struct local_load_balancer *local_lb =
> +                    local_load_balancer_get(&rt_data->local_load_balancers,
> +                                            &sb_lb->header_.uuid);
> +                if (!local_lb) {
> +                    local_lb = local_load_balancer_add(
> +                        &rt_data->local_load_balancers,
> +                        &rt_data->local_datapaths, sb_lb);
> +                }
> +                tracked_lb_add(local_lb, TRACKED_RESOURCE_NEW,
> +                               &lflow_need_gen->tracked_lbs);
> +            }
> +        } else if (tdp->tracked_type == TRACKED_RESOURCE_UPDATED) {
> +            struct local_datapath *ld =
> +                get_local_datapath(&rt_data->local_datapaths,
> +                                   tdp->dp->tunnel_key);
> +            if (!ld) {
> +                continue;
> +            }
> +
> +            struct shash_node *shash_node;
> +            SHASH_FOR_EACH (shash_node, &tdp->lports) {
> +                struct tracked_lport *lport = shash_node->data;
> +                if (lport->tracked_type == TRACKED_RESOURCE_REMOVED) {
> +                    tracked_datapath_lport_add(
> +                        lport->pb, TRACKED_RESOURCE_REMOVED,
> +                        &lflow_need_gen->tracked_datapaths);
> +                } else {
> +                    if (lflow_lport_needs_generation(ld, lport->pb)) {
> +                        tracked_datapath_lport_add(
> +                            lport->pb, TRACKED_RESOURCE_NEW,
> +                            &lflow_need_gen->tracked_datapaths);
> +                    }
> +                }
> +            }
> +        }
> +    }
> +
> +    if (!hmap_is_empty(&lflow_need_gen->tracked_datapaths)) {
> +        engine_set_node_state(node, EN_UPDATED);
> +    }
> +
> +    return true;
> +}
> +
> +static bool
> +lflow_needs_generation_datapath_binding_handler(struct engine_node *node,
> +                                                void *data)
> +{
> +    struct ed_type_runtime_data *rt_data =
> +        engine_get_input_data("runtime_data", node);
> +    struct sbrec_datapath_binding_table *dp_table =
> +        (struct sbrec_datapath_binding_table *)EN_OVSDB_GET(
> +            engine_get_input("SB_datapath_binding", node));
> +
> +    struct ed_type_lflow_needs_generation *lflow_need_gen = data;
> +    lflow_need_gen->tracked = true;
> +
> +    const struct sbrec_datapath_binding *dp;
> +    SBREC_DATAPATH_BINDING_TABLE_FOR_EACH_TRACKED (dp, dp_table) {
> +        if (sbrec_datapath_binding_is_new(dp) ||
> +                sbrec_datapath_binding_is_deleted(dp)) {
> +            continue;
> +        }
> +
> +        struct local_datapath *ldp = get_local_datapath(
> +            &rt_data->local_datapaths, dp->tunnel_key);
> +        if (ldp && lflow_datapath_needs_generation(ldp)) {
> +            tracked_datapath_add(ldp->datapath, TRACKED_RESOURCE_NEW,
> +                                 &lflow_need_gen->tracked_datapaths);
> +        }
> +    }
> +
> +    if (!hmap_is_empty(&lflow_need_gen->tracked_datapaths)) {
> +        engine_set_node_state(node, EN_UPDATED);
> +    }
> +
> +    return true;
> +}
> +
> +static bool
> +lflow_needs_generation_port_binding_handler(struct engine_node *node,
> +                                            void *data)
> +{
> +    struct ed_type_runtime_data *rt_data =
> +        engine_get_input_data("runtime_data", node);
> +    struct sbrec_port_binding_table *pb_table =
> +        (struct sbrec_port_binding_table *)EN_OVSDB_GET(
> +            engine_get_input("SB_port_binding", node));
> +
> +    struct ed_type_lflow_needs_generation *lflow_need_gen = data;
> +    lflow_need_gen->tracked = true;
> +
> +    const struct sbrec_port_binding *pb;
> +    SBREC_PORT_BINDING_TABLE_FOR_EACH_TRACKED (pb, pb_table) {
> +        struct local_datapath *ldp;
> +        ldp = get_local_datapath(&rt_data->local_datapaths,
> +                                 pb->datapath->tunnel_key);
> +        if (!ldp) {
> +            continue;
> +        }
> +
> +        if (sbrec_port_binding_is_deleted(pb)) {
> +            tracked_datapath_lport_add(pb, TRACKED_RESOURCE_REMOVED,
> +                                       &lflow_need_gen->tracked_datapaths);
> +        } else {
> +            if (lflow_lport_needs_generation(ldp, pb)) {
> +                tracked_datapath_lport_add(
> +                    pb, sbrec_port_binding_is_new(pb) ? TRACKED_RESOURCE_NEW :
> +                    TRACKED_RESOURCE_UPDATED,
> +                    &lflow_need_gen->tracked_datapaths);
> +            }
> +        }
> +    }
> +
> +    if (!hmap_is_empty(&lflow_need_gen->tracked_datapaths)) {
> +        engine_set_node_state(node, EN_UPDATED);
> +    }
> +
> +    return true;
> +}
> +
> +static bool
> +lflow_needs_generation_load_balancer_handler(struct engine_node *node,
> +                                             void *data)
> +{
> +    struct ed_type_runtime_data *rt_data =
> +        engine_get_input_data("runtime_data", node);
> +    struct sbrec_load_balancer_table *lb_table =
> +        (struct sbrec_load_balancer_table *)EN_OVSDB_GET(
> +            engine_get_input("SB_load_balancer", node));
> +
> +    struct ed_type_lflow_needs_generation *lflow_need_gen = data;
> +    lflow_need_gen->tracked = true;
> +
> +    const struct sbrec_load_balancer *sb_lb;
> +    SBREC_LOAD_BALANCER_TABLE_FOR_EACH_TRACKED (sb_lb, lb_table) {
> +        struct local_load_balancer *local_lb;
> +        if (sbrec_load_balancer_is_deleted(sb_lb)) {
> +            local_lb = local_load_balancer_get(&rt_data->local_load_balancers,
> +                                               &sb_lb->header_.uuid);
> +            if (local_lb) {
> +                tracked_lb_add(local_lb, TRACKED_RESOURCE_REMOVED,
> +                               &lflow_need_gen->tracked_lbs);
> +            }
> +        } else {
> +            local_lb = local_load_balancer_get(&rt_data->local_load_balancers,
> +                                               &sb_lb->header_.uuid);
> +            if (!local_lb) {
> +                local_lb = local_load_balancer_add(
> +                    &rt_data->local_load_balancers,
> +                    &rt_data->local_datapaths, sb_lb);
> +                if (local_lb) {
> +                    tracked_lb_add(local_lb, TRACKED_RESOURCE_NEW,
> +                                   &lflow_need_gen->tracked_lbs);
> +                }
> +            } else {
> +                if (lflow_load_balancer_needs_gen(local_lb)) {
> +                    tracked_lb_add(local_lb, TRACKED_RESOURCE_UPDATED,
> +                                   &lflow_need_gen->tracked_lbs);
> +                }
> +            }
> +        }
> +    }
> +
> +    if (!hmap_is_empty(&lflow_need_gen->tracked_lbs)) {
> +        engine_set_node_state(node, EN_UPDATED);
> +    }
> +
> +    return true;
> +}
> +
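> +/* Data for the 'lflow_generate' engine node: holds the generic logical
> + * switch and logical router lflows built at init time, plus the datapaths
> + * and load balancers tracked for incremental lflow generation. */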
> +struct ed_type_lflow_generate {
> +    struct hmap generic_lswitch_lflows;
> +    struct hmap generic_lrouter_lflows;
> +
> +    /* Tracked data. */
> +    bool tracked;
> +    struct hmap tracked_datapaths;
> +    struct hmap tracked_lbs;
> +};
> +
> +static void *
> +en_lflow_generate_init(struct engine_node *node OVS_UNUSED,
> +                       struct engine_arg *arg OVS_UNUSED)
> +{
> +    struct ed_type_lflow_generate *data = xzalloc(sizeof *data);
> +
> +    hmap_init(&data->generic_lswitch_lflows);
> +    hmap_init(&data->generic_lrouter_lflows);
> +
> +    build_lswitch_generic_lflows(&data->generic_lswitch_lflows);
> +    build_lrouter_generic_lflows(&data->generic_lrouter_lflows);
> +
> +    hmap_init(&data->tracked_datapaths);
> +    hmap_init(&data->tracked_lbs);
> +    return data;
> +}
> +
> +static void
> +en_lflow_generate_cleanup(void *data)
> +{
> +    struct ed_type_lflow_generate *lflow_generate_data = data;
> +
> +    ovn_ctrl_lflows_destroy(&lflow_generate_data->generic_lswitch_lflows);
> +    ovn_ctrl_lflows_destroy(&lflow_generate_data->generic_lrouter_lflows);
> +}
> +
> +static void
> +en_lflow_generate_clear_tracked_data(void *data)
> +{
> +    struct ed_type_lflow_generate *lflow_gen = data;
> +    lflow_gen->tracked = false;
> +
> +    tracked_datapaths_destroy(&lflow_gen->tracked_datapaths);
> +    hmap_init(&lflow_gen->tracked_datapaths);
> +
> +    tracked_lbs_destroy(&lflow_gen->tracked_lbs);
> +    hmap_init(&lflow_gen->tracked_lbs);
> +}
> +
> +static void
> +en_lflow_generate_run(struct engine_node *node,
> +                      void *data)
> +{
> +    struct ed_type_runtime_data *rt_data =
> +        engine_get_input_data("runtime_data", node);
> +
> +    struct ed_type_lflow_generate *lflow_gen = data;
> +    lflow_gen->tracked = false;
> +
> +    lflow_delete_generated_lflows(&rt_data->local_datapaths,
> +                                  &rt_data->local_load_balancers);
> +    lflow_generate_run(&rt_data->local_datapaths,
> +                       &rt_data->local_load_balancers);
> +
> +    engine_set_node_state(node, EN_UPDATED);
> +}
> +
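> +/* Incremental handler: (re)generates controller lflows only for the
> + * datapaths, lports and load balancers tracked by the
> + * 'lflow_needs_generation' input node. */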
> +static bool
> +lflow_generate_lflow_needs_generation_handler(struct engine_node *node,
> +                                             void *data)
> +{
> +    struct ed_type_lflow_needs_generation *need_lflow_gen =
> +        engine_get_input_data("lflow_needs_generation", node);
> +
> +    struct ed_type_runtime_data *rt_data =
> +        engine_get_input_data("runtime_data", node);
> +
> +    /* There is no tracked data. Fall back to full recompute of
> +     * lflow_generate. */
> +    if (!need_lflow_gen->tracked) {
> +        return false;
> +    }
> +
> +    if (hmap_is_empty(&need_lflow_gen->tracked_datapaths) &&
> +        hmap_is_empty(&need_lflow_gen->tracked_lbs)) {
> +        return true;
> +    }
> +
> +    struct ed_type_lflow_generate *lflow_gen = data;
> +    lflow_gen->tracked = true;
> +
> +    struct tracked_datapath *tdp;
> +    HMAP_FOR_EACH (tdp, node, &need_lflow_gen->tracked_datapaths) {
> +        struct local_datapath *ld =
> +            get_local_datapath(&rt_data->local_datapaths, tdp->dp->tunnel_key);
> +        ovs_assert(ld);
> +
> +        switch (tdp->tracked_type) {
> +        case TRACKED_RESOURCE_NEW:
> +            lflow_generate_datapath_flows(ld, true);
> +            tracked_datapath_add(ld->datapath, TRACKED_RESOURCE_NEW,
> +                                 &lflow_gen->tracked_datapaths);
> +            break;
> +
> +        case TRACKED_RESOURCE_UPDATED: {
> +            struct shash_node *shash_node;
> +            SHASH_FOR_EACH (shash_node, &tdp->lports) {
> +                struct tracked_lport *t_lport = shash_node->data;
> +                if (t_lport->tracked_type == TRACKED_RESOURCE_REMOVED) {
> +                    lflow_delete_generated_lport_lflows(t_lport->pb, ld);
> +                    tracked_datapath_lport_add(
> +                        t_lport->pb, TRACKED_RESOURCE_REMOVED,
> +                        &lflow_gen->tracked_datapaths);
> +                } else {
> +                    lflow_generate_lport_flows(t_lport->pb, ld);
> +                    tracked_datapath_lport_add(t_lport->pb,
> +                                               TRACKED_RESOURCE_NEW,
> +                                               &lflow_gen->tracked_datapaths);
> +                }
> +            }
> +            break;
> +        }
> +        case TRACKED_RESOURCE_REMOVED:
> +            OVS_NOT_REACHED();
> +        }
> +    }
> +
> +    struct tracked_lb *tlb;
> +    HMAP_FOR_EACH (tlb, node, &need_lflow_gen->tracked_lbs) {
> +        switch (tlb->tracked_type) {
> +        case TRACKED_RESOURCE_NEW:
> +            lflow_generate_load_balancer_lflows(tlb->local_lb);
> +            tracked_lb_add(tlb->local_lb, TRACKED_RESOURCE_NEW,
> +                           &lflow_gen->tracked_lbs);
> +            break;
> +
> +        case TRACKED_RESOURCE_UPDATED:
> +            lflow_clear_generated_lb_lflows(tlb->local_lb);
> +            lflow_generate_load_balancer_lflows(tlb->local_lb);
> +            tracked_lb_add(tlb->local_lb, TRACKED_RESOURCE_UPDATED,
> +                           &lflow_gen->tracked_lbs);
> +            break;
> +
> +        case TRACKED_RESOURCE_REMOVED:
> +            lflow_clear_generated_lb_lflows(tlb->local_lb);
> +            tracked_lb_add(tlb->local_lb, TRACKED_RESOURCE_REMOVED,
> +                           &lflow_gen->tracked_lbs);
> +            break;
> +        }
> +    }
> +
> +    if (!hmap_is_empty(&lflow_gen->tracked_datapaths) ||
> +            !hmap_is_empty(&lflow_gen->tracked_lbs)) {
> +        engine_set_node_state(node, EN_UPDATED);
> +    }
> +
> +    return true;
> +}
> +
>   static void
>   init_lflow_ctx(struct engine_node *node,
>                  struct ed_type_runtime_data *rt_data,
> @@ -2079,6 +2489,8 @@ en_lflow_output_run(struct engine_node *node, void *data)
>   {
>       struct ed_type_runtime_data *rt_data =
>           engine_get_input_data("runtime_data", node);
> +    struct ed_type_lflow_generate *lflow_gen_data =
> +        engine_get_input_data("lflow_generate", node);
>   
>       struct ovsrec_open_vswitch_table *ovs_table =
>           (struct ovsrec_open_vswitch_table *)EN_OVSDB_GET(
> @@ -2128,6 +2540,7 @@ en_lflow_output_run(struct engine_node *node, void *data)
>       struct lflow_ctx_in l_ctx_in;
>       struct lflow_ctx_out l_ctx_out;
>       init_lflow_ctx(node, rt_data, fo, &l_ctx_in, &l_ctx_out);
> +
>       lflow_run(&l_ctx_in, &l_ctx_out);
>   
>       if (l_ctx_out.conj_id_overflow) {
> @@ -2146,6 +2559,47 @@ en_lflow_output_run(struct engine_node *node, void *data)
>           }
>       }
>   
> +    /* Disable caching and lfrr for processing controller lflows. */
> +    l_ctx_out.lfrr = NULL;
> +    l_ctx_out.lflow_cache = NULL;
> +
> +    struct local_datapath *ld;
> +    HMAP_FOR_EACH (ld, hmap_node, &rt_data->local_datapaths) {
> +        lflow_process_ctrl_lflows(ld->active_lflows, ld->datapath,
> +                                  &l_ctx_in, &l_ctx_out);
> +
> +        if (ld->is_switch) {
> +            lflow_process_ctrl_lflows(&lflow_gen_data->generic_lswitch_lflows,
> +                                      ld->datapath,
> +                                      &l_ctx_in, &l_ctx_out);
> +        } else {
> +            lflow_process_ctrl_lflows(&lflow_gen_data->generic_lrouter_lflows,
> +                                      ld->datapath,
> +                                      &l_ctx_in, &l_ctx_out);
> +        }
> +
> +        struct shash_node *shash_node;
> +        SHASH_FOR_EACH (shash_node, &ld->lports) {
> +            struct local_lport *lport = shash_node->data;
> +            lflow_process_ctrl_lflows(lport->active_lflows,
> +                                      lport->pb->datapath,
> +                                      &l_ctx_in, &l_ctx_out);
> +        }
> +
> +        for (size_t i = 0; i < ld->datapath->n_load_balancers; i++) {
> +            const struct sbrec_load_balancer *slb =
> +                ld->datapath->load_balancers[i];
> +            struct local_load_balancer *local_lb =
> +                local_load_balancer_get(&rt_data->local_load_balancers,
> +                                        &slb->header_.uuid);
> +            ovs_assert(local_lb);
> +            struct hmap *lflows =
> +                ld->is_switch ? local_lb->active_lswitch_lflows :
> +                local_lb->active_lrouter_lflows;
> +            lflow_process_ctrl_lflows(lflows, ld->datapath, &l_ctx_in,
> +                                      &l_ctx_out);
> +        }
> +    }
> +
>       engine_set_node_state(node, EN_UPDATED);
>   }
>   
> @@ -2361,9 +2815,9 @@ lflow_output_runtime_data_handler(struct engine_node *node,
>       struct ed_type_lflow_output *fo = data;
>       init_lflow_ctx(node, rt_data, fo, &l_ctx_in, &l_ctx_out);
>   
> -    struct tracked_binding_datapath *tdp;
> +    struct tracked_datapath *tdp;
>       HMAP_FOR_EACH (tdp, node, tracked_dp_bindings) {
> -        if (tdp->is_new) {
> +        if (tdp->tracked_type == TRACKED_RESOURCE_NEW) {
>               if (!lflow_add_flows_for_datapath(tdp->dp, &l_ctx_in,
>                                                 &l_ctx_out)) {
>                   return false;
> @@ -2371,7 +2825,7 @@ lflow_output_runtime_data_handler(struct engine_node *node,
>           } else {
>               struct shash_node *shash_node;
>               SHASH_FOR_EACH (shash_node, &tdp->lports) {
> -                struct tracked_binding_lport *lport = shash_node->data;
> +                struct tracked_lport *lport = shash_node->data;
>                   if (!lflow_handle_flows_for_lport(lport->pb, &l_ctx_in,
>                                                     &l_ctx_out)) {
>                       return false;
> @@ -2418,6 +2872,135 @@ lflow_output_sb_fdb_handler(struct engine_node *node, void *data)
>       return handled;
>   }
>   
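> +/* Incremental handler for 'lflow_generate' changes: removes the OpenFlow
> + * flows for any cleared controller lflows and reprocesses the active ones
> + * for the tracked datapaths, lports and load balancers. */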
> +static bool
> +lflow_output_lflow_generate_handler(struct engine_node *node, void *data)
> +{
> +    struct ed_type_lflow_generate *lflow_gen_data =
> +        engine_get_input_data("lflow_generate", node);
> +
> +    if (!lflow_gen_data->tracked) {
> +        return false;
> +    }
> +
> +    struct ed_type_runtime_data *rt_data =
> +        engine_get_input_data("runtime_data", node);
> +    struct ed_type_lflow_output *fo = data;
> +
> +    struct lflow_ctx_in l_ctx_in;
> +    struct lflow_ctx_out l_ctx_out;
> +    init_lflow_ctx(node, rt_data, fo, &l_ctx_in, &l_ctx_out);
> +
> +    /* Disable caching and lfrr for processing controller lflows. */
> +    l_ctx_out.lfrr = NULL;
> +    l_ctx_out.lflow_cache = NULL;
> +
> +    struct tracked_datapath *tdp;
> +    HMAP_FOR_EACH (tdp, node, &lflow_gen_data->tracked_datapaths) {
> +        struct local_datapath *ldp =
> +            get_local_datapath(&rt_data->local_datapaths, tdp->dp->tunnel_key);
> +        ovs_assert(ldp);
> +
> +        /* Right now this cannot happen.  When a local datapath is removed,
> +         * it should result in full recompute. */
> +        ovs_assert(tdp->tracked_type != TRACKED_RESOURCE_REMOVED);
> +
> +        if (!hmap_is_empty(ldp->cleared_lflows)) {
> +            lflow_remove_ctrl_lflows(ldp->cleared_lflows, &fo->flow_table);
> +            ovn_ctrl_lflows_clear(ldp->cleared_lflows);
> +        }
> +
> +        if (tdp->tracked_type == TRACKED_RESOURCE_NEW) {
> +            lflow_process_ctrl_lflows(ldp->active_lflows, ldp->datapath,
> +                                      &l_ctx_in, &l_ctx_out);
> +
> +            if (ldp->is_switch) {
> +                lflow_process_ctrl_lflows(
> +                    &lflow_gen_data->generic_lswitch_lflows,
> +                    ldp->datapath, &l_ctx_in, &l_ctx_out);
> +            } else {
> +                lflow_process_ctrl_lflows(
> +                    &lflow_gen_data->generic_lrouter_lflows,
> +                    ldp->datapath, &l_ctx_in, &l_ctx_out);
> +            }
> +
> +            struct shash_node *shash_node;
> +            SHASH_FOR_EACH (shash_node, &ldp->lports) {
> +                struct local_lport *dp_lport = shash_node->data;
> +                if (!hmap_is_empty(dp_lport->cleared_lflows)) {
> +                    lflow_remove_ctrl_lflows(dp_lport->cleared_lflows,
> +                                             &fo->flow_table);
> +                    ovn_ctrl_lflows_clear(dp_lport->cleared_lflows);
> +                }
> +                lflow_process_ctrl_lflows(dp_lport->active_lflows,
> +                                          dp_lport->pb->datapath,
> +                                          &l_ctx_in, &l_ctx_out);
> +            }
> +        } else {
> +            struct shash_node *shash_node;
> +            SHASH_FOR_EACH (shash_node, &tdp->lports) {
> +                struct tracked_lport *t_lport = shash_node->data;
> +                struct local_lport *dp_lport =
> +                    local_datapath_get_lport(ldp, t_lport->pb->logical_port);
> +                ovs_assert(dp_lport);
> +                if (!hmap_is_empty(dp_lport->cleared_lflows)) {
> +                    lflow_remove_ctrl_lflows(dp_lport->cleared_lflows,
> +                                             &fo->flow_table);
> +                    ovn_ctrl_lflows_clear(dp_lport->cleared_lflows);
> +                }
> +
> +                if (t_lport->tracked_type == TRACKED_RESOURCE_REMOVED) {
> +                    local_datapath_remove_lport(ldp,
> +                                                dp_lport->pb->logical_port);
> +                } else {
> +                    lflow_process_ctrl_lflows(dp_lport->active_lflows,
> +                                              dp_lport->pb->datapath,
> +                                              &l_ctx_in, &l_ctx_out);
> +                }
> +            }
> +        }
> +    }
> +
> +    struct tracked_lb *tlb;
> +    HMAP_FOR_EACH (tlb, node, &lflow_gen_data->tracked_lbs) {
> +        if (!hmap_is_empty(tlb->local_lb->cleared_lswitch_lflows)) {
> +            lflow_remove_ctrl_lflows(
> +                tlb->local_lb->cleared_lswitch_lflows, &fo->flow_table);
> +            ovn_ctrl_lflows_clear(tlb->local_lb->cleared_lswitch_lflows);
> +        }
> +
> +        if (!hmap_is_empty(tlb->local_lb->cleared_lrouter_lflows)) {
> +            lflow_remove_ctrl_lflows(
> +                tlb->local_lb->cleared_lrouter_lflows, &fo->flow_table);
> +            ovn_ctrl_lflows_clear(tlb->local_lb->cleared_lrouter_lflows);
> +        }
> +
> +        if (tlb->tracked_type == TRACKED_RESOURCE_REMOVED) {
> +            local_load_balancer_remove(&rt_data->local_load_balancers,
> +                                       tlb->local_lb);
> +        } else {
> +            const struct sbrec_load_balancer *slb = tlb->local_lb->ovn_lb->slb;
> +
> +            for (size_t i = 0; i < slb->n_datapaths; i++) {
> +                struct local_datapath *ldp =
> +                    get_local_datapath(&rt_data->local_datapaths,
> +                                       slb->datapaths[i]->tunnel_key);
> +                if (!ldp) {
> +                    continue;
> +                }
> +
> +                struct hmap *lflows =
> +                    ldp->is_switch ? tlb->local_lb->active_lswitch_lflows :
> +                    tlb->local_lb->active_lrouter_lflows;
> +                lflow_process_ctrl_lflows(lflows, ldp->datapath, &l_ctx_in,
> +                                          &l_ctx_out);
> +            }
> +        }
> +    }
> +
> +    engine_set_node_state(node, EN_UPDATED);
> +    return true;
> +}
> +
>   struct ed_type_pflow_output {
>       /* Desired physical flows. */
>       struct ovn_desired_flow_table flow_table;
> @@ -2838,6 +3421,9 @@ main(int argc, char *argv[])
>       ENGINE_NODE(flow_output, "flow_output");
>       ENGINE_NODE(addr_sets, "addr_sets");
>       ENGINE_NODE_WITH_CLEAR_TRACK_DATA(port_groups, "port_groups");
> +    ENGINE_NODE_WITH_CLEAR_TRACK_DATA(lflow_needs_generation,
> +                                      "lflow_needs_generation");
> +    ENGINE_NODE_WITH_CLEAR_TRACK_DATA(lflow_generate, "lflow_generate");
>   
>   #define SB_NODE(NAME, NAME_STR) ENGINE_NODE_SB(NAME, NAME_STR);
>       SB_NODES
> @@ -2883,6 +3469,8 @@ main(int argc, char *argv[])
>                        lflow_output_addr_sets_handler);
>       engine_add_input(&en_lflow_output, &en_port_groups,
>                        lflow_output_port_groups_handler);
> +    engine_add_input(&en_lflow_output, &en_lflow_generate,
> +                     lflow_output_lflow_generate_handler);
>       engine_add_input(&en_lflow_output, &en_runtime_data,
>                        lflow_output_runtime_data_handler);
>   
> @@ -2927,6 +3515,20 @@ main(int argc, char *argv[])
>       engine_add_input(&en_ct_zones, &en_runtime_data,
>                        ct_zones_runtime_data_handler);
>   
> +    engine_add_input(&en_lflow_needs_generation, &en_runtime_data,
> +                     lflow_needs_generation_runtime_data_handler);
> +    engine_add_input(&en_lflow_needs_generation, &en_sb_datapath_binding,
> +                     lflow_needs_generation_datapath_binding_handler);
> +    engine_add_input(&en_lflow_needs_generation, &en_sb_port_binding,
> +                     lflow_needs_generation_port_binding_handler);
> +    engine_add_input(&en_lflow_needs_generation, &en_sb_load_balancer,
> +                     lflow_needs_generation_load_balancer_handler);
> +
> +    engine_add_input(&en_lflow_generate, &en_lflow_needs_generation,
> +                     lflow_generate_lflow_needs_generation_handler);
> +    engine_add_input(&en_lflow_generate, &en_runtime_data,
> +                     engine_noop_handler);
> +
>       engine_add_input(&en_runtime_data, &en_ofctrl_is_connected, NULL);
>   
>       engine_add_input(&en_runtime_data, &en_ovs_open_vswitch, NULL);
> diff --git a/controller/ovn-controller.h b/controller/ovn-controller.h
> index 5d9466880b..172268f972 100644
> --- a/controller/ovn-controller.h
> +++ b/controller/ovn-controller.h
> @@ -17,6 +17,8 @@
>   #ifndef OVN_CONTROLLER_H
>   #define OVN_CONTROLLER_H 1
>   
> +#include "ofctrl.h"
> +
>   #include "simap.h"
>   #include "lib/ovn-sb-idl.h"
>   
> @@ -40,38 +42,6 @@ struct ct_zone_pending_entry {
>       enum ct_zone_pending_state state;
>   };
>   
> -/* A logical datapath that has some relevance to this hypervisor.  A logical
> - * datapath D is relevant to hypervisor H if:
> - *
> - *     - Some VIF or l2gateway or l3gateway port in D is located on H.
> - *
> - *     - D is reachable over a series of hops across patch ports, starting from
> - *       a datapath relevant to H.
> - *
> - * The 'hmap_node''s hash value is 'datapath->tunnel_key'. */
> -struct local_datapath {
> -    struct hmap_node hmap_node;
> -    const struct sbrec_datapath_binding *datapath;
> -
> -    /* The localnet port in this datapath, if any (at most one is allowed). */
> -    const struct sbrec_port_binding *localnet_port;
> -
> -    /* True if this datapath contains an l3gateway port located on this
> -     * hypervisor. */
> -    bool has_local_l3gateway;
> -
> -    struct {
> -        const struct sbrec_port_binding *local;
> -        const struct sbrec_port_binding *remote;
> -    } *peer_ports;
> -
> -    size_t n_peer_ports;
> -    size_t n_allocated_peer_ports;
> -};
> -
> -struct local_datapath *get_local_datapath(const struct hmap *,
> -                                          uint32_t tunnel_key);
> -
>   const struct ovsrec_bridge *get_bridge(const struct ovsrec_bridge_table *,
>                                          const char *br_name);
>   
> diff --git a/controller/patch.c b/controller/patch.c
> index e54b56354b..99a095c577 100644
> --- a/controller/patch.c
> +++ b/controller/patch.c
> @@ -18,6 +18,7 @@
>   #include "patch.h"
>   
>   #include "hash.h"
> +#include "ldata.h"
>   #include "lflow.h"
>   #include "lib/vswitch-idl.h"
>   #include "lport.h"
> diff --git a/controller/physical.c b/controller/physical.c
> index 17ca5afbbd..a029b32f8f 100644
> --- a/controller/physical.c
> +++ b/controller/physical.c
> @@ -13,39 +13,47 @@
>    * limitations under the License.
>    */
>   
> +/* OVS includes. */
>   #include <config.h>
>   #include "binding.h"
>   #include "coverage.h"
>   #include "byte-order.h"
> -#include "encaps.h"
>   #include "flow.h"
> -#include "ha-chassis.h"
> -#include "lflow.h"
> -#include "lport.h"
> -#include "chassis.h"
> +#include "include/openvswitch/poll-loop.h"
> +#include "include/openvswitch/list.h"
> +#include "include/openvswitch/hmap.h"
> +#include "include/openvswitch/match.h"
> +#include "include/openvswitch/ofp-actions.h"
> +#include "include/openvswitch/ofpbuf.h"
> +#include "include/openvswitch/ofp-parse.h"
> +#include "include/openvswitch/shash.h"
> +#include "include/openvswitch/vlog.h"
>   #include "lib/bundle.h"
> -#include "openvswitch/poll-loop.h"
> +#include "lib/hmapx.h"
>   #include "lib/uuid.h"
> -#include "ofctrl.h"
> -#include "openvswitch/list.h"
> -#include "openvswitch/hmap.h"
> -#include "openvswitch/match.h"
> -#include "openvswitch/ofp-actions.h"
> -#include "openvswitch/ofpbuf.h"
> -#include "openvswitch/vlog.h"
> -#include "openvswitch/ofp-parse.h"
> -#include "ovn-controller.h"
> +#include "lib/simap.h"
> +#include "lib/smap.h"
> +#include "lib/sset.h"
> +#include "lib/util.h"
> +#include "vswitch-idl.h"
> +
> +/* OVN includes. */
> +#include "binding.h"
> +#include "chassis.h"
> +#include "encaps.h"
> +#include "ha-chassis.h"
> +#include "ldata.h"
> +#include "lflow.h"
>   #include "lib/chassis-index.h"
>   #include "lib/ovn-sb-idl.h"
>   #include "lib/ovn-util.h"
> +#include "lport.h"
> +#include "ofctrl.h"
> +#include "ovn-controller.h"
>   #include "physical.h"
> -#include "openvswitch/shash.h"
> -#include "simap.h"
> -#include "smap.h"
> -#include "sset.h"
> -#include "util.h"
> -#include "vswitch-idl.h"
> -#include "hmapx.h"
>   
>   VLOG_DEFINE_THIS_MODULE(physical);
>   
> @@ -270,7 +278,7 @@ put_remote_port_redirect_bridged(const struct
>           uint32_t ls_dp_key = 0;
>           for (int i = 0; i < ld->n_peer_ports; i++) {
>               const struct sbrec_port_binding *sport_binding =
> -                ld->peer_ports[i].remote;
> +                ld->peer_ports[i].remote->pb;
>               const char *sport_peer_name =
>                   smap_get(&sport_binding->options, "peer");
>               const char *distributed_port =
> @@ -545,7 +553,7 @@ put_replace_chassis_mac_flows(const struct simap *ct_zones,
>   
>       for (int i = 0; i < ld->n_peer_ports; i++) {
>           const struct sbrec_port_binding *rport_binding =
> -            ld->peer_ports[i].remote;
> +            ld->peer_ports[i].remote->pb;
>           struct eth_addr router_port_mac;
>           char *err_str = NULL;
>           struct match match;
> @@ -683,7 +691,7 @@ put_replace_router_port_mac_flows(struct ovsdb_idl_index
>   
>       for (int i = 0; i < ld->n_peer_ports; i++) {
>           const struct sbrec_port_binding *rport_binding =
> -            ld->peer_ports[i].remote;
> +            ld->peer_ports[i].remote->pb;
>           struct eth_addr router_port_mac;
>           struct match match;
>           struct ofpact_mac *replace_mac;
> diff --git a/controller/pinctrl.c b/controller/pinctrl.c
> index 78ecfed840..e31b1a9c39 100644
> --- a/controller/pinctrl.c
> +++ b/controller/pinctrl.c
> @@ -50,6 +50,7 @@
>   #include "lib/mcast-group-index.h"
>   #include "lib/ovn-l7.h"
>   #include "lib/ovn-util.h"
> +#include "lib/ldata.h"
>   #include "ovn/logical-fields.h"
>   #include "openvswitch/poll-loop.h"
>   #include "openvswitch/rconn.h"
> @@ -1184,7 +1185,7 @@ fill_ipv6_prefix_state(struct ovsdb_idl_txn *ovnsb_idl_txn,
>       bool changed = false;
>   
>       for (size_t i = 0; i < ld->n_peer_ports; i++) {
> -        const struct sbrec_port_binding *pb = ld->peer_ports[i].local;
> +        const struct sbrec_port_binding *pb = ld->peer_ports[i].local->pb;
>           struct ipv6_prefixd_state *pfd;
>   
>           if (!smap_get_bool(&pb->options, "ipv6_prefix", false)) {
> @@ -1264,7 +1265,7 @@ prepare_ipv6_prefixd(struct ovsdb_idl_txn *ovnsb_idl_txn,
>           }
>   
>           for (size_t i = 0; i < ld->n_peer_ports; i++) {
> -            const struct sbrec_port_binding *pb = ld->peer_ports[i].local;
> +            const struct sbrec_port_binding *pb = ld->peer_ports[i].local->pb;
>               int j;
>   
>               if (!smap_get_bool(&pb->options, "ipv6_prefix_delegation",
> @@ -3895,10 +3896,11 @@ prepare_ipv6_ras(const struct hmap *local_datapaths)
>       HMAP_FOR_EACH (ld, hmap_node, local_datapaths) {
>   
>           for (size_t i = 0; i < ld->n_peer_ports; i++) {
> -            const struct sbrec_port_binding *peer = ld->peer_ports[i].remote;
> -            const struct sbrec_port_binding *pb = ld->peer_ports[i].local;
> +            const struct sbrec_port_binding *peer = ld->peer_ports[i].remote->pb;
> +            const struct sbrec_port_binding *pb = ld->peer_ports[i].local->pb;
>   
> -            if (!smap_get_bool(&pb->options, "ipv6_ra_send_periodic", false)) {
> +            if (!smap_get_bool(&pb->options, "ipv6_ra_send_periodic",
> +                               false)) {
>                   continue;
>               }
>   
> @@ -4132,8 +4134,8 @@ send_garp_locally(struct ovsdb_idl_txn *ovnsb_idl_txn,
>   
>       ovs_assert(ldp);
>       for (size_t i = 0; i < ldp->n_peer_ports; i++) {
> -        const struct sbrec_port_binding *local = ldp->peer_ports[i].local;
> -        const struct sbrec_port_binding *remote = ldp->peer_ports[i].remote;
> +        const struct sbrec_port_binding *local = ldp->peer_ports[i].local->pb;
> +        const struct sbrec_port_binding *remote = ldp->peer_ports[i].remote->pb;
>   
>           /* Skip "ingress" port. */
>           if (local == in_pb) {
> @@ -4228,7 +4230,7 @@ run_buffered_binding(struct ovsdb_idl_index *sbrec_mac_binding_by_lport_ip,
>   
>           for (size_t i = 0; i < ld->n_peer_ports; i++) {
>   
> -            const struct sbrec_port_binding *pb = ld->peer_ports[i].local;
> +            const struct sbrec_port_binding *pb = ld->peer_ports[i].local->pb;
>               struct buffered_packets *cur_qp, *next_qp;
>               HMAP_FOR_EACH_SAFE (cur_qp, next_qp, hmap_node,
>                                   &buffered_packets_map) {
> diff --git a/lib/automake.mk b/lib/automake.mk
> index 917b28e1ed..6d26f78383 100644
> --- a/lib/automake.mk
> +++ b/lib/automake.mk
> @@ -29,7 +29,11 @@ lib_libovn_la_SOURCES = \
>   	lib/inc-proc-eng.c \
>   	lib/inc-proc-eng.h \
>   	lib/lb.c \
> -	lib/lb.h
> +	lib/lb.h \
> +	lib/lflow.c \
> +	lib/lflow.h \
> +	lib/ldata.c \
> +	lib/ldata.h
>   nodist_lib_libovn_la_SOURCES = \
>   	lib/ovn-dirs.c \
>   	lib/ovn-nb-idl.c \
> diff --git a/lib/lb.c b/lib/lb.c
> index 4cb46b346c..942e665049 100644
> --- a/lib/lb.c
> +++ b/lib/lb.c
> @@ -48,6 +48,7 @@ bool ovn_lb_vip_init(struct ovn_lb_vip *lb_vip, const char *lb_key,
>       /* Format for backend ips: "IP1:port1,IP2:port2,...". */
>       size_t n_backends = 0;
>       size_t n_allocated_backends = 0;
> +    lb_vip->backend_ips = xstrdup(lb_value);
>       char *tokstr = xstrdup(lb_value);
>       char *save_ptr = NULL;
>       for (char *token = strtok_r(tokstr, ",", &save_ptr);
> @@ -95,6 +96,7 @@ void ovn_lb_vip_destroy(struct ovn_lb_vip *vip)
>           free(vip->backends[i].ip_str);
>       }
>       free(vip->backends);
> +    free(vip->backend_ips);
>   }
>   
>   static
> @@ -276,6 +278,8 @@ ovn_controller_lb_create(const struct sbrec_load_balancer *sbrec_lb)
>       SMAP_FOR_EACH (node, &sbrec_lb->vips) {
>           struct ovn_lb_vip *lb_vip = &lb->vips[n_vips];
>   
> +        lb_vip->empty_backend_rej = smap_get_bool(&sbrec_lb->options,
> +                                                  "reject", false);
>           if (!ovn_lb_vip_init(lb_vip, node->key, node->value)) {
>               continue;
>           }
> @@ -292,6 +296,28 @@ ovn_controller_lb_create(const struct sbrec_load_balancer *sbrec_lb)
>                                              false);
>       ovn_lb_get_hairpin_snat_ip(&sbrec_lb->header_.uuid, &sbrec_lb->options,
>                                  &lb->hairpin_snat_ips);
> +
> +    if (lb->slb->n_selection_fields) {
> +        char *proto = NULL;
> +        if (sbrec_lb->protocol && sbrec_lb->protocol[0]) {
> +            proto = sbrec_lb->protocol;
> +        }
> +
> +        struct ds sel_fields = DS_EMPTY_INITIALIZER;
> +        for (size_t i = 0; i < lb->slb->n_selection_fields; i++) {
> +            char *field = lb->slb->selection_fields[i];
> +            if (!strcmp(field, "tp_src") && proto) {
> +                ds_put_format(&sel_fields, "%s_src,", proto);
> +            } else if (!strcmp(field, "tp_dst") && proto) {
> +                ds_put_format(&sel_fields, "%s_dst,", proto);
> +            } else {
> +                ds_put_format(&sel_fields, "%s,", field);
> +            }
> +        }
> +        ds_chomp(&sel_fields, ',');
> +        lb->selection_fields = ds_steal_cstr(&sel_fields);
> +    }
> +
>       return lb;
>   }
>   
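
For what it's worth, tracing the selection_fields handling above: an SB
load balancer with protocol "tcp" and selection_fields ["ip_src",
"ip_dst", "tp_src", "tp_dst"] ends up with lb->selection_fields set to
"ip_src,ip_dst,tcp_src,tcp_dst".  I assume this later becomes a ct_lb
hash_fields argument, mirroring what ovn-northd does today.
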
> @@ -303,5 +329,6 @@ ovn_controller_lb_destroy(struct ovn_controller_lb *lb)
>       }
>       free(lb->vips);
>       destroy_lport_addresses(&lb->hairpin_snat_ips);
> +    free(lb->selection_fields);
>       free(lb);
>   }
> diff --git a/lib/lb.h b/lib/lb.h
> index 58e6bb031b..64ef4a5197 100644
> --- a/lib/lb.h
> +++ b/lib/lb.h
> @@ -55,6 +55,7 @@ struct ovn_lb_vip {
>       struct ovn_lb_backend *backends;
>       size_t n_backends;
>       bool empty_backend_rej;
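> +    /* Raw backend string as listed in the SB Load_Balancer vip value,
> +     * e.g. "IP1:port1,IP2:port2,...". */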
> +    char *backend_ips;
>   };
>   
>   struct ovn_lb_backend {
> @@ -99,6 +100,7 @@ struct ovn_controller_lb {
>                                                 * as source for hairpinned
>                                                 * traffic.
>                                                 */
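> +    /* Comma-separated selection fields, with tp_src/tp_dst expanded to the
> +     * load balancer's protocol, e.g. "ip_src,ip_dst,tcp_src,tcp_dst". */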
> +    char *selection_fields;
>   };
>   
>   struct ovn_controller_lb *ovn_controller_lb_create(
> diff --git a/lib/ldata.c b/lib/ldata.c
> new file mode 100644
> index 0000000000..992f5f2e45
> --- /dev/null
> +++ b/lib/ldata.c
> @@ -0,0 +1,895 @@
> +/* Copyright (c) 2021, Red Hat, Inc.
> + *
> + * Licensed under the Apache License, Version 2.0 (the "License");
> + * you may not use this file except in compliance with the License.
> + * You may obtain a copy of the License at:
> + *
> + *     http://www.apache.org/licenses/LICENSE-2.0
> + *
> + * Unless required by applicable law or agreed to in writing, software
> + * distributed under the License is distributed on an "AS IS" BASIS,
> + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> + * See the License for the specific language governing permissions and
> + * limitations under the License.
> + */
> +
> +#include <config.h>
> +
> +/* OVS includes. */
> +#include "include/openvswitch/json.h"
> +#include "lib/hmapx.h"
> +#include "lib/util.h"
> +#include "openvswitch/vlog.h"
> +
> +/* OVN includes. */
> +#include "ldata.h"
> +#include "lib/ovn-util.h"
> +#include "lib/ovn-sb-idl.h"
> +#include "lib/lflow.h"
> +#include "lib/lb.h"
> +
> +VLOG_DEFINE_THIS_MODULE(ldata);
> +
> +static struct local_datapath *local_datapath_add__(
> +    struct hmap *local_datapaths,
> +    const struct sbrec_datapath_binding *,
> +    struct ovsdb_idl_index *sbrec_datapath_binding_by_key,
> +    struct ovsdb_idl_index *sbrec_port_binding_by_datapath,
> +    struct ovsdb_idl_index *sbrec_port_binding_by_name,
> +    int depth,
> +    void (*datapath_added)(struct local_datapath *,
> +                           void *aux),
> +    void *aux);
> +
> +static void local_lport_init_cache(struct local_lport *);
> +static void local_lport_update_lsp_data(struct local_lport *);
> +static void local_lport_update_lrp_data(struct local_lport *);
> +static void local_lport_destroy_lsp_data(struct local_lport *);
> +static void local_lport_destroy_lrp_data(struct local_lport *);
> +static void local_lport_init_lflow_gen_data(struct local_lport *);
> +static void local_lport_destroy_lflow_gen_data(struct local_lport *);
> +
> +static struct tracked_datapath *tracked_datapath_create(
> +    const struct sbrec_datapath_binding *dp,
> +    enum en_tracked_resource_type tracked_type,
> +    struct hmap *tracked_datapaths);
> +
> +static struct local_load_balancer *local_load_balancer_add__(
> +    struct hmap *local_lbs, const struct sbrec_load_balancer *);
> +static void local_load_balancer_destroy(struct local_load_balancer *);
> +
> +struct local_datapath *
> +get_local_datapath(const struct hmap *local_datapaths, uint32_t tunnel_key)
> +{
> +    struct hmap_node *node = hmap_first_with_hash(local_datapaths, tunnel_key);
> +    return (node
> +            ? CONTAINER_OF(node, struct local_datapath, hmap_node)
> +            : NULL);
> +}
> +
> +struct local_datapath *
> +local_datapath_alloc(const struct sbrec_datapath_binding *dp)
> +{
> +    struct local_datapath *ld = xzalloc(sizeof *ld);
> +    ld->datapath = dp;
> +    ld->is_switch = datapath_is_switch(dp);
> +    hmap_init(&ld->ctrl_lflows[0]);
> +    hmap_init(&ld->ctrl_lflows[1]);
> +    ld->active_lflows = &ld->ctrl_lflows[0];
> +    ld->cleared_lflows = &ld->ctrl_lflows[1];
> +    shash_init(&ld->lports);
> +    smap_clone(&ld->dp_options, &dp->options);
> +    return ld;
> +}
> +
> +void
> +local_datapaths_destroy(struct hmap *local_datapaths)
> +{
> +    struct local_datapath *ld;
> +    HMAP_FOR_EACH_POP (ld, hmap_node, local_datapaths) {
> +        local_datapath_destroy(ld);
> +    }
> +
> +    hmap_destroy(local_datapaths);
> +}
> +
> +void
> +local_datapath_destroy(struct local_datapath *ld)
> +{
> +    ovn_ctrl_lflows_destroy(&ld->ctrl_lflows[0]);
> +    ovn_ctrl_lflows_destroy(&ld->ctrl_lflows[1]);
> +
> +    struct shash_node *node, *next;
> +    SHASH_FOR_EACH_SAFE (node, next, &ld->lports) {
> +        hmap_remove(&ld->lports.map, &node->node);
> +        local_lport_destroy(node->data);
> +        free(node->name);
> +        free(node);
> +    }
> +
> +    hmap_destroy(&ld->lports.map);
> +    free(ld->peer_ports);
> +    smap_destroy(&ld->dp_options);
> +    free(ld);
> +}
> +
> +void
> +local_datapath_add(struct hmap *local_datapaths,
> +                   const struct sbrec_datapath_binding *dp,
> +                   struct ovsdb_idl_index *sbrec_datapath_binding_by_key,
> +                   struct ovsdb_idl_index *sbrec_port_binding_by_datapath,
> +                   struct ovsdb_idl_index *sbrec_port_binding_by_name,
> +                   void (*datapath_added_cb)(
> +                         struct local_datapath *ld,
> +                         void *aux),
> +                   void *aux)
> +{
> +    local_datapath_add__(local_datapaths, dp, sbrec_datapath_binding_by_key,
> +                         sbrec_port_binding_by_datapath,
> +                         sbrec_port_binding_by_name, 0,
> +                         datapath_added_cb, aux);
> +}
> +
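> +/* Swaps 'active_lflows' and 'cleared_lflows' so that the next round of
> + * logical flow generation starts from an empty active map.  The previously
> + * active flows remain reachable through 'cleared_lflows'. */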
> +void
> +local_datapath_switch_lflow_map(struct local_datapath *ldp)
> +{
> +    struct hmap *temp = ldp->active_lflows;
> +    ldp->active_lflows = ldp->cleared_lflows;
> +    ldp->cleared_lflows = temp;
> +
> +    /* Make sure that the new active lflow map is empty. */
> +    ovs_assert(hmap_is_empty(ldp->active_lflows));
> +}
> +
> +void
> +local_datapath_add_or_update_peer_port(
> +    const struct sbrec_port_binding *pb,
> +    struct ovsdb_idl_index *sbrec_datapath_binding_by_key,
> +    struct ovsdb_idl_index *sbrec_port_binding_by_datapath,
> +    struct ovsdb_idl_index *sbrec_port_binding_by_name,
> +    struct local_datapath *ld,
> +    struct hmap *local_datapaths,
> +    void (*datapath_added_cb)(
> +                         struct local_datapath *ld,
> +                         void *aux),
> +    void *aux)
> +{
> +    struct local_lport *lport = local_datapath_get_lport(ld, pb->logical_port);
> +    ovs_assert(lport);
> +
> +    const struct sbrec_port_binding *peer;
> +    peer = lport_get_peer(pb, sbrec_port_binding_by_name);
> +
> +    if (!peer) {
> +        if (lport->peer) {
> +            /* The peer is gone.  Remove the lport and its old peer
> +             * from the local datapath's peer ports. */
> +            local_datapath_remove_peer_port(pb, ld, local_datapaths);
> +        }
> +        return;
> +    }
> +
> +    if (lport->peer && lport->peer->pb != peer) {
> +        /* The peer port is updated. Remove the old one. */
> +        local_datapath_remove_peer_port(pb, ld, local_datapaths);
> +    }
> +
> +    struct local_datapath *peer_ld =
> +        get_local_datapath(local_datapaths, peer->datapath->tunnel_key);
> +    if (!peer_ld) {
> +        peer_ld = local_datapath_add__(local_datapaths, peer->datapath,
> +                                       sbrec_datapath_binding_by_key,
> +                                       sbrec_port_binding_by_datapath,
> +                                       sbrec_port_binding_by_name, 1,
> +                                       datapath_added_cb, aux);
> +    }
> +
> +    struct local_lport *peer_lport =
> +        local_datapath_get_lport(peer_ld, peer->logical_port);
> +
> +    if (!peer_lport) {
> +        return;
> +    }
> +
> +    bool present = false;
> +    for (size_t i = 0; i < ld->n_peer_ports; i++) {
> +        if (ld->peer_ports[i].local == lport) {
> +            present = true;
> +            break;
> +        }
> +    }
> +
> +    if (!present) {
> +        ld->n_peer_ports++;
> +        if (ld->n_peer_ports > ld->n_allocated_peer_ports) {
> +            ld->peer_ports =
> +                x2nrealloc(ld->peer_ports,
> +                           &ld->n_allocated_peer_ports,
> +                           sizeof *ld->peer_ports);
> +        }
> +        ld->peer_ports[ld->n_peer_ports - 1].local = lport;
> +        ld->peer_ports[ld->n_peer_ports - 1].remote = peer_lport;
> +    }
> +
> +    lport->peer = peer_lport;
> +    peer_lport->peer = lport;
> +
> +    for (size_t i = 0; i < peer_ld->n_peer_ports; i++) {
> +        if (peer_ld->peer_ports[i].local == peer_lport) {
> +            return;
> +        }
> +    }
> +
> +    peer_ld->n_peer_ports++;
> +    if (peer_ld->n_peer_ports > peer_ld->n_allocated_peer_ports) {
> +        peer_ld->peer_ports =
> +            x2nrealloc(peer_ld->peer_ports,
> +                        &peer_ld->n_allocated_peer_ports,
> +                        sizeof *peer_ld->peer_ports);
> +    }
> +    peer_ld->peer_ports[peer_ld->n_peer_ports - 1].local = peer_lport;
> +    peer_ld->peer_ports[peer_ld->n_peer_ports - 1].remote = lport;
> +}
> +
> +void
> +local_datapath_remove_peer_port(const struct sbrec_port_binding *pb,
> +                                struct local_datapath *ld,
> +                                struct hmap *local_datapaths)
> +{
> +    struct local_lport *lport = local_datapath_get_lport(ld, pb->logical_port);
> +    if (!lport) {
> +        return;
> +    }
> +
> +    size_t i = 0;
> +    for (i = 0; i < ld->n_peer_ports; i++) {
> +        if (ld->peer_ports[i].local == lport) {
> +            break;
> +        }
> +    }
> +
> +    if (i == ld->n_peer_ports) {
> +        return;
> +    }
> +
> +    struct local_lport *peer = ld->peer_ports[i].remote;
> +
> +    /* Possible improvement: We can shrink the allocated peer ports
> +     * if (ld->n_peer_ports < ld->n_allocated_peer_ports / 2).
> +     */
> +    ld->peer_ports[i].local = ld->peer_ports[ld->n_peer_ports - 1].local;
> +    ld->peer_ports[i].remote = ld->peer_ports[ld->n_peer_ports - 1].remote;
> +    ld->n_peer_ports--;
> +
> +    struct local_datapath *peer_ld = peer->ldp;
> +    if (peer_ld) {
> +        /* Remove the peer port from the peer datapath. The peer
> +         * datapath also tries to remove its peer lport, but that would
> +         * be a no-op. */
> +        local_datapath_remove_peer_port(peer->pb, peer_ld, local_datapaths);
> +    }
> +
> +    if (lport->peer) {
> +        lport->peer->peer = NULL;
> +    }
> +    lport->peer = NULL;
> +}
> +
> +struct local_lport *
> +local_datapath_add_lport(struct local_datapath *ld,
> +                         const char *lport_name,
> +                         const struct sbrec_port_binding *pb)
> +{
> +    struct local_lport *dp_lport = local_datapath_get_lport(ld, lport_name);
> +    if (!dp_lport) {
> +        dp_lport = xzalloc(sizeof *dp_lport);
> +        dp_lport->pb = pb;
> +
> +        hmap_init(&dp_lport->ctrl_lflows[0]);
> +        hmap_init(&dp_lport->ctrl_lflows[1]);
> +        dp_lport->active_lflows = &dp_lport->ctrl_lflows[0];
> +        dp_lport->cleared_lflows = &dp_lport->ctrl_lflows[1];
> +        smap_init(&dp_lport->options);
> +        shash_add(&ld->lports, lport_name, dp_lport);
> +        dp_lport->ldp = ld;
> +        dp_lport->type = get_lport_type(pb);
> +        local_lport_init_cache(dp_lport);
> +    }
> +
> +    return dp_lport;
> +}
> +
> +struct local_lport *
> +local_datapath_get_lport(struct local_datapath *ld, const char *lport_name)
> +{
> +    struct shash_node *node = shash_find(&ld->lports, lport_name);
> +    return node ? node->data : NULL;
> +}
> +
> +void
> +local_datapath_remove_lport(struct local_datapath *ld, const char *lport_name)
> +{
> +    struct local_lport *dp_lport = shash_find_and_delete(&ld->lports,
> +                                                         lport_name);
> +    if (dp_lport) {
> +        local_lport_destroy(dp_lport);
> +    }
> +}
> +
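> +/* Rebuilds the cached Port_Binding data for 'lport' if the underlying row
> + * has changed since the cache was built.  Returns true if the cache was
> + * refreshed, false if it was already up to date. */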
> +bool
> +local_lport_update_cache(struct local_lport *lport)
> +{
> +    if (local_lport_is_cache_old(lport)) {
> +        local_lport_clear_cache(lport);
> +        local_lport_init_cache(lport);
> +        return true;
> +    }
> +
> +    return false;
> +}
> +
> +
> +void
> +local_lport_clear_cache(struct local_lport *lport)
> +{
> +    for (size_t i = 0; i < lport->n_addresses; i++) {
> +        free(lport->addresses[i]);
> +    }
> +    free(lport->addresses);
> +    lport->n_addresses = 0;
> +    for (size_t i = 0; i < lport->n_port_security; i++) {
> +        free(lport->port_security[i]);
> +    }
> +    free(lport->port_security);
> +    lport->n_port_security = 0;
> +
> +    smap_destroy(&lport->options);
> +
> +    local_lport_destroy_lflow_gen_data(lport);
> +}
> +
> +bool
> +local_lport_is_cache_old(struct local_lport *lport)
> +{
> +    const struct sbrec_port_binding *pb = lport->pb;
> +
> +    if (lport->n_addresses != pb->n_mac) {
> +        return true;
> +    }
> +
> +    if (lport->n_port_security != pb->n_port_security) {
> +        return true;
> +    }
> +
> +    if (!smap_equal(&lport->options, &pb->options)) {
> +        return true;
> +    }
> +
> +    for (size_t i = 0; i < lport->n_addresses; i++) {
> +        if (strcmp(lport->addresses[i], pb->mac[i])) {
> +            return true;
> +        }
> +    }
> +
> +    for (size_t i = 0; i < lport->n_port_security; i++) {
> +        if (strcmp(lport->port_security[i], pb->port_security[i])) {
> +            return true;
> +        }
> +    }
> +
> +    bool claimed_ = !!pb->chassis;
> +
> +    return (lport->claimed != claimed_);
> +}
> +
> +static void
> +local_lport_init_lflow_gen_data(struct local_lport *lport)
> +{
> +    struct ds json_key = DS_EMPTY_INITIALIZER;
> +    json_string_escape(lport->pb->logical_port, &json_key);
> +    lport->json_key = ds_steal_cstr(&json_key);
> +
> +    if (lport->ldp->is_switch) {
> +        local_lport_update_lsp_data(lport);
> +    } else {
> +        local_lport_update_lrp_data(lport);
> +    }
> +}
> +
> +static void
> +local_lport_destroy_lflow_gen_data(struct local_lport *lport)
> +{
> +    free(lport->json_key);
> +    lport->json_key = NULL;
> +    if (lport->ldp->is_switch) {
> +        local_lport_destroy_lsp_data(lport);
> +    } else {
> +        local_lport_destroy_lrp_data(lport);
> +    }
> +}
> +
> +void
> +local_lport_switch_lflow_map(struct local_lport *lport)
> +{
> +    struct hmap *temp = lport->active_lflows;
> +    lport->active_lflows = lport->cleared_lflows;
> +    lport->cleared_lflows = temp;
> +
> +    /* Make sure that the new active lflow map is empty. */
> +    ovs_assert(hmap_is_empty(lport->active_lflows));
> +}
> +
> +struct local_lport *
> +local_datapath_unlink_lport(struct local_datapath *ld,
> +                            const char *lport_name)
> +{
> +    return shash_find_and_delete(&ld->lports, lport_name);
> +}
> +
> +void
> +local_lport_destroy(struct local_lport *dp_lport)
> +{
> +    ovn_ctrl_lflows_destroy(&dp_lport->ctrl_lflows[0]);
> +    ovn_ctrl_lflows_destroy(&dp_lport->ctrl_lflows[1]);
> +    local_lport_clear_cache(dp_lport);
> +    free(dp_lport);
> +}
> +
> +struct tracked_datapath *
> +tracked_datapath_add(const struct sbrec_datapath_binding *dp,
> +                     enum en_tracked_resource_type tracked_type,
> +                     struct hmap *tracked_datapaths)
> +{
> +    struct tracked_datapath *t_dp =
> +        tracked_datapath_find(tracked_datapaths, dp);
> +    if (!t_dp) {
> +        t_dp = tracked_datapath_create(dp, tracked_type, tracked_datapaths);
> +    } else {
> +        t_dp->tracked_type = tracked_type;
> +    }
> +
> +    return t_dp;
> +}
> +
> +struct tracked_datapath *
> +tracked_datapath_find(struct hmap *tracked_datapaths,
> +                      const struct sbrec_datapath_binding *dp)
> +{
> +    struct tracked_datapath *t_dp;
> +    size_t hash = uuid_hash(&dp->header_.uuid);
> +    HMAP_FOR_EACH_WITH_HASH (t_dp, node, hash, tracked_datapaths) {
> +        if (uuid_equals(&t_dp->dp->header_.uuid, &dp->header_.uuid)) {
> +            return t_dp;
> +        }
> +    }
> +
> +    return NULL;
> +}
> +
> +void
> +tracked_datapath_lport_add(const struct sbrec_port_binding *pb,
> +                           enum en_tracked_resource_type tracked_type,
> +                           struct hmap *tracked_datapaths)
> +{
> +    struct tracked_datapath *tracked_dp =
> +        tracked_datapath_find(tracked_datapaths, pb->datapath);
> +    if (!tracked_dp) {
> +        tracked_dp = tracked_datapath_create(pb->datapath,
> +                                             TRACKED_RESOURCE_UPDATED,
> +                                             tracked_datapaths);
> +    }
> +
> +    /* Check if the lport is already present or not.
> +     * If it is already present, then just update the 'pb' field. */
> +    struct tracked_lport *lport =
> +        shash_find_data(&tracked_dp->lports, pb->logical_port);
> +
> +    if (!lport) {
> +        lport = xmalloc(sizeof *lport);
> +        shash_add(&tracked_dp->lports, pb->logical_port, lport);
> +    }
> +
> +    lport->pb = pb;
> +    lport->tracked_type = tracked_type;
> +}
> +
> +void
> +tracked_datapaths_destroy(struct hmap *tracked_datapaths)
> +{
> +    struct tracked_datapath *t_dp;
> +    HMAP_FOR_EACH_POP (t_dp, node, tracked_datapaths) {
> +        shash_destroy_free_data(&t_dp->lports);
> +        free(t_dp);
> +    }
> +
> +    hmap_destroy(tracked_datapaths);
> +}
> +
> +struct local_load_balancer *
> +local_load_balancer_add(struct hmap *local_lbs,
> +                        struct hmap *local_datapaths,
> +                        const struct sbrec_load_balancer *sb_lb)
> +{
> +    bool is_local_lb = false;
> +    for (size_t i = 0; i < sb_lb->n_datapaths; i++) {
> +        if (get_local_datapath(local_datapaths,
> +                               sb_lb->datapaths[i]->tunnel_key)) {
> +            is_local_lb = true;
> +            break;
> +        }
> +    }
> +
> +    if (!is_local_lb) {
> +        return NULL;
> +    }
> +
> +    return local_load_balancer_add__(local_lbs, sb_lb);
> +}
> +
> +struct local_load_balancer *
> +local_load_balancer_get(struct hmap *lbs, const struct uuid *uuid)
> +{
> +    struct local_load_balancer *local_lb;
> +    size_t hash = uuid_hash(uuid);
> +    HMAP_FOR_EACH_WITH_HASH (local_lb, hmap_node, hash, lbs) {
> +        if (uuid_equals(&local_lb->ovn_lb->slb->header_.uuid, uuid)) {
> +            return local_lb;
> +        }
> +    }
> +    return NULL;
> +}
> +
> +void
> +local_load_balancer_remove(struct hmap *local_lbs,
> +                           struct local_load_balancer *local_lb)
> +{
> +    hmap_remove(local_lbs, &local_lb->hmap_node);
> +    local_load_balancer_destroy(local_lb);
> +}
> +
> +void
> +local_load_balancers_destroy(struct hmap *local_lbs)
> +{
> +    struct local_load_balancer *lb;
> +    HMAP_FOR_EACH_POP (lb, hmap_node, local_lbs) {
> +        local_load_balancer_destroy(lb);
> +    }
> +
> +    hmap_destroy(local_lbs);
> +}
> +
> +void
> +local_load_balancer_update(struct local_load_balancer *local_lb)
> +{
> +    const struct sbrec_load_balancer *sb_lb = local_lb->ovn_lb->slb;
> +    ovn_controller_lb_destroy(local_lb->ovn_lb);
> +    local_lb->ovn_lb = ovn_controller_lb_create(sb_lb);
> +}
> +
> +void
> +local_load_balancer_switch_lflow_map(struct local_load_balancer *local_lb)
> +{
> +    struct hmap *temp = local_lb->active_lswitch_lflows;
> +    local_lb->active_lswitch_lflows = local_lb->cleared_lswitch_lflows;
> +    local_lb->cleared_lswitch_lflows = temp;
> +
> +    /* Make sure that the new active lswitch lflow map is empty. */
> +    ovs_assert(hmap_is_empty(local_lb->active_lswitch_lflows));
> +
> +    temp = local_lb->active_lrouter_lflows;
> +    local_lb->active_lrouter_lflows = local_lb->cleared_lrouter_lflows;
> +    local_lb->cleared_lrouter_lflows = temp;
> +
> +    /* Make sure that the new active lrouter lflow map is empty. */
> +    ovs_assert(hmap_is_empty(local_lb->active_lrouter_lflows));
> +}
> +
> +void
> +tracked_lb_add(struct local_load_balancer *local_lb,
> +               enum en_tracked_resource_type tracked_type,
> +               struct hmap *tracked_lbs)
> +{
> +    struct tracked_lb *t_lb = NULL;
> +
> +    struct tracked_lb *tmp = NULL;
> +    uint32_t hash = uuid_hash(&local_lb->ovn_lb->slb->header_.uuid);
> +    HMAP_FOR_EACH_WITH_HASH (tmp, node, hash, tracked_lbs) {
> +        if (uuid_equals(&tmp->local_lb->ovn_lb->slb->header_.uuid,
> +                        &local_lb->ovn_lb->slb->header_.uuid)) {
> +            t_lb = tmp;
> +            break;
> +        }
> +    }
> +
> +    if (!t_lb) {
> +        t_lb = xzalloc(sizeof *t_lb);
> +        hmap_insert(tracked_lbs, &t_lb->node,
> +                    uuid_hash(&local_lb->ovn_lb->slb->header_.uuid));
> +    }
> +
> +    t_lb->local_lb = local_lb;
> +    t_lb->tracked_type = tracked_type;
> +}
> +
> +void
> +tracked_lbs_destroy(struct hmap *tracked_lbs)
> +{
> +    struct tracked_lb *t_lb;
> +    HMAP_FOR_EACH_POP (t_lb, node, tracked_lbs) {
> +        free(t_lb);
> +    }
> +
> +    hmap_destroy(tracked_lbs);
> +}
> +
> +/* static functions. */
> +static void
> +local_lport_init_cache(struct local_lport *lport)
> +{
> +    const struct sbrec_port_binding *pb = lport->pb;
> +    smap_clone(&lport->options, &pb->options);
> +
> +    lport->addresses =
> +        pb->n_mac ? xmalloc(pb->n_mac * sizeof *lport->addresses) :
> +        NULL;
> +
> +    lport->n_addresses = pb->n_mac;
> +    for (size_t i = 0; i < pb->n_mac; i++) {
> +        lport->addresses[i] = xstrdup(pb->mac[i]);
> +    }
> +
> +    lport->port_security =
> +        pb->n_port_security ?
> +        xmalloc(pb->n_port_security * sizeof *lport->port_security) :
> +        NULL;
> +
> +    lport->n_port_security = pb->n_port_security;
> +    for (size_t i = 0; i < pb->n_port_security; i++) {
> +        lport->port_security[i] = xstrdup(pb->port_security[i]);
> +    }
> +
> +    lport->claimed = !!pb->chassis;
> +
> +    local_lport_init_lflow_gen_data(lport);
> +}
> +
> +static struct local_datapath *
> +local_datapath_add__(struct hmap *local_datapaths,
> +                     const struct sbrec_datapath_binding *dp,
> +                     struct ovsdb_idl_index *sbrec_datapath_binding_by_key,
> +                     struct ovsdb_idl_index *sbrec_port_binding_by_datapath,
> +                     struct ovsdb_idl_index *sbrec_port_binding_by_name,
> +                     int depth,
> +                     void (*datapath_added_cb)(
> +                           struct local_datapath *ld,
> +                           void *aux),
> +                     void *aux)
> +{
> +    uint32_t dp_key = dp->tunnel_key;
> +    struct local_datapath *ld = get_local_datapath(local_datapaths, dp_key);
> +    if (ld) {
> +        return ld;
> +    }
> +
> +    ld = local_datapath_alloc(dp);
> +    hmap_insert(local_datapaths, &ld->hmap_node, dp_key);
> +    ld->datapath = dp;
> +
> +    if (datapath_added_cb) {
> +        datapath_added_cb(ld, aux);
> +    }
> +
> +    if (depth >= 100) {
> +        static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
> +        VLOG_WARN_RL(&rl, "datapaths nested too deep");
> +        return ld;
> +    }
> +
> +    struct sbrec_port_binding *target =
> +        sbrec_port_binding_index_init_row(sbrec_port_binding_by_datapath);
> +    sbrec_port_binding_index_set_datapath(target, dp);
> +
> +    const struct sbrec_port_binding *pb;
> +    SBREC_PORT_BINDING_FOR_EACH_EQUAL (pb, target,
> +                                       sbrec_port_binding_by_datapath) {
> +        struct local_lport *lport =
> +            local_datapath_add_lport(ld, pb->logical_port, pb);
> +
> +        if (!strcmp(pb->type, "patch") || !strcmp(pb->type, "l3gateway")) {
> +            const char *peer_name = smap_get(&pb->options, "peer");
> +            if (peer_name) {
> +                const struct sbrec_port_binding *peer;
> +
> +                peer = lport_lookup_by_name(sbrec_port_binding_by_name,
> +                                            peer_name);
> +
> +                if (peer && peer->datapath) {
> +                    if (!strcmp(pb->type, "patch")) {
> +                        /* Add the datapath to local datapath only for patch
> +                         * ports. For l3gateway ports, since gateway router
> +                         * resides on one chassis, we don't need to add.
> +                         * Otherwise, all other chassis might create patch
> +                         * ports between br-int and the provider bridge. */
> +                        local_datapath_add__(local_datapaths, peer->datapath,
> +                                             sbrec_datapath_binding_by_key,
> +                                             sbrec_port_binding_by_datapath,
> +                                             sbrec_port_binding_by_name,
> +                                             depth + 1, datapath_added_cb,
> +                                             aux);
> +                    }
> +                    struct local_datapath *peer_ld =
> +                        get_local_datapath(local_datapaths, peer->datapath->tunnel_key);
> +                    if (peer_ld) {
> +                        struct local_lport *peer_lport =
> +                            local_datapath_get_lport(peer_ld, peer->logical_port);
> +
> +                        if (peer_lport) {
> +                            ld->n_peer_ports++;
> +                            if (ld->n_peer_ports > ld->n_allocated_peer_ports) {
> +                                ld->peer_ports =
> +                                    x2nrealloc(ld->peer_ports,
> +                                            &ld->n_allocated_peer_ports,
> +                                            sizeof *ld->peer_ports);
> +                            }
> +
> +                            ld->peer_ports[ld->n_peer_ports - 1].local = lport;
> +                            ld->peer_ports[ld->n_peer_ports - 1].remote = peer_lport;
> +
> +                            lport->peer = peer_lport;
> +                            peer_lport->peer = lport;
> +                        }
> +                    }
> +                }
> +            }
> +        }
> +    }
> +    sbrec_port_binding_index_destroy_row(target);
> +    return ld;
> +}
> +
> +static struct tracked_datapath *
> +tracked_datapath_create(const struct sbrec_datapath_binding *dp,
> +                        enum en_tracked_resource_type tracked_type,
> +                        struct hmap *tracked_datapaths)
> +{
> +    struct tracked_datapath *t_dp = xzalloc(sizeof *t_dp);
> +    t_dp->dp = dp;
> +    t_dp->tracked_type = tracked_type;
> +    shash_init(&t_dp->lports);
> +    hmap_insert(tracked_datapaths, &t_dp->node, uuid_hash(&dp->header_.uuid));
> +    return t_dp;
> +}
> +
> +static void
> +local_lport_destroy_lsp_data(struct local_lport *lport)
> +{
> +    for (size_t i = 0; i < lport->lsp.n_addrs; i++) {
> +        destroy_lport_addresses(&lport->lsp.addrs[i]);
> +    }
> +
> +    for (size_t i = 0; i < lport->lsp.n_ps_addrs; i++) {
> +        destroy_lport_addresses(&lport->lsp.ps_addrs[i]);
> +    }
> +
> +    free(lport->lsp.addrs);
> +    free(lport->lsp.ps_addrs);
> +    lport->lsp.addrs = NULL;
> +    lport->lsp.ps_addrs = NULL;
> +    lport->lsp.n_addrs = 0;
> +    lport->lsp.n_ps_addrs = 0;
> +}
> +
> +static void
> +local_lport_destroy_lrp_data(struct local_lport *lport)
> +{
> +    destroy_lport_addresses(&lport->lrp.networks);
> +    if (lport->lrp.is_l3dgw_port) {
> +        free(lport->lrp.chassis_redirect_json_key);
> +    }
> +}
> +
> +static void
> +local_lport_update_lsp_data(struct local_lport *lport)
> +{
> +    lport->lsp.addrs = xmalloc(sizeof *lport->lsp.addrs * lport->pb->n_mac);
> +    lport->lsp.ps_addrs =
> +        xmalloc(sizeof *lport->lsp.ps_addrs * lport->pb->n_mac);
> +    for (size_t i = 0; i < lport->pb->n_mac; i++) {
> +        if (!strcmp(lport->pb->mac[i], "unknown")) {
> +            lport->lsp.has_unknown = true;
> +            continue;
> +        }
> +        if (!strcmp(lport->pb->mac[i], "router")) {
> +            continue;
> +        }
> +
> +        if (!extract_lsp_addresses(lport->pb->mac[i],
> +                                   &lport->lsp.addrs[lport->lsp.n_addrs])) {
> +            continue;
> +        }
> +
> +        lport->lsp.n_addrs++;
> +    }
> +
> +    for (size_t i = 0; i < lport->pb->n_port_security; i++) {
> +        if (!extract_lsp_addresses(
> +            lport->pb->port_security[i],
> +            &lport->lsp.ps_addrs[lport->lsp.n_ps_addrs])) {
> +            continue;
> +        }
> +        lport->lsp.n_ps_addrs++;
> +    }
> +
> +    lport->lsp.check_lport_is_up =
> +        !smap_get_bool(&lport->pb->datapath->options,
> +        "ignore_lport_down", false);
> +}
> +
> +static void
> +local_lport_update_lrp_data(struct local_lport *lport)
> +{
> +    if (!extract_lsp_addresses(lport->pb->mac[0], &lport->lrp.networks)) {
> +        return;
> +    }
> +
> +    /* Always add the IPv6 link local address. */
> +    struct in6_addr lla;
> +    in6_generate_lla(lport->lrp.networks.ea, &lla);
> +    lport_addr_add_ip6ddr(&lport->lrp.networks, lla, 64);
> +
> +    lport->lrp.is_l3dgw_port = smap_get_bool(&lport->pb->options,
> +                                             "is-l3dgw-port", false);
> +    if (lport->lrp.is_l3dgw_port) {
> +        struct ds json_key = DS_EMPTY_INITIALIZER;
> +        char *chassis_redirect_name =
> +            ovn_chassis_redirect_name(lport->pb->logical_port);
> +        json_string_escape(chassis_redirect_name, &json_key);
> +        lport->lrp.chassis_redirect_json_key = ds_steal_cstr(&json_key);
> +        free(chassis_redirect_name);
> +    }
> +
> +    lport->lrp.dp_has_l3dgw_port = smap_get_bool(&lport->pb->datapath->options,
> +                                          "has-l3dgw-port", false);
> +
> +    lport->lrp.peer_dp_has_localnet_ports =
> +        smap_get_bool(&lport->pb->options,
> +                      "peer-dp-has-localnet-ports", false);
> +}
> +
> +static struct local_load_balancer *
> +local_load_balancer_add__(struct hmap *local_lbs,
> +                          const struct sbrec_load_balancer *sb_lb)
> +{
> +    struct local_load_balancer *local_lb =
> +        local_load_balancer_get(local_lbs, &sb_lb->header_.uuid);
> +    if (!local_lb) {
> +        struct ovn_controller_lb *ovn_lb = ovn_controller_lb_create(sb_lb);
> +        local_lb = xzalloc(sizeof *local_lb);
> +        local_lb->ovn_lb = ovn_lb;
> +        hmap_init(&local_lb->lswitch_lflows[0]);
> +        hmap_init(&local_lb->lswitch_lflows[1]);
> +        hmap_init(&local_lb->lrouter_lflows[0]);
> +        hmap_init(&local_lb->lrouter_lflows[1]);
> +
> +        local_lb->active_lswitch_lflows = &local_lb->lswitch_lflows[0];
> +        local_lb->cleared_lswitch_lflows = &local_lb->lswitch_lflows[1];
> +
> +        local_lb->active_lrouter_lflows = &local_lb->lrouter_lflows[0];
> +        local_lb->cleared_lrouter_lflows = &local_lb->lrouter_lflows[1];
> +
> +        hmap_insert(local_lbs, &local_lb->hmap_node,
> +                    uuid_hash(&sb_lb->header_.uuid));
> +    }
> +
> +    return local_lb;
> +}
> +
> +static void
> +local_load_balancer_destroy(struct local_load_balancer *local_lb)
> +{
> +    ovn_controller_lb_destroy(local_lb->ovn_lb);
> +    ovn_ctrl_lflows_destroy(&local_lb->lswitch_lflows[0]);
> +    ovn_ctrl_lflows_destroy(&local_lb->lswitch_lflows[1]);
> +    ovn_ctrl_lflows_destroy(&local_lb->lrouter_lflows[0]);
> +    ovn_ctrl_lflows_destroy(&local_lb->lrouter_lflows[1]);
> +    free(local_lb);
> +}
> diff --git a/lib/ldata.h b/lib/ldata.h
> new file mode 100644
> index 0000000000..d51b2ddaef
> --- /dev/null
> +++ b/lib/ldata.h
> @@ -0,0 +1,251 @@
> +/* Copyright (c) 2021, Red Hat, Inc.
> + *
> + * Licensed under the Apache License, Version 2.0 (the "License");
> + * you may not use this file except in compliance with the License.
> + * You may obtain a copy of the License at:
> + *
> + *     http://www.apache.org/licenses/LICENSE-2.0
> + *
> + * Unless required by applicable law or agreed to in writing, software
> + * distributed under the License is distributed on an "AS IS" BASIS,
> + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> + * See the License for the specific language governing permissions and
> + * limitations under the License.
> + */
> +
> +#ifndef LDATA_H
> +#define LDATA_H 1
> +
> +/* OVS includes. */
> +#include "include/openvswitch/shash.h"
> +#include "lib/smap.h"
> +
> +/* OVN includes. */
> +#include "lib/ovn-util.h"
> +
> +struct sbrec_datapath_binding;
> +struct sbrec_port_binding;
> +struct ovsdb_idl_index;
> +struct sbrec_load_balancer;
> +
> +struct local_lport {
> +    const struct sbrec_port_binding *pb;
> +    enum en_lport_type type;
> +
> +    /* cached data. */
> +    char **addresses;
> +    size_t n_addresses;
> +    char **port_security;
> +    size_t n_port_security;
> +    struct smap options;
> +    bool claimed;
> +
> +    union {
> +        struct {
> +            /* Logical switch port data. */
> +            struct lport_addresses *addrs;  /* Logical switch port
> +                                             * addresses. */
> +            unsigned int n_addrs;
> +
> +            struct lport_addresses *ps_addrs;  /* Port security addresses. */
> +            unsigned int n_ps_addrs;
> +
> +            bool has_unknown;
> +            bool check_lport_is_up;
> +        } lsp;
> +
> +        struct {
> +            struct lport_addresses networks;
> +            bool has_bfd;
> +            bool is_l3dgw_port;
> +            char *chassis_redirect_json_key; /* Initialized only if
> +                                              * 'is_l3dgw_port'. */
> +            bool dp_has_l3dgw_port; /* True if the router datapath has a
> +                                     * gw port. */
> +            bool peer_dp_has_localnet_ports; /* True if the peer datapath has
> +                                      * localnet ports. */
> +        } lrp;
> +    };
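> +    /* JSON-escaped (double-quoted) form of the logical port name, for use
> +     * in generated match and action strings. */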
> +    char *json_key;
> +
> +    /* The port's peer:
> +     *
> +     *     - A switch port S of type "router" has a router port R as a peer,
> +     *       and R in turn has S as its peer.
> +     *
> +     *     - Two connected logical router ports have each other as peer.
> +     *
> +     *     - Other kinds of ports have no peer. */
> +    struct local_lport *peer;
> +
> +    /* Logical port multicast data. */
> +    /*struct mcast_port_info mcast_info; */
> +
> +    struct local_datapath *ldp;
> +
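> +    /* Logical flows generated for this port, double buffered:
> +     * 'active_lflows' and 'cleared_lflows' each point at one of the two
> +     * maps and are swapped by local_lport_switch_lflow_map(). */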
> +    struct hmap ctrl_lflows[2];
> +    struct hmap *active_lflows;
> +    struct hmap *cleared_lflows;
> +
> +};
> +
> +/* A logical datapath that has some relevance to this hypervisor.  A logical
> + * datapath D is relevant to hypervisor H if:
> + *
> + *     - Some VIF or l2gateway or l3gateway port in D is located on H.
> + *
> + *     - D is reachable over a series of hops across patch ports, starting from
> + *       a datapath relevant to H.
> + *
> + * The 'hmap_node''s hash value is 'datapath->tunnel_key'. */
> +struct local_datapath {
> +    struct hmap_node hmap_node;
> +    const struct sbrec_datapath_binding *datapath;
> +    bool is_switch;
> +
> +    /* The localnet port in this datapath, if any (at most one is allowed). */
> +    const struct sbrec_port_binding *localnet_port;
> +
> +    /* True if this datapath contains an l3gateway port located on this
> +     * hypervisor. */
> +    bool has_local_l3gateway;
> +
> +    struct {
> +        struct local_lport *local;
> +        struct local_lport *remote;
> +    } *peer_ports;
> +
> +    size_t n_peer_ports;
> +    size_t n_allocated_peer_ports;
> +
> +    /* Multicast data. */
> +    /*struct mcast_info mcast_info; */
> +
> +    /* Data related to lflow generation. */
> +    struct smap dp_options;
> +    struct hmap ctrl_lflows[2];
> +    struct hmap *active_lflows;
> +    struct hmap *cleared_lflows;
> +
> +    /* shash of 'struct local_lport'. */
> +    struct shash lports;
> +};
> +
> +struct local_datapath *local_datapath_alloc(
> +    const struct sbrec_datapath_binding *);
> +struct local_datapath *get_local_datapath(const struct hmap *,
> +                                          uint32_t tunnel_key);
> +void local_datapath_add(struct hmap *local_datapaths,
> +                        const struct sbrec_datapath_binding *,
> +                        struct ovsdb_idl_index *sbrec_datapath_binding_by_key,
> +                        struct ovsdb_idl_index *sbrec_port_binding_by_datapath,
> +                        struct ovsdb_idl_index *sbrec_port_binding_by_name,
> +                        void (*datapath_added)(struct local_datapath *,
> +                                               void *aux),
> +                        void *aux);
> +
> +void local_datapaths_destroy(struct hmap *local_datapaths);
> +void local_datapath_destroy(struct local_datapath *ld);
> +void local_datapath_switch_lflow_map(struct local_datapath *);
> +
> +struct local_lport *local_datapath_get_lport(struct local_datapath *ld,
> +                                             const char *lport_name);
> +
> +struct local_lport *local_datapath_add_lport(
> +    struct local_datapath *ld, const char *lport_name,
> +    const struct sbrec_port_binding *);
> +
> +void local_datapath_remove_lport(struct local_datapath *ld,
> +                                 const char *lport_name);
> +
> +void local_datapath_add_or_update_peer_port(
> +    const struct sbrec_port_binding *pb,
> +    struct ovsdb_idl_index *sbrec_datapath_binding_by_key,
> +    struct ovsdb_idl_index *sbrec_port_binding_by_datapath,
> +    struct ovsdb_idl_index *sbrec_port_binding_by_name,
> +    struct local_datapath *ld,
> +    struct hmap *local_datapaths,
> +    void (*datapath_added_cb)(
> +                         struct local_datapath *ld,
> +                         void *aux),
> +    void *aux);
> +
> +void local_datapath_remove_peer_port(const struct sbrec_port_binding *pb,
> +                                     struct local_datapath *ld,
> +                                     struct hmap *local_datapaths);
> +struct local_lport *local_datapath_unlink_lport(struct local_datapath *ld,
> +                                                const char *lport_name);
> +
> +void local_lport_destroy(struct local_lport *);
> +
> +bool local_lport_update_cache(struct local_lport *);
> +void local_lport_clear_cache(struct local_lport *);
> +bool local_lport_is_cache_old(struct local_lport *);
> +void local_lport_switch_lflow_map(struct local_lport *);
> +
> +/* Type of change seen for a tracked resource. */
> +enum en_tracked_resource_type {
> +    TRACKED_RESOURCE_NEW,
> +    TRACKED_RESOURCE_REMOVED,
> +    TRACKED_RESOURCE_UPDATED
> +};
> +
> +/* Represents a tracked logical port. */
> +struct tracked_lport {
> +    const struct sbrec_port_binding *pb;
> +    enum en_tracked_resource_type tracked_type;
> +};
> +
> +/* Represents a tracked datapath. */
> +struct tracked_datapath {
> +    struct hmap_node node;
> +    const struct sbrec_datapath_binding *dp;
> +    enum en_tracked_resource_type tracked_type;
> +    struct shash lports; /* shash of struct tracked_lport. */
> +};
> +
> +struct tracked_datapath *tracked_datapath_add(
> +    const struct sbrec_datapath_binding *, enum en_tracked_resource_type,
> +    struct hmap *tracked_datapaths);
> +struct tracked_datapath *tracked_datapath_find(
> +    struct hmap *tracked_datapaths, const struct sbrec_datapath_binding *);
> +void tracked_datapath_lport_add(const struct sbrec_port_binding *,
> +                                enum en_tracked_resource_type,
> +                                struct hmap *tracked_datapaths);
> +void tracked_datapaths_destroy(struct hmap *tracked_datapaths);
> +
> +/* Load balancer. */
> +struct local_load_balancer {
> +    struct hmap_node hmap_node;
> +
> +    struct ovn_controller_lb *ovn_lb;
> +    struct hmap lswitch_lflows[2];
> +    struct hmap lrouter_lflows[2];
> +    struct hmap *active_lswitch_lflows;
> +    struct hmap *cleared_lswitch_lflows;
> +    struct hmap *active_lrouter_lflows;
> +    struct hmap *cleared_lrouter_lflows;
> +};
> +
> +struct local_load_balancer *local_load_balancer_add(
> +    struct hmap *local_lbs, struct hmap *local_datapaths,
> +    const struct sbrec_load_balancer *);
> +void local_load_balancer_remove(struct hmap *local_lbs,
> +                                struct local_load_balancer *);
> +void local_load_balancers_destroy(struct hmap *local_lbs);
> +struct local_load_balancer *local_load_balancer_get(struct hmap *local_lbs,
> +                                                    const struct uuid *);
> +void local_load_balancer_update(struct local_load_balancer *);
> +void local_load_balancer_switch_lflow_map(struct local_load_balancer *);
> +
> +struct tracked_lb {
> +    struct hmap_node node;
> +    struct local_load_balancer *local_lb;
> +    enum en_tracked_resource_type tracked_type;
> +};
> +
> +void tracked_lb_add(struct local_load_balancer *,
> +                    enum en_tracked_resource_type,
> +                    struct hmap *tracked_lbs);
> +void tracked_lbs_destroy(struct hmap *tracked_lbs);
> +
> +#endif /* lib/ldata.h */
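
To check my understanding of how these pieces fit together, I think the
per-datapath consumer in ovn-controller ends up doing roughly the
following (a sketch only, using the names declared above):

    ld = get_local_datapath(local_datapaths, dp->tunnel_key);
    local_datapath_switch_lflow_map(ld);   /* start with an empty map */
    ovn_ctrl_lflows_build_dp_lflows(ld->active_lflows, ld);
    /* ... translate ld->active_lflows to OpenFlow ... */
    ovn_ctrl_lflows_clear(ld->cleared_lflows);

Please correct me if I've misread the intended lifecycle.
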
> diff --git a/lib/lflow.c b/lib/lflow.c
> new file mode 100644
> index 0000000000..b2af57adc7
> --- /dev/null
> +++ b/lib/lflow.c
> @@ -0,0 +1,3514 @@
> +/*
> + * Copyright (c) 2021 Red Hat.
> + *
> + * Licensed under the Apache License, Version 2.0 (the "License");
> + * you may not use this file except in compliance with the License.
> + * You may obtain a copy of the License at:
> + *
> + *     http://www.apache.org/licenses/LICENSE-2.0
> + *
> + * Unless required by applicable law or agreed to in writing, software
> + * distributed under the License is distributed on an "AS IS" BASIS,
> + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> + * See the License for the specific language governing permissions and
> + * limitations under the License.
> + */
> +
> +#include <config.h>
> +
> +#include "ovn/expr.h"
> +
> +#include "lflow.h"
> +#include "lib/lb.h"
> +#include "lib/ldata.h"
> +#include "lib/ovn-nb-idl.h"
> +#include "lib/ovn-sb-idl.h"
> +#include "lib/ovn-l7.h"
> +#include "lib/ovn-util.h"
> +
> +/* Open vSwitch lib includes. */
> +#include "openvswitch/vlog.h"
> +#include "openvswitch/hmap.h"
> +#include "include/openvswitch/json.h"
> +#include "lib/smap.h"
> +
> +VLOG_DEFINE_THIS_MODULE(lib_lflow);
> +
> +static char *ovn_ctrl_lflow_hint(const struct ovsdb_idl_row *row);
> +static void ovn_ctrl_lflow_init(struct ovn_ctrl_lflow *lflow, uint32_t dp_key,
> +                                enum ovn_stage stage, uint16_t priority,
> +                                char *match, char *actions,
> +                                const struct uuid *lflow_uuid,
> +                                uint32_t lflow_idx,
> +                                char *stage_hint, const char *where);
> +static void ovn_ctrl_lflow_add_at(struct hmap *lflow_map, uint32_t dp_key,
> +                                  enum ovn_stage stage,
> +                                  uint16_t priority, const char *match,
> +                                  const char *actions,
> +                                  const struct uuid *lflow_uuid,
> +                                  uint32_t lflow_idx,
> +                                  const struct ovsdb_idl_row *stage_hint,
> +                                  const char *where);
> +static void ovn_ctrl_lflow_destroy(struct ovn_ctrl_lflow *lflow);
> +
> +
> +#define ovn_ctrl_lflow_add(LFLOW_MAP, STAGE, PRIORITY, MATCH, ACTIONS) \
> +    ovn_ctrl_lflow_add_at(LFLOW_MAP, 0, STAGE, PRIORITY, MATCH, ACTIONS, \
> +                          NULL, 0, NULL, OVS_SOURCE_LOCATOR)
> +
> +#define ovn_ctrl_lflow_add_dp_key(LFLOW_MAP, DP_KEY, STAGE, PRIORITY, MATCH, \
> +                                  ACTIONS, UUID, LFLOW_IDX) \
> +    do { \
> +        ovn_ctrl_lflow_add_at(LFLOW_MAP, DP_KEY, STAGE, PRIORITY, MATCH, \
> +                              ACTIONS, UUID, *LFLOW_IDX, NULL, \
> +                              OVS_SOURCE_LOCATOR); \
> +        (*LFLOW_IDX)++; \
> +    } while (0)
> +
> +#define ovn_ctrl_lflow_add_uuid(LFLOW_MAP, STAGE, PRIORITY, MATCH, ACTIONS, \
> +                                UUID, LFLOW_IDX) \
> +    do { \
> +        ovn_ctrl_lflow_add_at(LFLOW_MAP, 0, STAGE, PRIORITY, MATCH, ACTIONS, \
> +                              UUID, *LFLOW_IDX, NULL, OVS_SOURCE_LOCATOR); \
> +        (*LFLOW_IDX)++; \
> +    } while (0)
> +
> +static void build_generic_port_security(struct hmap *lflows);
> +static void build_generic_pre_acl(struct hmap *lflows);
> +static void build_generic_pre_lb(struct hmap *lflows);
> +static void build_generic_pre_stateful(struct hmap *lflows);
> +static void build_generic_acls(struct hmap *lflows);
> +static void build_generic_qos(struct hmap *lflows);
> +static void build_generic_stateful(struct hmap *lflows);
> +static void build_generic_lb_hairpin(struct hmap *lflows);
> +static void build_generic_l2_lkup(struct hmap *lflows);
> +
> +static void build_lswitch_dp_lflows(struct hmap *lflows,
> +                                    struct local_datapath *,
> +                                    bool use_ct_inv_match);
> +static void build_lrouter_dp_lflows(struct hmap *lflows,
> +                                    const struct sbrec_datapath_binding *dp);
> +
> +static void build_lswitch_port_lflows(struct hmap *lflows,
> +                                      struct local_lport *);
> +static void build_lrouter_port_lflows(struct hmap *lflows,
> +                                      struct local_lport *);
> +
> +static void skip_lport_from_conntrack(struct hmap *lflows,
> +                                      struct local_lport *,
> +                                      uint32_t *lflow_uuid_idx,
> +                                      enum ovn_stage in_stage,
> +                                      enum ovn_stage out_stage,
> +                                      uint16_t priority, struct ds *match);
> +static void ovn_ctrl_build_lb_lswitch_lflows(struct hmap *lswitch_lflows,
> +                                             struct ovn_controller_lb *);
> +static void ovn_ctrl_build_lb_lrouter_lflows(struct hmap *lrouter_lflows,
> +                                             struct ovn_controller_lb *);
> +
> +void
> +ovn_ctrl_lflows_clear(struct hmap *lflows)
> +{
> +    struct ovn_ctrl_lflow *lflow;
> +    HMAP_FOR_EACH_POP (lflow, hmap_node, lflows) {
> +        ovn_ctrl_lflow_destroy(lflow);
> +    }
> +}
> +
> +void
> +ovn_ctrl_lflows_destroy(struct hmap *lflows)
> +{
> +    ovn_ctrl_lflows_clear(lflows);
> +    hmap_destroy(lflows);
> +}
> +
> +size_t
> +ovn_ctrl_lflow_hash(const struct ovn_ctrl_lflow *lflow)
> +{
> +    return ovn_logical_flow_hash(ovn_stage_get_table(lflow->stage),
> +                                 ovn_stage_get_pipeline_name(lflow->stage),
> +                                 lflow->priority, lflow->match,
> +                                 lflow->actions);
> +}
> +
> +void
> +build_lswitch_generic_lflows(struct hmap *lflows)
> +{
> +    /* Port security stages. */
> +    build_generic_port_security(lflows);
> +
> +    /* Lookup and learn FDB. */
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_LOOKUP_FDB, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_PUT_FDB, 0, "1", "next;");
> +
> +    build_generic_pre_acl(lflows);
> +    build_generic_pre_lb(lflows);
> +    build_generic_pre_stateful(lflows);
> +    build_generic_acls(lflows);
> +    build_generic_qos(lflows);
> +    build_generic_stateful(lflows);
> +    build_generic_lb_hairpin(lflows);
> +
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_ARP_ND_RSP, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_DHCP_OPTIONS, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_DHCP_RESPONSE, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_DNS_LOOKUP, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_DNS_RESPONSE, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_EXTERNAL_PORT, 0, "1", "next;");
> +
> +    build_generic_l2_lkup(lflows);
> +}
> +
> +void
> +ovn_ctrl_lflows_build_dp_lflows(struct hmap *lflows,
> +                                struct local_datapath *ldp)
> +{
> +    if (ldp->is_switch) {
> +        build_lswitch_dp_lflows(lflows, ldp, true);
> +    } else {
> +        build_lrouter_dp_lflows(lflows, ldp->datapath);
> +    }
> +}
> +
> +void
> +ovn_ctrl_build_lport_lflows(struct hmap *lflows, struct local_lport *op)
> +{
> +    if (op->ldp->is_switch) {
> +        build_lswitch_port_lflows(lflows, op);
> +    } else {
> +        build_lrouter_port_lflows(lflows, op);
> +    }
> +}
> +
> +void
> +ovn_ctrl_build_lb_lflows(struct hmap *lswitch_lflows,
> +                         struct hmap *lrouter_lflows,
> +                         struct ovn_controller_lb *ovn_lb)
> +{
> +    ovn_ctrl_build_lb_lswitch_lflows(lswitch_lflows, ovn_lb);
> +    ovn_ctrl_build_lb_lrouter_lflows(lrouter_lflows, ovn_lb);
> +}
> +
> +/* static functions. */
> +static char *
> +ovn_ctrl_lflow_hint(const struct ovsdb_idl_row *row)
> +{
> +    if (!row) {
> +        return NULL;
> +    }
> +    return xasprintf("%08x", row->uuid.parts[0]);
> +}
> +
> +static void
> +ovn_ctrl_lflow_init(struct ovn_ctrl_lflow *lflow, uint32_t dp_key,
> +                    enum ovn_stage stage, uint16_t priority,
> +                    char *match, char *actions,
> +                    const struct uuid *lflow_uuid, uint32_t lflow_idx,
> +                    char *stage_hint,
> +                    const char *where)
> +{
> +    lflow->stage = stage;
> +    lflow->priority = priority;
> +    lflow->match = match;
> +    lflow->actions = actions;
> +    lflow->stage_hint = stage_hint;
> +    lflow->where = where;
> +    lflow->dp_key = dp_key;
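> +    /* If a parent row UUID was supplied, derive a deterministic lflow UUID
> +     * from it by stamping the per-row flow index into the last 32-bit word;
> +     * otherwise generate a random one. */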
> +    if (lflow_uuid) {
> +        lflow->uuid_ = *lflow_uuid;
> +        lflow->uuid_.parts[3] = lflow_idx;
> +    } else {
> +        uuid_generate(&lflow->uuid_);
> +    }
> +}
> +
> +/* Adds a logical flow with the specified contents to 'lflow_map'. */
> +static void
> +ovn_ctrl_lflow_add_at(struct hmap *lflow_map, uint32_t dp_key,
> +                      enum ovn_stage stage,
> +                      uint16_t priority, const char *match,
> +                      const char *actions,
> +                      const struct uuid *lflow_uuid,
> +                      uint32_t lflow_idx,
> +                      const struct ovsdb_idl_row *stage_hint,
> +                      const char *where)
> +{
> +    struct ovn_ctrl_lflow *lflow;
> +    size_t hash;
> +
> +    lflow = xzalloc(sizeof *lflow);
> +    ovn_ctrl_lflow_init(lflow, dp_key, stage, priority,
> +                        xstrdup(match), xstrdup(actions),
> +                        lflow_uuid, lflow_idx,
> +                        ovn_ctrl_lflow_hint(stage_hint), where);
> +
> +    hash = ovn_ctrl_lflow_hash(lflow);
> +    hmap_insert(lflow_map, &lflow->hmap_node, hash);
> +}
> +
> +static void
> +ovn_ctrl_lflow_destroy(struct ovn_ctrl_lflow *lflow)
> +{
> +    if (lflow) {
> +        free(lflow->match);
> +        free(lflow->actions);
> +        free(lflow->stage_hint);
> +        free(lflow);
> +    }
> +}
> +
> +static void
> +build_generic_port_security(struct hmap *lflows)
> +{
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_PORT_SEC_L2, 100, "eth.src[40]",
> +                       "drop;");
> +
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_PORT_SEC_ND, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_PORT_SEC_IP, 0, "1", "next;");
> +
> +    /* Egress tables 8: Egress port security - IP (priority 0)
> +     * Egress table 9: Egress port security L2 - multicast/broadcast
> +     *                 (priority 100). */
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_OUT_PORT_SEC_IP, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_OUT_PORT_SEC_L2, 100, "eth.mcast",
> +                          "output;");
> +}
> +
> +static void
> +build_generic_pre_acl(struct hmap *lflows)
> +{
> +    /* Ingress and Egress Pre-ACL Table (Priority 0): Packets are
> +     * allowed by default. */
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_PRE_ACL, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_OUT_PRE_ACL, 0, "1", "next;");
> +
> +#if 0
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_PRE_ACL, 110,
> +                          "eth.dst == $svc_monitor_mac", "next;");
> +
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_OUT_PRE_ACL, 110,
> +                          "eth.src == $svc_monitor_mac", "next;");
> +#endif
> +}
> +
> +static void
> +build_generic_pre_lb(struct hmap *lflows)
> +{
> +    /* Do not send ND packets to conntrack */
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_PRE_LB, 110,
> +                  "nd || nd_rs || nd_ra || mldv1 || mldv2",
> +                  "next;");
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_OUT_PRE_LB, 110,
> +                  "nd || nd_rs || nd_ra || mldv1 || mldv2",
> +                  "next;");
> +
> +    /* Do not send service monitor packets to conntrack. */
> +#if 0
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_PRE_LB, 110,
> +                       "eth.dst == $svc_monitor_mac", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_OUT_PRE_LB, 110,
> +                      "eth.src == $svc_monitor_mac", "next;");
> +#endif
> +
> +    /* Allow all packets to go to next tables by default. */
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_PRE_LB, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_OUT_PRE_LB, 0, "1", "next;");
> +}
> +
> +static void
> +build_generic_pre_stateful(struct hmap *lflows)
> +{
> +    /* Ingress and Egress pre-stateful Table (Priority 0): Packets are
> +     * allowed by default. */
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_PRE_STATEFUL, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_OUT_PRE_STATEFUL, 0, "1", "next;");
> +
> +    const char *lb_protocols[] = {"tcp", "udp", "sctp"};
> +    struct ds actions = DS_EMPTY_INITIALIZER;
> +    struct ds match = DS_EMPTY_INITIALIZER;
> +
> +    for (size_t i = 0; i < ARRAY_SIZE(lb_protocols); i++) {
> +        ds_clear(&match);
> +        ds_clear(&actions);
> +        ds_put_format(&match, REGBIT_CONNTRACK_NAT" == 1 && ip4 && %s",
> +                      lb_protocols[i]);
> +        ds_put_format(&actions, REG_ORIG_DIP_IPV4 " = ip4.dst; "
> +                                REG_ORIG_TP_DPORT " = %s.dst; ct_lb;",
> +                      lb_protocols[i]);
> +        ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_PRE_STATEFUL, 120,
> +                      ds_cstr(&match), ds_cstr(&actions));
> +
> +        ds_clear(&match);
> +        ds_clear(&actions);
> +        ds_put_format(&match, REGBIT_CONNTRACK_NAT" == 1 && ip6 && %s",
> +                      lb_protocols[i]);
> +        ds_put_format(&actions, REG_ORIG_DIP_IPV6 " = ip6.dst; "
> +                                REG_ORIG_TP_DPORT " = %s.dst; ct_lb;",
> +                      lb_protocols[i]);
> +        ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_PRE_STATEFUL, 120,
> +                      ds_cstr(&match), ds_cstr(&actions));
> +    }
> +
> +    ds_destroy(&actions);
> +    ds_destroy(&match);
> +
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_PRE_STATEFUL, 110,
> +                       REGBIT_CONNTRACK_NAT" == 1", "ct_lb;");
> +
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_OUT_PRE_STATEFUL, 110,
> +                       REGBIT_CONNTRACK_NAT" == 1", "ct_lb;");
> +
> +    /* If REGBIT_CONNTRACK_DEFRAG is set as 1, then the packets should be
> +     * sent to conntrack for tracking and defragmentation. */
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_PRE_STATEFUL, 100,
> +                       REGBIT_CONNTRACK_DEFRAG" == 1", "ct_next;");
> +
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_OUT_PRE_STATEFUL, 100,
> +                       REGBIT_CONNTRACK_DEFRAG" == 1", "ct_next;");
> +}
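
To make the loop just above concrete, for TCP over IPv4 it expands to a
flow along these lines (a sketch, keeping the macro names rather than the
concrete registers they resolve to):

    S_SWITCH_IN_PRE_STATEFUL, priority 120
      match:  REGBIT_CONNTRACK_NAT == 1 && ip4 && tcp
      action: REG_ORIG_DIP_IPV4 = ip4.dst; REG_ORIG_TP_DPORT = tcp.dst; ct_lb;

plus the analogous ip6/udp/sctp variants.
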
> +
> +static void
> +build_generic_acls(struct hmap *lflows)
> +{
> +    /* Ingress and Egress ACL Table (Priority 0): Packets are allowed by
> +     * default.  A related rule at priority 1 is added below if there
> +     * are any stateful ACLs in this datapath. */
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_ACL, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_OUT_ACL, 0, "1", "next;");
> +
> +#if 0
> +    /* Add a 34000 priority flow to advance the service monitor reply
> +     * packets to skip applying ingress ACLs. */
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_ACL, 34000,
> +                          "eth.dst == $svc_monitor_mac", "next;");
> +
> +    /* Add a 34000 priority flow to advance the service monitor packets
> +     * generated by ovn-controller to skip applying egress ACLs. */
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_OUT_ACL, 34000,
> +                          "eth.src == $svc_monitor_mac", "next;");
> +#endif
> +}
> +
> +static void
> +build_generic_qos(struct hmap *lflows)
> +{
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_QOS_MARK, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_OUT_QOS_MARK, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_QOS_METER, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_OUT_QOS_METER, 0, "1", "next;");
> +}
> +
> +static void
> +build_generic_stateful(struct hmap *lflows)
> +{
> +    /* Ingress and Egress stateful Table (Priority 0): Packets are
> +     * allowed by default. */
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_STATEFUL, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_OUT_STATEFUL, 0, "1", "next;");
> +
> +    /* If REGBIT_CONNTRACK_COMMIT is set as 1, then the packets should be
> +     * committed to conntrack. We always set ct_label.blocked to 0 here as
> +     * any packet that makes it this far is part of a connection we
> +     * want to allow to continue. */
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_STATEFUL, 100,
> +                       REGBIT_CONNTRACK_COMMIT" == 1",
> +                       "ct_commit { ct_label.blocked = 0; }; next;");
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_OUT_STATEFUL, 100,
> +                       REGBIT_CONNTRACK_COMMIT" == 1",
> +                       "ct_commit { ct_label.blocked = 0; }; next;");
> +}
> +
> +static void
> +build_generic_lb_hairpin(struct hmap *lflows)
> +{
> +    /* Ingress Pre-Hairpin/Nat-Hairpin/Hairpin tables (Priority 0).
> +     * Packets that don't need hairpinning should continue processing.
> +     */
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_PRE_HAIRPIN, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_NAT_HAIRPIN, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_HAIRPIN, 0, "1", "next;");
> +}
> +
> +static void
> +build_generic_l2_lkup(struct hmap *lflows)
> +{
> +#if 0
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_L2_LKUP, 110,
> +                          "eth.dst == $svc_monitor_mac",
> +                          "handle_svc_check(inport);");
> +#endif
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_L2_LKUP, 0, "1",
> +                          "outport = get_fdb(eth.dst); next;");
> +
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_L2_UNKNOWN, 0, "1", "output;");
> +}
> +
> +static bool
> +is_dp_vlan_transparent(const struct sbrec_datapath_binding *dp)
> +{
> +    return smap_get_bool(&dp->options, "vlan-passthru", false);
> +}
> +
> +static bool
> +has_dp_lb_vip(const struct sbrec_datapath_binding *dp)
> +{
> +    return smap_get_bool(&dp->options, "has-lb-vips", false);
> +}
> +
> +static bool
> +has_dp_stateful_acls(const struct sbrec_datapath_binding *dp)
> +{
> +    return smap_get_bool(&dp->options, "has-stateful-acls", false);
> +}
> +
> +static bool
> +has_dp_acls(const struct sbrec_datapath_binding *dp)
> +{
> +    return smap_get_bool(&dp->options, "has-acls", false);
> +}
> +
> +static bool
> +has_dp_unknown_lports(const struct sbrec_datapath_binding *dp)
> +{
> +    return smap_get_bool(&dp->options, "has-unknown", false);
> +}
> +
> +static bool
> +has_dp_dns_records(const struct sbrec_datapath_binding *dp)
> +{
> +    return smap_get_bool(&dp->options, "has-dns-records", false);
> +}
> +
> +static void
> +build_lswitch_pre_acls(struct hmap *lflows, bool has_stateful_acls,
> +                       const struct uuid *lflow_uuid, uint32_t *lflow_uuid_idx)
> +{
> +    /* If there are any stateful ACL rules in this datapath, we may
> +     * send IP packets for some (allow) filters through the conntrack action,
> +     * which handles defragmentation, in order to match L4 headers. */
> +    if (has_stateful_acls) {
> +        /* Ingress and Egress Pre-ACL Table (Priority 110).
> +         *
> +         * Don't send ND and ICMP destination unreachable
> +         * packets to conntrack. */
> +        ovn_ctrl_lflow_add_uuid(
> +            lflows, S_SWITCH_IN_PRE_ACL, 110,
> +            "nd || nd_rs || nd_ra || mldv1 || mldv2 || "
> +            "(udp && udp.src == 546 && udp.dst == 547)", "next;",
> +            lflow_uuid, lflow_uuid_idx);
> +
> +        ovn_ctrl_lflow_add_uuid(
> +            lflows, S_SWITCH_OUT_PRE_ACL, 110,
> +            "nd || nd_rs || nd_ra || mldv1 || mldv2 || "
> +            "(udp && udp.src == 546 && udp.dst == 547)", "next;",
> +            lflow_uuid, lflow_uuid_idx);
> +
> +        /* Ingress and Egress Pre-ACL Table (Priority 100).
> +         *
> +         * Regardless of whether the ACL is "from-lport" or "to-lport",
> +         * we need rules in both the ingress and egress table, because
> +         * the return traffic needs to be followed.
> +         *
> +         * 'REGBIT_CONNTRACK_DEFRAG' is set to let the pre-stateful table send
> +         * it to conntrack for tracking and defragmentation. */
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_PRE_ACL, 100, "ip",
> +                                REGBIT_CONNTRACK_DEFRAG" = 1; next;",
> +                                lflow_uuid, lflow_uuid_idx);
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_OUT_PRE_ACL, 100, "ip",
> +                                REGBIT_CONNTRACK_DEFRAG" = 1; next;",
> +                                lflow_uuid, lflow_uuid_idx);
> +    }
> +}
> +
> +static void
> +build_lswitch_pre_lb(struct hmap *lflows, bool vip_configured,
> +                     const struct uuid *lflow_uuid, uint32_t *lflow_uuid_idx)
> +{
> +    /* 'REGBIT_CONNTRACK_NAT' is set to let the pre-stateful table send
> +     * packet to conntrack for defragmentation and possibly for unNATting.
> +     *
> +     * Send all the packets to conntrack in the ingress pipeline if the
> +     * logical switch has a load balancer with VIP configured. Earlier
> +     * we used to set the REGBIT_CONNTRACK_DEFRAG flag in the ingress pipeline
> +     * if the IP destination matches the VIP. But this causes a few issues
> +     * when a logical switch has no ACLs configured with allow-related.
> +     * To understand the issue, let's take a TCP load balancer -
> +     * 10.0.0.10:80=10.0.0.3:80.
> +     * If a logical port - p1 with IP - 10.0.0.5 opens a TCP connection with
> +     * the VIP - 10.0.0.10, then the packet in the ingress pipeline of 'p1'
> +     * is sent to p1's conntrack zone id and the packet is load balanced
> +     * to the backend - 10.0.0.3.  The reply packet from the backend lport
> +     * is not sent to the conntrack of the backend lport's zone id.  This is
> +     * fine as long as the packet is valid.  But suppose the backend lport
> +     * sends an invalid TCP packet (like an incorrect sequence number); the
> +     * packet gets delivered to the lport 'p1' without being unDNATed back
> +     * to the VIP - 10.0.0.10, and this causes the connection to be reset by
> +     * lport p1's VIF.
> +     *
> +     * We can't fix this issue by adding a logical flow to drop ct.inv packets
> +     * in the egress pipeline since it will drop all other connections not
> +     * destined to the load balancers.
> +     *
> +     * To fix this issue, we send all the packets to the conntrack in the
> +     * ingress pipeline if a load balancer is configured. We can now
> +     * add a lflow to drop ct.inv packets.
> +     */
> +    if (vip_configured) {
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_PRE_LB,
> +                                100, "ip", REGBIT_CONNTRACK_NAT" = 1; next;",
> +                                lflow_uuid, lflow_uuid_idx);
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_OUT_PRE_LB,
> +                                100, "ip", REGBIT_CONNTRACK_NAT" = 1; next;",
> +                                lflow_uuid, lflow_uuid_idx);
> +    }
> +}
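
So, if I follow, a switch datapath with "has-lb-vips" set in its options
ends up with just this pair of flows from this function (a sketch, again
using the macro names rather than the concrete registers):

    S_SWITCH_IN_PRE_LB,  priority 100, match (ip), action (REGBIT_CONNTRACK_NAT = 1; next;)
    S_SWITCH_OUT_PRE_LB, priority 100, match (ip), action (REGBIT_CONNTRACK_NAT = 1; next;)

i.e. every IP packet is sent to conntrack, per the rationale in the comment.
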
> +
> +static void
> +build_lswitch_acl_hints(struct hmap *lflows, bool has_acls_or_lbs,
> +                        const struct uuid *lflow_uuid, uint32_t *lflow_uuid_idx)
> +{
> +    /* This stage builds hints for the IN/OUT_ACL stage. Based on various
> +     * combinations of ct flags packets may hit only a subset of the logical
> +     * flows in the IN/OUT_ACL stage.
> +     *
> +     * Populating ACL hints first and storing them in registers simplifies
> +     * the logical flow match expressions in the IN/OUT_ACL stage and
> +     * generates fewer OpenFlow flows.
> +     *
> +     * Certain combinations of ct flags might be valid matches for multiple
> +     * types of ACL logical flows (e.g., allow/drop). In such cases hints
> +     * corresponding to all potential matches are set.
> +     */
> +
> +    enum ovn_stage stages[] = {
> +        S_SWITCH_IN_ACL_HINT,
> +        S_SWITCH_OUT_ACL_HINT,
> +    };
> +
> +    for (size_t i = 0; i < ARRAY_SIZE(stages); i++) {
> +        enum ovn_stage stage = stages[i];
> +
> +        /* In any case, advance to the next stage. */
> +        if (!has_acls_or_lbs) {
> +            ovn_ctrl_lflow_add_uuid(lflows, stage, UINT16_MAX, "1", "next;",
> +                                    lflow_uuid, lflow_uuid_idx);
> +        } else {
> +            ovn_ctrl_lflow_add_uuid(lflows, stage, 0, "1", "next;",
> +                                    lflow_uuid, lflow_uuid_idx);
> +        }
> +
> +        if (!has_acls_or_lbs) {
> +            continue;
> +        }
> +
> +        /* New, not already established connections, may hit either allow
> +         * or drop ACLs. For allow ACLs, the connection must also be committed
> +         * to conntrack so we set REGBIT_ACL_HINT_ALLOW_NEW.
> +         */
> +        ovn_ctrl_lflow_add_uuid(lflows, stage, 7, "ct.new && !ct.est",
> +                                REGBIT_ACL_HINT_ALLOW_NEW " = 1; "
> +                                REGBIT_ACL_HINT_DROP " = 1; "
> +                                "next;", lflow_uuid, lflow_uuid_idx);
> +
> +        /* Already established connections in the "request" direction that
> +         * are already marked as "blocked" may hit either:
> +         * - allow ACLs for connections that were previously allowed by a
> +         *   policy that was deleted and is being readded now. In this case
> +         *   the connection should be recommitted so we set
> +         *   REGBIT_ACL_HINT_ALLOW_NEW.
> +         * - drop ACLs.
> +         */
> +        ovn_ctrl_lflow_add_uuid(
> +            lflows, stage, 6,
> +            "!ct.new && ct.est && !ct.rpl && ct_label.blocked == 1",
> +            REGBIT_ACL_HINT_ALLOW_NEW " = 1; "REGBIT_ACL_HINT_DROP " = 1; "
> +            "next;", lflow_uuid, lflow_uuid_idx);
> +
> +        /* Not tracked traffic can either be allowed or dropped. */
> +        ovn_ctrl_lflow_add_uuid(lflows, stage, 5, "!ct.trk",
> +                                REGBIT_ACL_HINT_ALLOW " = 1; "
> +                                REGBIT_ACL_HINT_DROP " = 1; "
> +                                "next;", lflow_uuid, lflow_uuid_idx);
> +
> +        /* Already established connections in the "request" direction may hit
> +         * either:
> +         * - allow ACLs in which case the traffic should be allowed so we set
> +         *   REGBIT_ACL_HINT_ALLOW.
> +         * - drop ACLs in which case the traffic should be blocked and the
> +         *   connection must be committed with ct_label.blocked set so we set
> +         *   REGBIT_ACL_HINT_BLOCK.
> +         */
> +        ovn_ctrl_lflow_add_uuid(
> +            lflows, stage, 4,
> +            "!ct.new && ct.est && !ct.rpl && ct_label.blocked == 0",
> +            REGBIT_ACL_HINT_ALLOW " = 1; "REGBIT_ACL_HINT_BLOCK " = 1; "
> +            "next;", lflow_uuid, lflow_uuid_idx);
> +
> +        /* Not established or established and already blocked connections may
> +         * hit drop ACLs.
> +         */
> +        ovn_ctrl_lflow_add_uuid(lflows, stage, 3, "!ct.est",
> +                                REGBIT_ACL_HINT_DROP " = 1; "
> +                                "next;", lflow_uuid, lflow_uuid_idx);
> +        ovn_ctrl_lflow_add_uuid(lflows, stage, 2,
> +                                "ct.est && ct_label.blocked == 1",
> +                                REGBIT_ACL_HINT_DROP " = 1; next;",
> +                                lflow_uuid, lflow_uuid_idx);
> +
> +        /* Established connections that were previously allowed might hit
> +         * drop ACLs in which case the connection must be committed with
> +         * ct_label.blocked set.
> +         */
> +        ovn_ctrl_lflow_add_uuid(lflows, stage, 1,
> +                                "ct.est && ct_label.blocked == 0",
> +                                REGBIT_ACL_HINT_BLOCK " = 1; next;",
> +                                lflow_uuid, lflow_uuid_idx);
> +    }
> +}
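
As a worked example of the hint priorities above: the first SYN of a
connection matches "ct.new && !ct.est" at priority 7 and sets both the
allow-new and drop hints; once the connection has been committed and
allowed, request-direction packets match "!ct.new && ct.est && !ct.rpl &&
ct_label.blocked == 0" at priority 4 and set the allow and block hints
instead, so the ACL stage only has to test the hint bits.
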
> +
> +static void
> +build_lswitch_acls(struct hmap *lflows, bool has_acls_or_lbs,
> +                   bool has_stateful, bool use_ct_inv_match,
> +                   const struct uuid *lflow_uuid, uint32_t *lflow_uuid_idx)
> +{
> +    /* Ingress and Egress ACL Table (Priority 0): Packets are allowed by
> +     * default.  If the logical switch has no ACLs and no load balancers,
> +     * then add a 65535-priority flow to advance the packet to the next
> +     * stage.
> +     *
> +     * A related rule at priority 1 is added below if there
> +     * are any stateful ACLs in this datapath. */
> +    if (!has_acls_or_lbs) {
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_ACL, UINT16_MAX, "1",
> +                                "next;", lflow_uuid, lflow_uuid_idx);
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_OUT_ACL, UINT16_MAX, "1",
> +                                "next;", lflow_uuid, lflow_uuid_idx);
> +    } else {
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_ACL, 0, "1",
> +                                "next;", lflow_uuid, lflow_uuid_idx);
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_OUT_ACL, 0, "1",
> +                                "next;", lflow_uuid, lflow_uuid_idx);
> +    }
> +
> +    if (has_stateful) {
> +        /* Ingress and Egress ACL Table (Priority 1).
> +         *
> +         * By default, traffic is allowed.  This is partially handled by
> +         * the Priority 0 ACL flows added earlier, but we also need to
> +         * commit IP flows.  This is because, while the initiator's
> +         * direction may not have any stateful rules, the server's may
> +         * and then its return traffic would not have an associated
> +         * conntrack entry and would return "+invalid".
> +         *
> +         * We use "ct_commit" for a connection that is not already known
> +         * by the connection tracker.  Once a connection is committed,
> +         * subsequent packets will hit the flow at priority 0 that just
> +         * uses "next;"
> +         *
> +         * We also check for established connections that have ct_label.blocked
> +         * set on them.  That's a connection that was disallowed, but is
> +         * now allowed by policy again since it hit this default-allow flow.
> +         * We need to set ct_label.blocked=0 to let the connection continue,
> +         * which will be done by ct_commit() in the "stateful" stage.
> +         * Subsequent packets will hit the flow at priority 0 that just
> +         * uses "next;". */
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_ACL, 1,
> +                            "ip && (!ct.est || "
> +                            "(ct.est && ct_label.blocked == 1))",
> +                            REGBIT_CONNTRACK_COMMIT" = 1; next;",
> +                            lflow_uuid, lflow_uuid_idx);
> +        ovn_ctrl_lflow_add_uuid(
> +            lflows, S_SWITCH_OUT_ACL, 1,
> +            "ip && (!ct.est || (ct.est && ct_label.blocked == 1))",
> +            REGBIT_CONNTRACK_COMMIT" = 1; next;",
> +            lflow_uuid, lflow_uuid_idx);
> +
> +        /* Ingress and Egress ACL Table (Priority 65532).
> +         *
> +         * Always drop traffic that's in an invalid state.  Also drop
> +         * reply direction packets for connections that have been marked
> +         * for deletion (bit 0 of ct_label is set).
> +         *
> +         * This is enforced at a higher priority than ACLs can be defined. */
> +        char *match = xasprintf("%s(ct.est && ct.rpl && ct_label.blocked == 1)",
> +                                use_ct_inv_match ? "ct.inv || " : "");
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_ACL, UINT16_MAX - 3, match,
> +                            "drop;", lflow_uuid, lflow_uuid_idx);
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_OUT_ACL, UINT16_MAX - 3, match,
> +                            "drop;", lflow_uuid, lflow_uuid_idx);
> +        free(match);
> +
> +        /* Ingress and Egress ACL Table (Priority 65532).
> +         *
> +         * Allow reply traffic that is part of an established
> +         * conntrack entry that has not been marked for deletion
> +         * (bit 0 of ct_label).  We only match traffic in the
> +         * reply direction because we want traffic in the request
> +         * direction to hit the currently defined policy from ACLs.
> +         *
> +         * This is enforced at a higher priority than ACLs can be defined. */
> +        match = xasprintf("ct.est && !ct.rel && !ct.new%s && "
> +                          "ct.rpl && ct_label.blocked == 0",
> +                          use_ct_inv_match ? " && !ct.inv" : "");
> +
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_ACL, UINT16_MAX - 3,
> +                                match, "next;", lflow_uuid, lflow_uuid_idx);
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_OUT_ACL, UINT16_MAX - 3,
> +                                match, "next;", lflow_uuid, lflow_uuid_idx);
> +        free(match);
> +
> +        /* Ingress and Egress ACL Table (Priority 65532).
> +         *
> +         * Allow traffic that is related to an existing conntrack entry that
> +         * has not been marked for deletion (bit 0 of ct_label).
> +         *
> +         * This is enforced at a higher priority than ACLs can be defined.
> +         *
> +         * NOTE: This does not support related data sessions (e.g.,
> +         * a dynamically negotiated FTP data channel), but will allow
> +         * related traffic such as an ICMP Port Unreachable through
> +         * that's generated from a non-listening UDP port.  */
> +        match = xasprintf("!ct.est && ct.rel && !ct.new%s && "
> +                          "ct_label.blocked == 0",
> +                          use_ct_inv_match ? " && !ct.inv" : "");
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_ACL, UINT16_MAX - 3,
> +                                match, "next;", lflow_uuid, lflow_uuid_idx);
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_OUT_ACL, UINT16_MAX - 3,
> +                                match, "next;", lflow_uuid, lflow_uuid_idx);
> +        free(match);
> +
> +        /* Ingress and Egress ACL Table (Priority 65532).
> +         *
> +         * Don't send ND packets to conntrack. */
> +        ovn_ctrl_lflow_add_uuid(
> +            lflows, S_SWITCH_IN_ACL, UINT16_MAX - 3,
> +            "nd || nd_ra || nd_rs || mldv1 || mldv2", "next;",
> +            lflow_uuid, lflow_uuid_idx);
> +        ovn_ctrl_lflow_add_uuid(
> +            lflows, S_SWITCH_OUT_ACL, UINT16_MAX - 3,
> +            "nd || nd_ra || nd_rs || mldv1 || mldv2", "next;",
> +            lflow_uuid, lflow_uuid_idx);
> +    }
> +}
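
In other words, once stateful processing is on, the reply direction of an
allowed connection is admitted entirely by the priority-65532 flow matching
"ct.est && !ct.rel && !ct.new && ct.rpl && ct_label.blocked == 0" (plus
"!ct.inv" when use_ct_inv_match is set) and never re-evaluates the
configured ACLs; only the request direction keeps hitting the ACL flows.
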
> +
> +static void
> +build_lswitch_lb_hairpin(struct hmap *lflows, bool has_lb_vips,
> +                         const struct uuid *lflow_uuid, uint32_t *lflow_uuid_idx)
> +{
> +    if (has_lb_vips) {
> +        /* Check if the packet needs to be hairpinned.
> +         * Set REGBIT_HAIRPIN in the original direction and
> +         * REGBIT_HAIRPIN_REPLY in the reply direction.
> +         */
> +        ovn_ctrl_lflow_add_uuid(
> +            lflows, S_SWITCH_IN_PRE_HAIRPIN, 100, "ip && ct.trk",
> +            REGBIT_HAIRPIN " = chk_lb_hairpin(); "
> +            REGBIT_HAIRPIN_REPLY " = chk_lb_hairpin_reply(); "
> +            "next;", lflow_uuid, lflow_uuid_idx);
> +
> +        /* If packet needs to be hairpinned, snat the src ip with the VIP
> +         * for new sessions. */
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_NAT_HAIRPIN, 100,
> +                                "ip && ct.new && ct.trk"
> +                                " && "REGBIT_HAIRPIN " == 1",
> +                                "ct_snat_to_vip; next;",
> +                                lflow_uuid, lflow_uuid_idx);
> +
> +        /* If packet needs to be hairpinned, for established sessions there
> +         * should already be an SNAT conntrack entry.
> +         */
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_NAT_HAIRPIN, 100,
> +                                "ip && ct.est && ct.trk"
> +                                " && "REGBIT_HAIRPIN " == 1",
> +                                "ct_snat;",
> +                                lflow_uuid, lflow_uuid_idx);
> +
> +        /* For the reply of hairpinned traffic, snat the src ip to the VIP. */
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_NAT_HAIRPIN, 90,
> +                                "ip && "REGBIT_HAIRPIN_REPLY " == 1",
> +                                "ct_snat;",
> +                                lflow_uuid, lflow_uuid_idx);
> +
> +        /* Ingress Hairpin table.
> +         * - Priority 1: Packets that were SNAT-ed for hairpinning should be
> +         *   looped back (i.e., swap ETH addresses and send back on inport).
> +         */
> +        ovn_ctrl_lflow_add_uuid(
> +            lflows, S_SWITCH_IN_HAIRPIN, 1,
> +            "("REGBIT_HAIRPIN " == 1 || " REGBIT_HAIRPIN_REPLY " == 1)",
> +            "eth.dst <-> eth.src; outport = inport; flags.loopback = 1; "
> +            "output;", lflow_uuid, lflow_uuid_idx);
> +    }
> +}
> +
> +static void
> +build_lswitch_pre_acls_and_acls(struct hmap *lflows,
> +                                const struct sbrec_datapath_binding *dp,
> +                                bool use_ct_inv_match,
> +                                const struct uuid *lflow_uuid,
> +                                uint32_t *lflow_uuid_idx)
> +{
> +    bool has_stateful_acls = has_dp_stateful_acls(dp);
> +    bool has_lb_vips = has_dp_lb_vip(dp);
> +    bool has_stateful = (has_stateful_acls || has_lb_vips);
> +    bool has_acls_or_lbs = has_dp_acls(dp) || has_lb_vips;
> +
> +    build_lswitch_pre_acls(lflows, has_stateful_acls, lflow_uuid,
> +                           lflow_uuid_idx);
> +    build_lswitch_pre_lb(lflows, has_lb_vips, lflow_uuid, lflow_uuid_idx);
> +    build_lswitch_acl_hints(lflows, has_acls_or_lbs, lflow_uuid,
> +                            lflow_uuid_idx);
> +    build_lswitch_acls(lflows, has_acls_or_lbs, has_stateful,
> +                       use_ct_inv_match, lflow_uuid, lflow_uuid_idx);
> +    build_lswitch_lb_hairpin(lflows, has_lb_vips, lflow_uuid, lflow_uuid_idx);
> +
> +    /* Add a 34000 priority flow to advance the DNS reply from ovn-controller,
> +     * if the CMS has configured DNS records for the datapath.
> +     */
> +    if (has_dp_dns_records(dp)) {
> +        const char *actions = has_stateful ? "ct_commit; next;" : "next;";
> +        ovn_ctrl_lflow_add_uuid(
> +            lflows, S_SWITCH_OUT_ACL, 34000, "udp.src == 53",
> +            actions, lflow_uuid, lflow_uuid_idx);
> +    }
> +
> +#if 0
> +    /* Add a 34000 priority flow to advance the service monitor reply
> +     * packets to skip applying ingress ACLs. */
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_IN_ACL, 34000,
> +                  "eth.dst == $svc_monitor_mac", "next;");
> +
> +    /* Add a 34000 priority flow to advance the service monitor packets
> +     * generated by ovn-controller to skip applying egress ACLs. */
> +    ovn_ctrl_lflow_add(lflows, S_SWITCH_OUT_ACL, 34000,
> +                       "eth.src == $svc_monitor_mac", "next;");
> +#endif
> +}
> +
> +static void
> +build_lswitch_dns_lkup(struct hmap *lflows,
> +                       const struct uuid *lflow_uuid,
> +                       uint32_t *lflow_uuid_idx)
> +{
> +    ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_DNS_LOOKUP, 100,
> +                            "udp.dst == 53",
> +                            REGBIT_DNS_LOOKUP_RESULT" = dns_lookup(); next;",
> +                            lflow_uuid, lflow_uuid_idx);
> +    const char *dns_action =
> +        "eth.dst <-> eth.src; ip4.src <-> ip4.dst; "
> +        "udp.dst = udp.src; udp.src = 53; outport = inport; "
> +        "flags.loopback = 1; output;";
> +    const char *dns_match = "udp.dst == 53 && "REGBIT_DNS_LOOKUP_RESULT;
> +    ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_DNS_RESPONSE, 100,
> +                            dns_match, dns_action,
> +                            lflow_uuid, lflow_uuid_idx);
> +    dns_action = "eth.dst <-> eth.src; ip6.src <-> ip6.dst; "
> +                 "udp.dst = udp.src; udp.src = 53; outport = inport; "
> +                 "flags.loopback = 1; output;";
> +    ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_DNS_RESPONSE, 100,
> +                            dns_match, dns_action,
> +                            lflow_uuid, lflow_uuid_idx);
> +}
> +
> +static void
> +build_lswitch_dp_lflows(struct hmap *lflows,
> +                        struct local_datapath *ldp,
> +                        bool use_ct_inv_match)
> +{
> +    uint32_t lflow_uuid_idx = 1;
> +
> +    /* Logical VLANs not supported. */
> +    if (!is_dp_vlan_transparent(ldp->datapath)) {
> +        /* Block logical VLANs. */
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_PORT_SEC_L2, 100,
> +                                "vlan.present", "drop;",
> +                                &ldp->datapath->header_.uuid,
> +                                &lflow_uuid_idx);
> +    }
> +
> +    build_lswitch_pre_acls_and_acls(lflows, ldp->datapath, use_ct_inv_match,
> +                                    &ldp->datapath->header_.uuid,
> +                                    &lflow_uuid_idx);
> +
> +    if (has_dp_dns_records(ldp->datapath)) {
> +        build_lswitch_dns_lkup(lflows, &ldp->datapath->header_.uuid,
> +                               &lflow_uuid_idx);
> +    }
> +
> +    if (has_dp_unknown_lports(ldp->datapath)) {
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_L2_LKUP, 0, "1",
> +                                "outport = \""MC_UNKNOWN"\"; output;",
> +                                 &ldp->datapath->header_.uuid,
> +                                 &lflow_uuid_idx);
> +    }
> +}
> +
> +static void
> +build_generic_lr_lookup(struct hmap *lflows)
> +{
> +    /* For other packet types, we can skip neighbor learning.
> +     * So set REGBIT_LOOKUP_NEIGHBOR_RESULT to 1. */
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_LOOKUP_NEIGHBOR, 0, "1",
> +                          REGBIT_LOOKUP_NEIGHBOR_RESULT" = 1; next;");
> +
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_LEARN_NEIGHBOR, 90,
> +                          "arp", "put_arp(inport, arp.spa, arp.sha); next;");
> +
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_LEARN_NEIGHBOR, 90,
> +                          "nd_na", "put_nd(inport, nd.target, nd.tll); next;");
> +
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_LEARN_NEIGHBOR, 90,
> +                          "nd_ns", "put_nd(inport, ip6.src, nd.sll); next;");
> +}
> +
> +static void
> +build_generic_lr_ip_input(struct hmap *lflows)
> +{
> +    /* L3 admission control: drop multicast and broadcast source, localhost
> +     * source or destination, and zero network source or destination
> +     * (priority 100). */
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_IP_INPUT, 100,
> +                          "ip4.src_mcast ||"
> +                          "ip4.src == 255.255.255.255 || "
> +                          "ip4.src == 127.0.0.0/8 || "
> +                          "ip4.dst == 127.0.0.0/8 || "
> +                          "ip4.src == 0.0.0.0/8 || "
> +                          "ip4.dst == 0.0.0.0/8",
> +                          "drop;");
> +
> +    /* Drop ARP packets (priority 85). ARP request packets for the router's
> +     * own IPs are handled with priority-90 flows.
> +     * Drop IPv6 ND packets (priority 85). ND NA packets for the router's
> +     * own IPs are handled with priority-90 flows.
> +     */
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_IP_INPUT, 85,
> +                          "arp || nd", "drop;");
> +
> +    /* Allow IPv6 multicast traffic that's supposed to reach the
> +     * router pipeline (e.g., router solicitations).
> +     */
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_IP_INPUT, 84, "nd_rs || nd_ra",
> +                          "next;");
> +
> +    /* Drop other reserved multicast. */
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_IP_INPUT, 83,
> +                          "ip6.mcast_rsvd", "drop;");
> +
> +    /* Drop Ethernet local broadcast.  By definition this traffic should
> +     * not be forwarded. */
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_IP_INPUT, 50,
> +                       "eth.bcast", "drop;");
> +
> +    /* TTL discard */
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_IP_INPUT, 30,
> +                       "ip4 && ip.ttl == {0, 1}", "drop;");
> +
> +    /* Pass other traffic not already handled to the next table for
> +     * routing. */
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_IP_INPUT, 0, "1", "next;");
> +}
> +
> +static void
> +build_generic_lr_arp_resolve(struct hmap *lflows)
> +{
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_ARP_RESOLVE, 500,
> +                          "ip4.mcast || ip6.mcast", "next;");
> +
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_ARP_RESOLVE, 0, "ip4",
> +                          "get_arp(outport, " REG_NEXT_HOP_IPV4 "); next;");
> +
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_ARP_RESOLVE, 0, "ip6",
> +                          "get_nd(outport, " REG_NEXT_HOP_IPV6 "); next;");
> +}
> +
> +static void
> +build_generic_lr_arp_request(struct hmap *lflows)
> +{
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_ARP_REQUEST, 100,
> +                          "eth.dst == 00:00:00:00:00:00 && ip4",
> +                          "arp { "
> +                          "eth.dst = ff:ff:ff:ff:ff:ff; "
> +                          "arp.spa = " REG_SRC_IPV4 "; "
> +                          "arp.tpa = " REG_NEXT_HOP_IPV4 "; "
> +                          "arp.op = 1; " /* ARP request */
> +                          "output; "
> +                          "};");
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_ARP_REQUEST, 100,
> +                          "eth.dst == 00:00:00:00:00:00 && ip6",
> +                          "nd_ns { "
> +                          "nd.target = " REG_NEXT_HOP_IPV6 "; "
> +                          "output; "
> +                          "};");
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_ARP_REQUEST, 0, "1", "output;");
> +}
> +
> +void
> +build_lrouter_generic_lflows(struct hmap *lflows)
> +{
> +    /* Logical VLANs not supported.
> +     * Broadcast/multicast source address is invalid. */
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_ADMISSION, 100,
> +                          "vlan.present || eth.src[40]", "drop;");
> +
> +    build_generic_lr_lookup(lflows);
> +    build_generic_lr_ip_input(lflows);
> +
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_DEFRAG, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_UNSNAT, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_DNAT, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_ECMP_STATEFUL, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_ND_RA_OPTIONS, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_ND_RA_RESPONSE, 0, "1", "next;");
> +
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_IP_ROUTING, 550,
> +                       "nd_rs || nd_ra", "drop;");
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_IP_ROUTING_ECMP, 150,
> +                       REG_ECMP_GROUP_ID" == 0", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_POLICY, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_POLICY_ECMP, 150,
> +                       REG_ECMP_GROUP_ID" == 0", "next;");
> +
> +    build_generic_lr_arp_resolve(lflows);
> +
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_CHK_PKT_LEN, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_LARGER_PKTS, 0, "1", "next;");
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_GW_REDIRECT, 0, "1", "next;");
> +
> +    build_generic_lr_arp_request(lflows);
> +
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_OUT_UNDNAT, 0, "1", "next;");
> +
> +    /* Send the IPv6 NS packets to the next table.  When ovn-controller
> +     * generates an IPv6 NS (for the nd_ns{} action), the injected
> +     * packet would otherwise go through conntrack, which is not required. */
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_OUT_SNAT, 120, "nd_ns", "next;");
> +
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_OUT_SNAT, 0, "1", "next;");
> +
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_OUT_EGR_LOOP, 0, "1", "next;");
> +}
> +
> +static bool
> +is_learn_from_arp_request(const struct sbrec_datapath_binding *dp)
> +{
> +    return (!datapath_is_switch(dp) &&
> +            smap_get_bool(&dp->options,
> +                          "always-learn-from-arp-request", true));
> +}
> +
> +static void build_lrouter_neigh_learning_flows(
> +    struct hmap *lflows, const struct sbrec_datapath_binding *dp);
> +static void build_misc_local_traffic_drop_flows_for_lrouter(
> +    struct hmap *lflows, const struct sbrec_datapath_binding *dp);
> +
> +static void
> +build_lrouter_dp_lflows(struct hmap *lflows,
> +                        const struct sbrec_datapath_binding *dp)
> +{
> +    build_lrouter_neigh_learning_flows(lflows, dp);
> +    build_misc_local_traffic_drop_flows_for_lrouter(lflows, dp);
> +}
> +
> +static void
> +build_lrouter_neigh_learning_flows(struct hmap *lflows,
> +                                   const struct sbrec_datapath_binding *dp)
> +{
> +    struct ds match = DS_EMPTY_INITIALIZER;
> +    struct ds actions = DS_EMPTY_INITIALIZER;
> +
> +    bool learn_from_arp_request = is_learn_from_arp_request(dp);
> +
> +    ds_clear(&actions);
> +    ds_put_format(&actions, REGBIT_LOOKUP_NEIGHBOR_RESULT
> +                  " = lookup_arp(inport, arp.spa, arp.sha); %snext;",
> +                  learn_from_arp_request ? "" :
> +                  REGBIT_LOOKUP_NEIGHBOR_IP_RESULT" = 1; ");
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_LOOKUP_NEIGHBOR, 100,
> +                       "arp.op == 2", ds_cstr(&actions));
> +
> +    ds_clear(&actions);
> +    ds_put_format(&actions, REGBIT_LOOKUP_NEIGHBOR_RESULT
> +                  " = lookup_nd(inport, nd.target, nd.tll); %snext;",
> +                  learn_from_arp_request ? "" :
> +                  REGBIT_LOOKUP_NEIGHBOR_IP_RESULT" = 1; ");
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_LOOKUP_NEIGHBOR, 100, "nd_na",
> +                       ds_cstr(&actions));
> +
> +    ds_clear(&actions);
> +    ds_put_format(&actions, REGBIT_LOOKUP_NEIGHBOR_RESULT
> +                  " = lookup_nd(inport, ip6.src, nd.sll); %snext;",
> +                  learn_from_arp_request ? "" :
> +                  REGBIT_LOOKUP_NEIGHBOR_IP_RESULT
> +                  " = lookup_nd_ip(inport, ip6.src); ");
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_LOOKUP_NEIGHBOR, 100, "nd_ns",
> +                    ds_cstr(&actions));
> +
> +    /* For other packet types, we can skip neighbor learning.
> +     * So set REGBIT_LOOKUP_NEIGHBOR_RESULT to 1. */
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_LOOKUP_NEIGHBOR, 0, "1",
> +                    REGBIT_LOOKUP_NEIGHBOR_RESULT" = 1; next;");
> +
> +    /* Flows for LEARN_NEIGHBOR. */
> +    /* Skip Neighbor learning if not required. */
> +    ds_clear(&match);
> +    ds_put_format(&match, REGBIT_LOOKUP_NEIGHBOR_RESULT" == 1%s",
> +                  learn_from_arp_request ? "" :
> +                  " || "REGBIT_LOOKUP_NEIGHBOR_IP_RESULT" == 0");
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_LEARN_NEIGHBOR, 100,
> +                       ds_cstr(&match), "next;");
> +
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_LEARN_NEIGHBOR, 90,
> +                       "arp", "put_arp(inport, arp.spa, arp.sha); next;");
> +
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_LEARN_NEIGHBOR, 90,
> +                       "nd_na", "put_nd(inport, nd.target, nd.tll); next;");
> +
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_LEARN_NEIGHBOR, 90,
> +                       "nd_ns", "put_nd(inport, ip6.src, nd.sll); next;");
> +
> +    ds_destroy(&match);
> +    ds_destroy(&actions);
> +}
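
For example, with "always-learn-from-arp-request" left at its default of
true on a router datapath, the ARP-reply lookup flow above comes out as
(sketch):

    S_ROUTER_IN_LOOKUP_NEIGHBOR, priority 100, match (arp.op == 2),
      action (REGBIT_LOOKUP_NEIGHBOR_RESULT = lookup_arp(inport, arp.spa, arp.sha); next;)

and with the option set to false the action additionally sets
REGBIT_LOOKUP_NEIGHBOR_IP_RESULT = 1 between the lookup and the "next;".
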
> +
> +static void
> +build_misc_local_traffic_drop_flows_for_lrouter(
> +    struct hmap *lflows,
> +    const struct sbrec_datapath_binding *dp)
> +{
> +    bool mcast_relay = smap_get_bool(&dp->options, "mcast-relay", false);
> +    /* Allow other multicast if relay enabled (priority 82). */
> +    ovn_ctrl_lflow_add(lflows, S_ROUTER_IN_IP_INPUT, 82,
> +                      "ip4.mcast || ip6.mcast",
> +                       mcast_relay ? "next;" : "drop;");
> +}
> +
> +static bool
> +lsp_is_enabled(const struct sbrec_port_binding *pb)
> +{
> +    return smap_get_bool(&pb->options, "enabled", true);
> +}
> +
> +static bool
> +lsp_is_up(const struct sbrec_port_binding *pb)
> +{
> +    return pb->n_up && *pb->up;
> +}
> +
> +static void build_lswitch_input_port_sec_op(struct hmap *lflows,
> +                                            struct local_lport *,
> +                                            uint32_t *lflow_uuid_idx);
> +static void build_lswitch_output_port_sec_op(struct hmap *lflows,
> +                                             struct local_lport *,
> +                                             uint32_t *lflow_uuid_idx);
> +static void build_lswitch_learn_fdb_op(struct hmap *lflows,
> +                                       struct local_lport *,
> +                                       uint32_t *lflow_uuid_idx,
> +                                       struct ds *match,
> +                                       struct ds *actions);
> +static void build_lswitch_skip_conntrack_flows_op(struct hmap *lflows,
> +                                                  struct local_lport *,
> +                                                  uint32_t *lflow_uuid_idx,
> +                                                  struct ds *match);
> +static void build_lswitch_arp_nd_responder_skip_local(struct hmap *lflows,
> +                                                      struct local_lport *,
> +                                                      uint32_t *lflow_uuid_idx,
> +                                                      struct ds *match);
> +static void build_lswitch_arp_nd_responder_known_ips(struct hmap *lflows,
> +                                                     struct local_lport *,
> +                                                     uint32_t *lflow_uuid_idx,
> +                                                     struct ds *match,
> +                                                     struct ds *actions);
> +static void build_lswitch_ip_unicast_lookup(struct hmap *lflows,
> +                                            struct local_lport *,
> +                                            uint32_t *lflow_uuid_idx,
> +                                            struct ds *match,
> +                                            struct ds *actions);
> +
> +static void build_arp_resolve_flows_for_lsp_in_router(
> +    struct hmap *lflow, struct local_lport *, uint32_t *lflow_uuid_idx,
> +    struct ds *match, struct ds *actions);
> +
> +
> +static bool is_ip4_in_router_network(struct local_lport *, ovs_be32 ip);
> +static bool is_ip6_in_router_network(struct local_lport *, struct in6_addr);
> +static void op_put_v4_networks(struct ds *ds, const struct local_lport *op,
> +                               bool add_bcast);
> +static void op_put_v6_networks(struct ds *ds, const struct local_lport *op);
> +
> +static void
> +build_lswitch_port_lflows(struct hmap *lflows, struct local_lport *op)
> +{
> +    uint32_t lflow_uuid_idx = 1;
> +    struct ds match = DS_EMPTY_INITIALIZER;
> +    struct ds actions = DS_EMPTY_INITIALIZER;
> +
> +    build_lswitch_input_port_sec_op(lflows, op, &lflow_uuid_idx);
> +    build_lswitch_output_port_sec_op(lflows, op, &lflow_uuid_idx);
> +    build_lswitch_learn_fdb_op(lflows, op, &lflow_uuid_idx,
> +                               &match, &actions);
> +    build_lswitch_skip_conntrack_flows_op(lflows, op, &lflow_uuid_idx, &match);
> +    build_lswitch_arp_nd_responder_skip_local(lflows, op,
> +                                              &lflow_uuid_idx, &match);
> +    build_lswitch_arp_nd_responder_known_ips(lflows, op,
> +                                             &lflow_uuid_idx, &match,
> +                                             &actions);
> +    build_lswitch_ip_unicast_lookup(lflows, op, &lflow_uuid_idx,
> +                                    &match, &actions);
> +
> +    build_arp_resolve_flows_for_lsp_in_router(lflows, op, &lflow_uuid_idx,
> +                                              &match, &actions);
> +    ds_destroy(&match);
> +    ds_destroy(&actions);
> +}
> +
> +/* Appends port security constraints on L2 address field 'eth_addr_field'
> + * (e.g. "eth.src" or "eth.dst") to 'match'.  'ps_addrs', with 'n_ps_addrs'
> + * elements, is the collection of port_security constraints from an
> + * OVN_NB Logical_Switch_Port row generated by extract_lsp_addresses(). */
> +static void
> +build_port_security_l2(const char *eth_addr_field,
> +                       struct lport_addresses *ps_addrs,
> +                       unsigned int n_ps_addrs,
> +                       struct ds *match)
> +{
> +    if (!n_ps_addrs) {
> +        return;
> +    }
> +
> +    ds_put_format(match, " && %s == {", eth_addr_field);
> +
> +    for (size_t i = 0; i < n_ps_addrs; i++) {
> +        ds_put_format(match, "%s ", ps_addrs[i].ea_s);
> +    }
> +    ds_chomp(match, ' ');
> +    ds_put_cstr(match, "}");
> +}
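
For illustration, if a (hypothetical) port has the two port-security MACs
00:00:00:00:00:01 and 00:00:00:00:00:02, the fragment appended to 'match'
here is:

     && eth.src == {00:00:00:00:00:01 00:00:00:00:00:02}

(entries space-separated, trailing space chomped before the closing brace).
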
> +
> +static void
> +build_port_security_ipv6_flow(
> +    enum ovn_pipeline pipeline, struct ds *match, struct eth_addr ea,
> +    struct ipv6_netaddr *ipv6_addrs, int n_ipv6_addrs)
> +{
> +    char ip6_str[INET6_ADDRSTRLEN + 1];
> +
> +    ds_put_format(match, " && %s == {",
> +                  pipeline == P_IN ? "ip6.src" : "ip6.dst");
> +
> +    /* Allow link-local address. */
> +    struct in6_addr lla;
> +    in6_generate_lla(ea, &lla);
> +    ipv6_string_mapped(ip6_str, &lla);
> +    ds_put_format(match, "%s, ", ip6_str);
> +
> +    /* Allow ip6.dst=ff00::/8 for multicast packets */
> +    if (pipeline == P_OUT) {
> +        ds_put_cstr(match, "ff00::/8, ");
> +    }
> +    for (size_t i = 0; i < n_ipv6_addrs; i++) {
> +        /* When the netmask is applied, if the host portion is
> +         * non-zero, the host can only use the specified
> +         * address.  If zero, the host is allowed to use any
> +         * address in the subnet.
> +         */
> +        if (ipv6_addrs[i].plen == 128
> +            || !ipv6_addr_is_host_zero(&ipv6_addrs[i].addr,
> +                                       &ipv6_addrs[i].mask)) {
> +            ds_put_format(match, "%s, ", ipv6_addrs[i].addr_s);
> +        } else {
> +            ds_put_format(match, "%s/%d, ", ipv6_addrs[i].network_s,
> +                          ipv6_addrs[i].plen);
> +        }
> +    }
> +    /* Replace ", " by "}". */
> +    ds_chomp(match, ' ');
> +    ds_chomp(match, ',');
> +    ds_put_cstr(match, "}");
> +}
> +
> +static void
> +build_port_security_ipv6_nd_flow(
> +    struct ds *match, struct eth_addr ea, struct ipv6_netaddr *ipv6_addrs,
> +    int n_ipv6_addrs)
> +{
> +    ds_put_format(match, " && ip6 && nd && ((nd.sll == "ETH_ADDR_FMT" || "
> +                  "nd.sll == "ETH_ADDR_FMT") || ((nd.tll == "ETH_ADDR_FMT" || "
> +                  "nd.tll == "ETH_ADDR_FMT")", ETH_ADDR_ARGS(eth_addr_zero),
> +                  ETH_ADDR_ARGS(ea), ETH_ADDR_ARGS(eth_addr_zero),
> +                  ETH_ADDR_ARGS(ea));
> +    if (!n_ipv6_addrs) {
> +        ds_put_cstr(match, "))");
> +        return;
> +    }
> +
> +    char ip6_str[INET6_ADDRSTRLEN + 1];
> +    struct in6_addr lla;
> +    in6_generate_lla(ea, &lla);
> +    memset(ip6_str, 0, sizeof(ip6_str));
> +    ipv6_string_mapped(ip6_str, &lla);
> +    ds_put_format(match, " && (nd.target == %s", ip6_str);
> +
> +    for (size_t i = 0; i < n_ipv6_addrs; i++) {
> +        /* When the netmask is applied, if the host portion is
> +         * non-zero, the host can only use the specified
> +         * address in the nd.target.  If zero, the host is allowed
> +         * to use any address in the subnet.
> +         */
> +        if (ipv6_addrs[i].plen == 128
> +            || !ipv6_addr_is_host_zero(&ipv6_addrs[i].addr,
> +                                       &ipv6_addrs[i].mask)) {
> +            ds_put_format(match, " || nd.target == %s", ipv6_addrs[i].addr_s);
> +        } else {
> +            ds_put_format(match, " || nd.target == %s/%d",
> +                          ipv6_addrs[i].network_s, ipv6_addrs[i].plen);
> +        }
> +    }
> +
> +    ds_put_format(match, ")))");
> +}
> +
> +/**
> + * Build port security constraints on IPv4 and IPv6 src and dst fields
> + * and add logical flows to S_SWITCH_(IN/OUT)_PORT_SEC_IP stage.
> + *
> + * For each port security of the logical port, the following
> + * logical flows are added:
> + *   - If the port security has IPv4 addresses,
> + *     - Priority 90 flow to allow IPv4 packets for known IPv4 addresses
> + *
> + *   - If the port security has IPv6 addresses,
> + *     - Priority 90 flow to allow IPv6 packets for known IPv6 addresses
> + *
> + *   - If the port security has IPv4 addresses or IPv6 addresses or both
> + *     - Priority 80 flow to drop all IPv4 and IPv6 traffic
> + */
> +static void
> +build_port_security_ip(enum ovn_pipeline pipeline, struct local_lport *op,
> +                       struct hmap *lflows, uint32_t *lflow_uuid_idx)
> +{
> +    char *port_direction;
> +    enum ovn_stage stage;
> +    if (pipeline == P_IN) {
> +        port_direction = "inport";
> +        stage = S_SWITCH_IN_PORT_SEC_IP;
> +    } else {
> +        port_direction = "outport";
> +        stage = S_SWITCH_OUT_PORT_SEC_IP;
> +    }
> +
> +    for (size_t i = 0; i < op->lsp.n_ps_addrs; i++) {
> +        struct lport_addresses *ps = &op->lsp.ps_addrs[i];
> +
> +        if (!(ps->n_ipv4_addrs || ps->n_ipv6_addrs)) {
> +            continue;
> +        }
> +
> +        if (ps->n_ipv4_addrs) {
> +            struct ds match = DS_EMPTY_INITIALIZER;
> +            if (pipeline == P_IN) {
> +                /* Permit use of the unspecified address for DHCP discovery */
> +                struct ds dhcp_match = DS_EMPTY_INITIALIZER;
> +                ds_put_format(&dhcp_match, "inport == %s"
> +                              " && eth.src == %s"
> +                              " && ip4.src == 0.0.0.0"
> +                              " && ip4.dst == 255.255.255.255"
> +                              " && udp.src == 68 && udp.dst == 67",
> +                              op->json_key, ps->ea_s);
> +                ovn_ctrl_lflow_add_uuid(lflows, stage, 90,
> +                                        ds_cstr(&dhcp_match), "next;",
> +                                        &op->pb->header_.uuid, lflow_uuid_idx);
> +                ds_destroy(&dhcp_match);
> +                ds_put_format(&match, "inport == %s && eth.src == %s"
> +                              " && ip4.src == {", op->json_key,
> +                              ps->ea_s);
> +            } else {
> +                ds_put_format(&match, "outport == %s && eth.dst == %s"
> +                              " && ip4.dst == {255.255.255.255, 224.0.0.0/4, ",
> +                              op->json_key, ps->ea_s);
> +            }
> +
> +            for (int j = 0; j < ps->n_ipv4_addrs; j++) {
> +                ovs_be32 mask = ps->ipv4_addrs[j].mask;
> +                /* When the netmask is applied, if the host portion is
> +                 * non-zero, the host can only use the specified
> +                 * address.  If zero, the host is allowed to use any
> +                 * address in the subnet.
> +                 */
> +                if (ps->ipv4_addrs[j].plen == 32
> +                    || ps->ipv4_addrs[j].addr & ~mask) {
> +                    ds_put_format(&match, "%s", ps->ipv4_addrs[j].addr_s);
> +                    if (pipeline == P_OUT && ps->ipv4_addrs[j].plen != 32) {
> +                        /* Host is also allowed to receive packets to the
> +                         * broadcast address in the specified subnet. */
> +                        ds_put_format(&match, ", %s",
> +                                      ps->ipv4_addrs[j].bcast_s);
> +                    }
> +                } else {
> +                    /* host portion is zero */
> +                    ds_put_format(&match, "%s/%d", ps->ipv4_addrs[j].network_s,
> +                                  ps->ipv4_addrs[j].plen);
> +                }
> +                ds_put_cstr(&match, ", ");
> +            }
> +
> +            /* Replace ", " by "}". */
> +            ds_chomp(&match, ' ');
> +            ds_chomp(&match, ',');
> +            ds_put_cstr(&match, "}");
> +            ovn_ctrl_lflow_add_uuid(lflows, stage, 90, ds_cstr(&match),
> +                                    "next;", &op->pb->header_.uuid,
> +                                    lflow_uuid_idx);
> +            ds_destroy(&match);
> +        }
> +
> +        if (ps->n_ipv6_addrs) {
> +            struct ds match = DS_EMPTY_INITIALIZER;
> +            if (pipeline == P_IN) {
> +                /* Permit use of unspecified address for duplicate address
> +                 * detection */
> +                struct ds dad_match = DS_EMPTY_INITIALIZER;
> +                ds_put_format(&dad_match, "inport == %s"
> +                              " && eth.src == %s"
> +                              " && ip6.src == ::"
> +                              " && ip6.dst == ff02::/16"
> +                              " && icmp6.type == {131, 135, 143}",
> +                              op->json_key,
> +                              ps->ea_s);
> +                ovn_ctrl_lflow_add_uuid(lflows, stage, 90, ds_cstr(&dad_match),
> +                                   "next;", &op->pb->header_.uuid,
> +                                   lflow_uuid_idx);
> +                ds_destroy(&dad_match);
> +            }
> +            ds_put_format(&match, "%s == %s && %s == %s",
> +                          port_direction, op->json_key,
> +                          pipeline == P_IN ? "eth.src" : "eth.dst", ps->ea_s);
> +            build_port_security_ipv6_flow(pipeline, &match, ps->ea,
> +                                          ps->ipv6_addrs, ps->n_ipv6_addrs);
> +            ovn_ctrl_lflow_add_uuid(lflows, stage, 90, ds_cstr(&match),
> +                                    "next;", &op->pb->header_.uuid,
> +                                    lflow_uuid_idx);
> +            ds_destroy(&match);
> +        }
> +
> +        char *match = xasprintf("%s == %s && %s == %s && ip",
> +                                port_direction, op->json_key,
> +                                pipeline == P_IN ? "eth.src" : "eth.dst",
> +                                ps->ea_s);
> +        ovn_ctrl_lflow_add_uuid(lflows, stage, 80, match, "drop;",
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +        free(match);
> +    }
> +}
> +
> +/**
> + * Build port security constraints on ARP and IPv6 ND fields
> + * and add logical flows to S_SWITCH_IN_PORT_SEC_ND stage.
> + *
> + * For each port security of the logical port, the following
> + * logical flows are added:
> + *   - If the port security has no IP (both IPv4 and IPv6) or
> + *     if it has IPv4 address(es)
> + *      - Priority 90 flow to allow ARP packets for known MAC addresses
> + *        in the eth.src and arp.spa fields. If the port security
> + *        has IPv4 addresses, allow known IPv4 addresses in the arp.tpa field.
> + *
> + *   - If the port security has no IP (both IPv4 and IPv6) or
> + *     if it has IPv6 address(es)
> + *     - Priority 90 flow to allow IPv6 ND packets for known MAC addresses
> + *       in the eth.src and nd.sll/nd.tll fields. If the port security
> + *       has IPv6 addresses, allow known IPv6 addresses in the nd.target field
> + *       for IPv6 Neighbor Advertisement packet.
> + *
> + *   - Priority 80 flow to drop ARP and IPv6 ND packets.
> + */
> +static void
> +build_port_security_nd(struct local_lport *op, struct hmap *lflows,
> +                       uint32_t *lflow_uuid_idx)
> +{
> +    struct ds match = DS_EMPTY_INITIALIZER;
> +
> +    for (size_t i = 0; i < op->lsp.n_ps_addrs; i++) {
> +        struct lport_addresses *ps = &op->lsp.ps_addrs[i];
> +
> +        bool no_ip = !(ps->n_ipv4_addrs || ps->n_ipv6_addrs);
> +
> +        ds_clear(&match);
> +        if (ps->n_ipv4_addrs || no_ip) {
> +            ds_put_format(&match,
> +                          "inport == %s && eth.src == %s && arp.sha == %s",
> +                          op->json_key, ps->ea_s, ps->ea_s);
> +
> +            if (ps->n_ipv4_addrs) {
> +                ds_put_cstr(&match, " && arp.spa == {");
> +                for (size_t j = 0; j < ps->n_ipv4_addrs; j++) {
> +                    /* When the netmask is applied, if the host portion is
> +                     * non-zero, the host can only use the specified
> +                     * address in the arp.spa.  If zero, the host is allowed
> +                     * to use any address in the subnet. */
> +                    if (ps->ipv4_addrs[j].plen == 32
> +                        || ps->ipv4_addrs[j].addr & ~ps->ipv4_addrs[j].mask) {
> +                        ds_put_cstr(&match, ps->ipv4_addrs[j].addr_s);
> +                    } else {
> +                        ds_put_format(&match, "%s/%d",
> +                                      ps->ipv4_addrs[j].network_s,
> +                                      ps->ipv4_addrs[j].plen);
> +                    }
> +                    ds_put_cstr(&match, ", ");
> +                }
> +                ds_chomp(&match, ' ');
> +                ds_chomp(&match, ',');
> +                ds_put_cstr(&match, "}");
> +            }
> +            ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_PORT_SEC_ND,
> +                                    90, ds_cstr(&match), "next;",
> +                                    &op->pb->header_.uuid, lflow_uuid_idx);
> +        }
> +
> +        if (ps->n_ipv6_addrs || no_ip) {
> +            ds_clear(&match);
> +            ds_put_format(&match, "inport == %s && eth.src == %s",
> +                          op->json_key, ps->ea_s);
> +            build_port_security_ipv6_nd_flow(&match, ps->ea, ps->ipv6_addrs,
> +                                             ps->n_ipv6_addrs);
> +            ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_PORT_SEC_ND, 90,
> +                                    ds_cstr(&match), "next;",
> +                                    &op->pb->header_.uuid, lflow_uuid_idx);
> +        }
> +    }
> +
> +    ds_clear(&match);
> +    ds_put_format(&match, "inport == %s && (arp || nd)", op->json_key);
> +    ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_PORT_SEC_ND, 80,
> +                            ds_cstr(&match), "drop;", &op->pb->header_.uuid,
> +                            lflow_uuid_idx);
> +    ds_destroy(&match);
> +}
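
Worked example (not part of the patch; port name, MAC, and IP are made
up): for a port "lsp1" with the port-security entry
"00:00:00:00:00:01 10.0.0.2", the function above emits roughly

    table=S_SWITCH_IN_PORT_SEC_ND, priority=90,
      match=(inport == "lsp1" && eth.src == 00:00:00:00:00:01 &&
             arp.sha == 00:00:00:00:00:01 && arp.spa == {10.0.0.2}),
      action=(next;)
    table=S_SWITCH_IN_PORT_SEC_ND, priority=80,
      match=(inport == "lsp1" && (arp || nd)),
      action=(drop;)

plus the corresponding IPv6 ND flow from
build_port_security_ipv6_nd_flow().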
> +
> +/* Logical switch ingress table 0: Ingress port security - L2
> + *  (priority 50).
> + *  Ingress table 1: Ingress port security - IP (priority 90 and 80)
> + *  Ingress table 2: Ingress port security - ND (priority 90 and 80)
> + */
> +static void
> +build_lswitch_input_port_sec_op(struct hmap *lflows, struct local_lport *op,
> +                                uint32_t *lflow_uuid_idx)
> +{
> +    if (op->type == LP_EXTERNAL) {
> +        return;
> +    }
> +
> +    if (!lsp_is_enabled(op->pb)) {
> +        /* Drop packets from disabled logical ports (since logical flow
> +         * tables are default-drop). */
> +        return;
> +    }
> +
> +    struct ds match = DS_EMPTY_INITIALIZER;
> +    struct ds actions = DS_EMPTY_INITIALIZER;
> +
> +    ds_put_format(&match, "inport == %s", op->json_key);
> +    build_port_security_l2("eth.src", op->lsp.ps_addrs, op->lsp.n_ps_addrs,
> +                            &match);
> +
> +    const char *queue_id = smap_get(&op->pb->options, "qdisc_queue_id");
> +    if (queue_id) {
> +        ds_put_format(&actions, "set_queue(%s); ", queue_id);
> +    }
> +    ds_put_cstr(&actions, "next;");
> +    ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_PORT_SEC_L2, 50,
> +                            ds_cstr(&match), ds_cstr(&actions),
> +                            &op->pb->header_.uuid, lflow_uuid_idx);
> +
> +    ds_destroy(&match);
> +    ds_destroy(&actions);
> +
> +    if (op->lsp.n_ps_addrs) {
> +        build_port_security_ip(P_IN, op, lflows, lflow_uuid_idx);
> +        build_port_security_nd(op, lflows, lflow_uuid_idx);
> +    }
> +}
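
Worked example (hypothetical names and addresses; assumes
build_port_security_l2() keeps its ovn-northd form, which is not shown
in this hunk): for an enabled port "lsp1" with port-security MAC
00:00:00:00:00:01 and no "qdisc_queue_id" option, this produces

    table=S_SWITCH_IN_PORT_SEC_L2, priority=50,
      match=(inport == "lsp1" && eth.src == {00:00:00:00:00:01}),
      action=(next;)

With "qdisc_queue_id" set, the action would instead be
"set_queue(<id>); next;".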
> +
> +/* Egress table 8: Egress port security - IP (priorities 90 and 80)
> + * if port security enabled.
> + *
> + * Egress table 9: Egress port security - L2 (priorities 50 and 150).
> + *
> + * Priority 50 rules implement port security for enabled logical port.
> + *
> + * Priority 150 rules drop packets to disabled logical ports, so that
> + * they don't even receive multicast or broadcast packets.
> + */
> +static void
> +build_lswitch_output_port_sec_op(struct hmap *lflows, struct local_lport *op,
> +                                 uint32_t *lflow_uuid_idx)
> +{
> +    if (op->type == LP_EXTERNAL) {
> +        return;
> +    }
> +
> +    struct ds match = DS_EMPTY_INITIALIZER;
> +
> +    ds_put_format(&match, "outport == %s", op->json_key);
> +    if (lsp_is_enabled(op->pb)) {
> +        struct ds actions = DS_EMPTY_INITIALIZER;
> +        build_port_security_l2("eth.dst", op->lsp.ps_addrs, op->lsp.n_ps_addrs,
> +                                &match);
> +
> +        if (op->type == LP_LOCALNET) {
> +            const char *queue_id = smap_get(&op->pb->options,
> +                                            "qdisc_queue_id");
> +            if (queue_id) {
> +                ds_put_format(&actions, "set_queue(%s); ", queue_id);
> +            }
> +        }
> +        ds_put_cstr(&actions, "output;");
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_OUT_PORT_SEC_L2,
> +                                50, ds_cstr(&match), ds_cstr(&actions),
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +        ds_destroy(&actions);
> +    } else {
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_OUT_PORT_SEC_L2,
> +                                150, ds_cstr(&match), "drop;",
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +    }
> +
> +    ds_destroy(&match);
> +
> +    if (op->lsp.n_ps_addrs) {
> +        build_port_security_ip(P_OUT, op, lflows, lflow_uuid_idx);
> +    }
> +}
> +
> +static void
> +build_lswitch_learn_fdb_op(struct hmap *lflows, struct local_lport *op,
> +                           uint32_t *lflow_uuid_idx, struct ds *match,
> +                           struct ds *actions)
> +{
> +    if (!op->lsp.n_ps_addrs && op->type == LP_VIF &&
> +            op->lsp.has_unknown) {
> +        ds_clear(match);
> +        ds_clear(actions);
> +        ds_put_format(match, "inport == %s", op->json_key);
> +        ds_put_format(actions, REGBIT_LKUP_FDB
> +                      " = lookup_fdb(inport, eth.src); next;");
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_LOOKUP_FDB, 100,
> +                                ds_cstr(match), ds_cstr(actions),
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +
> +        ds_put_cstr(match, " && "REGBIT_LKUP_FDB" == 0");
> +        ds_clear(actions);
> +        ds_put_cstr(actions, "put_fdb(inport, eth.src); next;");
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_PUT_FDB, 100,
> +                                ds_cstr(match), ds_cstr(actions),
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +    }
> +}
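
Worked example (hypothetical port name; REGBIT_LKUP_FDB is shown
symbolically rather than expanded to its register bit): for a VIF
"lsp1" that has no port security and has an "unknown" address, the
function above adds

    table=S_SWITCH_IN_LOOKUP_FDB, priority=100,
      match=(inport == "lsp1"),
      action=(REGBIT_LKUP_FDB = lookup_fdb(inport, eth.src); next;)
    table=S_SWITCH_IN_PUT_FDB, priority=100,
      match=(inport == "lsp1" && REGBIT_LKUP_FDB == 0),
      action=(put_fdb(inport, eth.src); next;)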
> +
> +static void
> +build_lswitch_skip_conntrack_flows_op(struct hmap *lflows,
> +                                      struct local_lport *op,
> +                                      uint32_t *lflow_uuid_idx,
> +                                      struct ds *match)
> +{
> +    if (!op->peer) {
> +        return;
> +    }
> +
> +    bool has_stateful = (has_dp_stateful_acls(op->ldp->datapath) ||
> +                         has_dp_lb_vip(op->ldp->datapath));
> +    if (has_stateful) {
> +        skip_lport_from_conntrack(lflows, op, lflow_uuid_idx,
> +                                  S_SWITCH_IN_PRE_LB, S_SWITCH_OUT_PRE_LB,
> +                                  110, match);
> +        skip_lport_from_conntrack(lflows, op, lflow_uuid_idx,
> +                                  S_SWITCH_IN_PRE_ACL, S_SWITCH_OUT_PRE_ACL,
> +                                  110, match);
> +    }
> +}
> +
> +/* Ingress table 13: ARP/ND responder, skip requests coming from localnet
> + * and vtep ports. (priority 100); see ovn-northd.8.xml for the
> + * rationale. */
> +
> +static void
> +build_lswitch_arp_nd_responder_skip_local(struct hmap *lflows,
> +                                          struct local_lport *op,
> +                                          uint32_t *lflow_uuid_idx,
> +                                          struct ds *match)
> +{
> +    if (op->type == LP_LOCALNET || op->type == LP_VTEP) {
> +        ds_clear(match);
> +        ds_put_format(match, "inport == %s", op->json_key);
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_ARP_ND_RSP, 100,
> +                                ds_cstr(match), "next;",
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +    }
> +}
> +
> +/* Ingress table 13: ARP/ND responder, reply for known IPs.
> + * (priority 50). */
> +static void
> +build_lswitch_arp_nd_responder_known_ips(struct hmap *lflows,
> +                                         struct local_lport *op,
> +                                         uint32_t *lflow_uuid_idx,
> +                                         struct ds *match,
> +                                         struct ds *actions)
> +{
> +    if (op->type == LP_VIRTUAL) {
> +        /* Handle
> +         *  - GARPs for the virtual IP which belongs to a logical port
> +         *    of type 'virtual' and bind that port.
> +         *
> +         *  - ARP replies from the virtual IP which belongs to a logical
> +         *    port of type 'virtual' and bind that port.
> +         */
> +        ovs_be32 ip;
> +        const char *virtual_ip = smap_get(&op->pb->options,
> +                                          "virtual-ip");
> +        const char *virtual_parents = smap_get(&op->pb->options,
> +                                               "virtual-parents");
> +        if (!virtual_ip || !virtual_parents ||
> +            !ip_parse(virtual_ip, &ip)) {
> +            return;
> +        }
> +
> +        char *tokstr = xstrdup(virtual_parents);
> +        char *save_ptr = NULL;
> +        char *vparent;
> +        for (vparent = strtok_r(tokstr, ",", &save_ptr); vparent != NULL;
> +                vparent = strtok_r(NULL, ",", &save_ptr)) {
> +            ds_clear(match);
> +            ds_put_format(match, "inport == \"%s\" && "
> +                          "((arp.op == 1 && arp.spa == %s && "
> +                          "arp.tpa == %s) || (arp.op == 2 && "
> +                          "arp.spa == %s))",
> +                          vparent, virtual_ip, virtual_ip,
> +                          virtual_ip);
> +            ds_clear(actions);
> +            ds_put_format(actions,
> +                "bind_vport(%s, inport); "
> +                "next;",
> +                op->json_key);
> +            ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_ARP_ND_RSP, 100,
> +                                    ds_cstr(match), ds_cstr(actions),
> +                                    &op->pb->header_.uuid, lflow_uuid_idx);
> +        }
> +
> +        free(tokstr);
> +    } else {
> +        /*
> +         * Add ARP/ND reply flows if either the
> +         *  - port is up and it doesn't have 'unknown' address defined or
> +         *  - port type is router or
> +         *  - port type is localport
> +         */
> +        if (op->lsp.check_lport_is_up &&
> +            !lsp_is_up(op->pb) && op->type != LP_PATCH &&
> +            op->type != LP_LOCALPORT) {
> +            return;
> +        }
> +
> +        if (op->type == LP_EXTERNAL || op->lsp.has_unknown) {
> +            return;
> +        }
> +
> +        for (size_t i = 0; i < op->lsp.n_addrs; i++) {
> +            for (size_t j = 0; j < op->lsp.addrs[i].n_ipv4_addrs; j++) {
> +                ds_clear(match);
> +                ds_put_format(match, "arp.tpa == %s && arp.op == 1",
> +                              op->lsp.addrs[i].ipv4_addrs[j].addr_s);
> +                ds_clear(actions);
> +                ds_put_format(actions,
> +                    "eth.dst = eth.src; "
> +                    "eth.src = %s; "
> +                    "arp.op = 2; /* ARP reply */ "
> +                    "arp.tha = arp.sha; "
> +                    "arp.sha = %s; "
> +                    "arp.tpa = arp.spa; "
> +                    "arp.spa = %s; "
> +                    "outport = inport; "
> +                    "flags.loopback = 1; "
> +                    "output;",
> +                    op->lsp.addrs[i].ea_s, op->lsp.addrs[i].ea_s,
> +                    op->lsp.addrs[i].ipv4_addrs[j].addr_s);
> +                ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_ARP_ND_RSP, 50,
> +                                        ds_cstr(match), ds_cstr(actions),
> +                                        &op->pb->header_.uuid, lflow_uuid_idx);
> +
> +                /* Do not reply to an ARP request from the port that owns
> +                 * the address (otherwise a DHCP client that ARPs to check
> +                 * for a duplicate address will fail).  Instead, forward
> +                 * it the usual way.
> +                 *
> +                 * (Another alternative would be to simply drop the packet.
> +                 * If everything is working as it is configured, then this
> +                 * would produce equivalent results, since no one should
> +                 * reply to the request.  But ARPing for one's own IP
> +                 * address is intended to detect situations where the
> +                 * network is not working as configured, so dropping the
> +                 * request would frustrate that intent.) */
> +                ds_put_format(match, " && inport == %s", op->json_key);
> +                ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_ARP_ND_RSP, 100,
> +                                        ds_cstr(match), "next;",
> +                                        &op->pb->header_.uuid, lflow_uuid_idx);
> +            }
> +
> +            /* For ND solicitations, we need to listen for both the
> +             * unicast IPv6 address and its solicited-node multicast
> +             * address, but always respond with the unicast IPv6 address. */
> +            for (size_t j = 0; j < op->lsp.addrs[i].n_ipv6_addrs; j++) {
> +                ds_clear(match);
> +                ds_put_format(
> +                    match,
> +                    "nd_ns && ip6.dst == {%s, %s} && nd.target == %s",
> +                    op->lsp.addrs[i].ipv6_addrs[j].addr_s,
> +                    op->lsp.addrs[i].ipv6_addrs[j].sn_addr_s,
> +                    op->lsp.addrs[i].ipv6_addrs[j].addr_s);
> +
> +                ds_clear(actions);
> +                ds_put_format(actions,
> +                        "%s { "
> +                        "eth.src = %s; "
> +                        "ip6.src = %s; "
> +                        "nd.target = %s; "
> +                        "nd.tll = %s; "
> +                        "outport = inport; "
> +                        "flags.loopback = 1; "
> +                        "output; "
> +                        "};",
> +                        op->type == LP_PATCH ? "nd_na_router" : "nd_na",
> +                        op->lsp.addrs[i].ea_s,
> +                        op->lsp.addrs[i].ipv6_addrs[j].addr_s,
> +                        op->lsp.addrs[i].ipv6_addrs[j].addr_s,
> +                        op->lsp.addrs[i].ea_s);
> +                ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_ARP_ND_RSP, 50,
> +                                        ds_cstr(match), ds_cstr(actions),
> +                                        &op->pb->header_.uuid, lflow_uuid_idx);
> +
> +                /* Do not reply to a solicitation from the port that owns
> +                 * the address (otherwise DAD will fail). */
> +                ds_put_format(match, " && inport == %s", op->json_key);
> +                ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_ARP_ND_RSP, 100,
> +                                        ds_cstr(match), "next;",
> +                                        &op->pb->header_.uuid, lflow_uuid_idx);
> +            }
> +        }
> +    }
> +}
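
Worked example for the non-virtual branch (hypothetical name, MAC, and
IP): a port "lsp1" with address "00:00:00:00:00:01 10.0.0.2" gets an
ARP responder flow plus a higher-priority self-ARP exception:

    table=S_SWITCH_IN_ARP_ND_RSP, priority=50,
      match=(arp.tpa == 10.0.0.2 && arp.op == 1),
      action=(eth.dst = eth.src; eth.src = 00:00:00:00:00:01;
              arp.op = 2; /* ARP reply */ arp.tha = arp.sha;
              arp.sha = 00:00:00:00:00:01; arp.tpa = arp.spa;
              arp.spa = 10.0.0.2; outport = inport;
              flags.loopback = 1; output;)
    table=S_SWITCH_IN_ARP_ND_RSP, priority=100,
      match=(arp.tpa == 10.0.0.2 && arp.op == 1 && inport == "lsp1"),
      action=(next;)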
> +
> +/* Ingress table 19: Destination lookup, unicast handling (priority 50), */
> +static void
> +build_lswitch_ip_unicast_lookup(struct hmap *lflows, struct local_lport *op,
> +                                uint32_t *lflow_uuid_idx,
> +                                struct ds *match, struct ds *actions)
> +{
> +    if (op->type == LP_EXTERNAL) {
> +        return;
> +    }
> +
> +    /* For ports connected to logical routers add flows to bypass the
> +     * broadcast flooding of ARP/ND requests in table 19. We direct the
> +     * requests only to the router port that owns the IP address.
> +     */
> +#if 0
> +    if (lsp_is_router(op->nbsp)) {
> +        build_lswitch_rport_arp_req_flows(op->peer, op->od, op, lflows,
> +                                            &op->nbsp->header_);
> +    }
> +#endif
> +
> +    for (size_t i = 0; i < op->lsp.n_addrs; i++) {
> +        ds_clear(match);
> +        ds_put_format(match, "eth.dst == %s", op->lsp.addrs[i].ea_s);
> +        ds_clear(actions);
> +        ds_put_format(actions, "outport = %s; output;", op->json_key);
> +        ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_L2_LKUP, 50,
> +                                ds_cstr(match), ds_cstr(actions),
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +    }
> +}
> +
> +static void build_arp_resolve_flows_for_lsp_in_router(
> +    struct hmap *lflows, struct local_lport *op, uint32_t *lflow_uuid_idx,
> +    struct ds *match, struct ds *actions)
> +{
> +    if (op->type == LP_VIRTUAL) {
> +        const char *vip_s = smap_get(&op->pb->options,
> +                                    "virtual-ip");
> +        const char *virtual_parents = smap_get(&op->pb->options,
> +                                               "virtual-parents");
> +        ovs_be32 vip;
> +        if (!vip_s || !virtual_parents || !ip_parse(vip_s, &vip)) {
> +            return;
> +        }
> +
> +        if (!op->pb->virtual_parent || !op->pb->virtual_parent[0] ||
> +            !op->pb->chassis) {
> +            /* The virtual port is not claimed yet. */
> +            for (size_t i = 0; i < op->ldp->n_peer_ports; i++) {
> +                struct local_lport *peer = op->ldp->peer_ports[i].remote;
> +
> +                if (!is_ip4_in_router_network(peer, vip)) {
> +                    continue;
> +                }
> +
> +                ds_clear(match);
> +                ds_put_format(match, "outport == %s && "
> +                              REG_NEXT_HOP_IPV4 " == %s",
> +                              peer->json_key, vip_s);
> +
> +                const char *arp_actions = "eth.dst = 00:00:00:00:00:00; next;";
> +                ovn_ctrl_lflow_add_dp_key(
> +                    lflows, peer->ldp->datapath->tunnel_key,
> +                    S_ROUTER_IN_ARP_RESOLVE, 100,
> +                    ds_cstr(match), arp_actions,
> +                    &op->pb->header_.uuid, lflow_uuid_idx);
> +                break;
> +            }
> +        } else {
> +            struct local_lport *vp =
> +                local_datapath_get_lport(op->ldp, op->pb->virtual_parent);
> +            ovs_assert(vp);
> +            for (size_t i = 0; i < vp->lsp.n_addrs; i++) {
> +                bool found_vip_network = false;
> +                const char *ea_s = vp->lsp.addrs[i].ea_s;
> +                for (size_t j = 0; j < vp->ldp->n_peer_ports; j++) {
> +                    struct local_lport *peer = vp->ldp->peer_ports[j].remote;
> +
> +                    if (!is_ip4_in_router_network(peer, vip)) {
> +                        continue;
> +                    }
> +
> +                    ds_clear(match);
> +                    ds_put_format(match, "outport == %s && "
> +                                  REG_NEXT_HOP_IPV4 " == %s",
> +                                  peer->json_key, vip_s);
> +
> +                    ds_clear(actions);
> +                    ds_put_format(actions, "eth.dst = %s; next;", ea_s);
> +                    ovn_ctrl_lflow_add_dp_key(
> +                        lflows, peer->ldp->datapath->tunnel_key,
> +                        S_ROUTER_IN_ARP_RESOLVE, 100,
> +                        ds_cstr(match), ds_cstr(actions),
> +                        &op->pb->header_.uuid, lflow_uuid_idx);
> +                    found_vip_network = true;
> +                    break;
> +                }
> +
> +                if (found_vip_network) {
> +                    break;
> +                }
> +            }
> +        }
> +    } else if (op->peer) {
> +        /* This is a logical switch port that connects to a router. */
> +
> +        /* The peer of this switch port is the router port for which
> +         * we need to add logical flows such that it can resolve
> +         * ARP entries for all the other router ports connected to
> +         * the switch in question. */
> +        if (smap_get_bool(&op->peer->ldp->datapath->options,
> +                          "dynamic_neigh_routers", false)) {
> +            return;
> +        }
> +
> +        for (size_t i = 0; i < op->ldp->n_peer_ports; i++) {
> +            struct local_lport *router_port = op->ldp->peer_ports[i].remote;
> +            /* Skip the router port under consideration. */
> +            if (router_port == op->peer) {
> +               continue;
> +            }
> +
> +            if (router_port->lrp.networks.n_ipv4_addrs) {
> +                ds_clear(match);
> +                ds_put_format(match, "outport == %s && "
> +                              REG_NEXT_HOP_IPV4 " == ",
> +                              op->peer->json_key);
> +                op_put_v4_networks(match, router_port, false);
> +
> +                ds_clear(actions);
> +                ds_put_format(actions, "eth.dst = %s; next;",
> +                              router_port->lrp.networks.ea_s);
> +                ovn_ctrl_lflow_add_dp_key(
> +                    lflows, op->peer->ldp->datapath->tunnel_key,
> +                    S_ROUTER_IN_ARP_RESOLVE, 100,
> +                    ds_cstr(match), ds_cstr(actions),
> +                    &op->pb->header_.uuid, lflow_uuid_idx);
> +
> +            }
> +
> +            if (router_port->lrp.networks.n_ipv6_addrs) {
> +                ds_clear(match);
> +                ds_put_format(match, "outport == %s && "
> +                              REG_NEXT_HOP_IPV6 " == ",
> +                              op->peer->json_key);
> +                op_put_v6_networks(match, router_port);
> +
> +                ds_clear(actions);
> +                ds_put_format(actions, "eth.dst = %s; next;",
> +                              router_port->lrp.networks.ea_s);
> +                ovn_ctrl_lflow_add_dp_key(
> +                    lflows, op->peer->ldp->datapath->tunnel_key,
> +                    S_ROUTER_IN_ARP_RESOLVE, 100,
> +                    ds_cstr(match), ds_cstr(actions),
> +                    &op->pb->header_.uuid, lflow_uuid_idx);
> +            }
> +        }
> +    } else {
> +        for (size_t i = 0; i < op->lsp.n_addrs; i++) {
> +            const char *ea_s = op->lsp.addrs[i].ea_s;
> +            for (size_t j = 0; j < op->lsp.addrs[i].n_ipv4_addrs; j++) {
> +                ovs_be32 ip = op->lsp.addrs[i].ipv4_addrs[j].addr;
> +                for (size_t k = 0; k < op->ldp->n_peer_ports; k++) {
> +                    /* Get the Logical_Router_Port that the Logical_Switch_Port
> +                    * is connected to, as 'peer'. */
> +                    struct local_lport *peer = op->ldp->peer_ports[k].remote;
> +
> +                    if (!is_ip4_in_router_network(peer, ip)) {
> +                        continue;
> +                    }
> +
> +                    ds_clear(match);
> +                    ds_put_format(match, "outport == %s && "
> +                                REG_NEXT_HOP_IPV4 " == %s", peer->json_key,
> +                                op->lsp.addrs[i].ipv4_addrs[j].addr_s);
> +
> +                    ds_clear(actions);
> +                    ds_put_format(actions, "eth.dst = %s; next;", ea_s);
> +                    ovn_ctrl_lflow_add_dp_key(
> +                        lflows, peer->ldp->datapath->tunnel_key,
> +                        S_ROUTER_IN_ARP_RESOLVE, 100,
> +                        ds_cstr(match), ds_cstr(actions),
> +                        &op->pb->header_.uuid, lflow_uuid_idx);
> +                }
> +            }
> +
> +            for (size_t j = 0; j < op->lsp.addrs[i].n_ipv6_addrs; j++) {
> +                for (size_t k = 0; k < op->ldp->n_peer_ports; k++) {
> +                    /* Get the Logical_Router_Port that the Logical_Switch_Port
> +                    * is connected to, as 'peer'. */
> +                    struct local_lport *peer = op->ldp->peer_ports[k].remote;
> +
> +                    if (!is_ip6_in_router_network(
> +                        peer, op->lsp.addrs[i].ipv6_addrs[j].addr)) {
> +                        continue;
> +                    }
> +
> +                    ds_clear(match);
> +                    ds_put_format(match, "outport == %s && "
> +                                REG_NEXT_HOP_IPV6 " == %s",
> +                                peer->json_key,
> +                                op->lsp.addrs[i].ipv6_addrs[j].addr_s);
> +
> +                    ds_clear(actions);
> +                    ds_put_format(actions, "eth.dst = %s; next;", ea_s);
> +                    ovn_ctrl_lflow_add_dp_key(
> +                        lflows, peer->ldp->datapath->tunnel_key,
> +                        S_ROUTER_IN_ARP_RESOLVE, 100,
> +                        ds_cstr(match), ds_cstr(actions),
> +                        &op->pb->header_.uuid, lflow_uuid_idx);
> +                }
> +            }
> +        }
> +    }
> +}
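
Worked example for the final (plain VIF) branch, with hypothetical
names and REG_NEXT_HOP_IPV4 left symbolic: a switch port with address
"00:00:00:00:00:01 10.0.0.2" whose switch is attached to router port
"lrp0" yields, on the router datapath (note the use of
ovn_ctrl_lflow_add_dp_key() with the peer datapath's tunnel key):

    table=S_ROUTER_IN_ARP_RESOLVE, priority=100,
      match=(outport == "lrp0" && REG_NEXT_HOP_IPV4 == 10.0.0.2),
      action=(eth.dst = 00:00:00:00:00:01; next;)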
> +
> +static void
> +skip_lport_from_conntrack(struct hmap *lflows, struct local_lport *op,
> +                          uint32_t *lflow_uuid_idx, enum ovn_stage in_stage,
> +                          enum ovn_stage out_stage, uint16_t priority,
> +                          struct ds *match)
> +{
> +    ds_clear(match);
> +    ds_put_format(match, "ip && inport == %s", op->json_key);
> +    ovn_ctrl_lflow_add_uuid(lflows, in_stage, priority,
> +                            ds_cstr(match), "next;",
> +                            &op->pb->header_.uuid, lflow_uuid_idx);
> +
> +    ds_clear(match);
> +    ds_put_format(match, "ip && outport == %s", op->json_key);
> +    ovn_ctrl_lflow_add_uuid(lflows, out_stage, priority,
> +                            ds_cstr(match), "next;",
> +                            &op->pb->header_.uuid, lflow_uuid_idx);
> +}
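
Worked example (hypothetical port name): when the caller,
build_lswitch_skip_conntrack_flows_op(), passes priority 110 for a
router-attached port "lsp-to-lr0" on a datapath with stateful ACLs or
LB VIPs, this helper adds, per stage pair:

    table=S_SWITCH_IN_PRE_LB,  priority=110,
      match=(ip && inport == "lsp-to-lr0"),  action=(next;)
    table=S_SWITCH_OUT_PRE_LB, priority=110,
      match=(ip && outport == "lsp-to-lr0"), action=(next;)

and the same pair again for S_SWITCH_IN_PRE_ACL/S_SWITCH_OUT_PRE_ACL.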
> +
> +static void build_adm_ctrl_flows_for_lrouter_port(
> +    struct hmap *lflows, struct local_lport *,
> +    uint32_t *lflow_uuid_idx, struct ds *match, struct ds *actions);
> +static void build_neigh_learning_flows_for_lrouter_port(
> +    struct hmap *lflows, struct local_lport *,
> +    uint32_t *lflow_uuid_idx, struct ds *match, struct ds *actions);
> +static void build_ip_routing_flows_for_lrouter_port(
> +    struct hmap *lflows, struct local_lport *op,
> +    uint32_t *lflow_uuid_idx, struct ds *match, struct ds *actions);
> +static void build_ND_RA_flows_for_lrouter_port(
> +    struct hmap *lflows, struct local_lport *op,
> +    uint32_t *lflow_uuid_idx, struct ds *match, struct ds *actions);
> +static void build_dhcpv6_reply_flows_for_lrouter_port(
> +    struct hmap *lflows, struct local_lport *op,
> +    uint32_t *lflow_uuid_idx, struct ds *match);
> +static void build_ipv6_input_flows_for_lrouter_port(
> +    struct hmap *lflows, struct local_lport *op,
> +    uint32_t *lflow_uuid_idx, struct ds *match, struct ds *actions);
> +static void build_lrouter_nd_flow(struct hmap *lflows, struct local_lport *op,
> +                                  const struct uuid *flow_uuid,
> +                                  uint32_t *lflow_uuid_idx, const char *action,
> +                                  const char *ip_address,
> +                                  const char *sn_ip_address,
> +                                  const char *eth_addr,
> +                                  struct ds *extra_match, bool drop,
> +                                  uint16_t priority);
> +static void build_lrouter_bfd_flows(struct hmap *lflows,
> +                                    struct local_lport *op,
> +                                    uint32_t *lflow_uuid_idx);
> +static void build_lrouter_arp_flow(
> +    struct hmap *lflows, struct local_lport *,
> +    const struct uuid *lflow_uuid, uint32_t *lflow_uuid_idx,
> +    const char *ip_address, const char *eth_addr,
> +    struct ds *extra_match, bool drop, uint16_t priority);
> +static void build_lrouter_ipv4_ip_input(
> +    struct hmap *lflows, struct local_lport *op,
> +    uint32_t *lflow_uuid_idx, struct ds *match, struct ds *actions);
> +static void build_lrouter_force_snat_flows_op(
> +    struct hmap *lflows, struct local_lport *op,
> +    uint32_t *lflow_uuid_idx,
> +    struct ds *match, struct ds *actions);
> +static void build_arp_resolve_flows_for_lrouter_port(
> +    struct hmap *lflows, struct local_lport *op,
> +    uint32_t *lflow_uuid_idx, struct ds *match, struct ds *actions);
> +static void build_egress_delivery_flows_for_lrouter_port(
> +    struct hmap *lflows, struct local_lport *op,
> +    uint32_t *lflow_uuid_idx, struct ds *match, struct ds *actions);
> +
> +static void add_route(
> +    struct hmap *lflows, struct local_lport *op,
> +    uint32_t *lflow_uuid_idx, struct ds *match, struct ds *actions,
> +    const char *lrp_addr_s, const char *network_s, int plen,
> +    const char *gateway, bool is_src_route, bool is_discard_route);
> +
> +/* Router port lflows. */
> +static void
> +build_lrouter_port_lflows(struct hmap *lflows, struct local_lport *op)
> +{
> +    uint32_t lflow_uuid_idx = 1;
> +    struct ds match = DS_EMPTY_INITIALIZER;
> +    struct ds actions = DS_EMPTY_INITIALIZER;
> +
> +    build_adm_ctrl_flows_for_lrouter_port(lflows, op,
> +                                          &lflow_uuid_idx, &match, &actions);
> +    build_neigh_learning_flows_for_lrouter_port(lflows, op,
> +                                                &lflow_uuid_idx, &match,
> +                                                &actions);
> +    build_ip_routing_flows_for_lrouter_port(lflows, op,
> +                                            &lflow_uuid_idx, &match, &actions);
> +    build_ND_RA_flows_for_lrouter_port(lflows, op,
> +                                       &lflow_uuid_idx, &match, &actions);
> +    build_dhcpv6_reply_flows_for_lrouter_port(lflows, op,
> +                                              &lflow_uuid_idx, &match);
> +    build_ipv6_input_flows_for_lrouter_port(lflows, op,
> +                                            &lflow_uuid_idx, &match, &actions);
> +    build_lrouter_ipv4_ip_input(lflows, op, &lflow_uuid_idx,
> +                                &match, &actions);
> +    build_lrouter_force_snat_flows_op(lflows, op, &lflow_uuid_idx,
> +                                      &match, &actions);
> +    build_arp_resolve_flows_for_lrouter_port(lflows, op, &lflow_uuid_idx,
> +                                             &match, &actions);
> +    build_egress_delivery_flows_for_lrouter_port(lflows, op,
> +                                                 &lflow_uuid_idx,
> +                                                 &match, &actions);
> +
> +    ds_destroy(&match);
> +    ds_destroy(&actions);
> +}
> +
> +/* Logical router ingress Table 0: L2 Admission Control
> + * This table drops packets that the router shouldn't see at all based
> + * on their Ethernet headers.
> + */
> +static void
> +build_adm_ctrl_flows_for_lrouter_port(struct hmap *lflows,
> +                                      struct local_lport *op,
> +                                      uint32_t *lflow_uuid_idx,
> +                                      struct ds *match, struct ds *actions)
> +{
> +#if 0
> +TODO:
> +    if (!lrport_is_enabled(op->nbrp)) {
> +        /* Drop packets from disabled logical ports (since logical flow
> +            * tables are default-drop). */
> +        return;
> +    }
> +#endif
> +
> +    if (op->type == LP_CHASSISREDIRECT) {
> +        /* No ingress packets should be received on a chassisredirect port. */
> +        return;
> +    }
> +
> +    /* Store the ethernet address of the port receiving the packet.
> +     * This will save us from having to match on inport further down in
> +     * the pipeline.
> +     */
> +    ds_clear(actions);
> +    ds_put_format(actions, REG_INPORT_ETH_ADDR " = %s; next;",
> +                  op->lrp.networks.ea_s);
> +
> +    ds_clear(match);
> +    ds_put_format(match, "eth.mcast && inport == %s", op->json_key);
> +    ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_ADMISSION, 50, ds_cstr(match),
> +                            ds_cstr(actions), &op->pb->header_.uuid,
> +                            lflow_uuid_idx);
> +
> +    ds_clear(match);
> +    ds_put_format(match, "eth.dst == %s && inport == %s",
> +                  op->lrp.networks.ea_s, op->json_key);
> +    if (op->lrp.is_l3dgw_port) {
> +        /* Traffic with eth.dst = l3dgw_port->lrp_networks.ea_s
> +         * should only be received on the gateway chassis. */
> +        ds_put_format(match, " && is_chassis_resident(%s)",
> +                      op->lrp.chassis_redirect_json_key);
> +    }
> +    ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_ADMISSION, 50, ds_cstr(match),
> +                            ds_cstr(actions), &op->pb->header_.uuid,
> +                            lflow_uuid_idx);
> +}
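
Worked example (hypothetical router port "lrp0" with MAC
00:00:00:00:00:ff; REG_INPORT_ETH_ADDR left symbolic): the two
admission flows come out roughly as

    table=S_ROUTER_IN_ADMISSION, priority=50,
      match=(eth.mcast && inport == "lrp0"),
      action=(REG_INPORT_ETH_ADDR = 00:00:00:00:00:ff; next;)
    table=S_ROUTER_IN_ADMISSION, priority=50,
      match=(eth.dst == 00:00:00:00:00:ff && inport == "lrp0"),
      action=(REG_INPORT_ETH_ADDR = 00:00:00:00:00:ff; next;)

with an extra "&& is_chassis_resident(...)" clause on the second
match when the port is a distributed gateway port.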
> +
> +/* Logical router ingress Table 1: Neighbor lookup lflows
> + * for logical router ports. */
> +static void
> +build_neigh_learning_flows_for_lrouter_port(
> +    struct hmap *lflows, struct local_lport *op,
> +    uint32_t *lflow_uuid_idx,
> +    struct ds *match, struct ds *actions)
> +{
> +    if (op->type == LP_CHASSISREDIRECT) {
> +        return;
> +    }
> +
> +    bool learn_from_arp_request = is_learn_from_arp_request(op->pb->datapath);
> +
> +    /* Check if we need to learn mac-binding from ARP requests. */
> +    for (int i = 0; i < op->lrp.networks.n_ipv4_addrs; i++) {
> +        if (!learn_from_arp_request) {
> +            /* ARP request to this address should always get learned,
> +             * so add a priority-110 flow to set
> +             * REGBIT_LOOKUP_NEIGHBOR_IP_RESULT to 1. */
> +            ds_clear(match);
> +            ds_put_format(match,
> +                          "inport == %s && arp.spa == %s/%u && "
> +                          "arp.tpa == %s && arp.op == 1",
> +                          op->json_key,
> +                          op->lrp.networks.ipv4_addrs[i].network_s,
> +                          op->lrp.networks.ipv4_addrs[i].plen,
> +                          op->lrp.networks.ipv4_addrs[i].addr_s);
> +            if (op->lrp.is_l3dgw_port) {
> +                ds_put_format(match, " && is_chassis_resident(%s)",
> +                              op->lrp.chassis_redirect_json_key);
> +            }
> +            const char *actions_s = REGBIT_LOOKUP_NEIGHBOR_RESULT
> +                                    " = lookup_arp(inport, arp.spa, arp.sha); "
> +                                    REGBIT_LOOKUP_NEIGHBOR_IP_RESULT" = 1;"
> +                                    " next;";
> +            ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_LOOKUP_NEIGHBOR, 110,
> +                                    ds_cstr(match), actions_s,
> +                                    &op->pb->header_.uuid, lflow_uuid_idx);
> +        }
> +        ds_clear(match);
> +        ds_put_format(match,
> +                      "inport == %s && arp.spa == %s/%u && arp.op == 1",
> +                      op->json_key,
> +                      op->lrp.networks.ipv4_addrs[i].network_s,
> +                      op->lrp.networks.ipv4_addrs[i].plen);
> +        if (op->lrp.is_l3dgw_port) {
> +            ds_put_format(match, " && is_chassis_resident(%s)",
> +                          op->lrp.chassis_redirect_json_key);
> +        }
> +        ds_clear(actions);
> +        ds_put_format(actions, REGBIT_LOOKUP_NEIGHBOR_RESULT
> +                        " = lookup_arp(inport, arp.spa, arp.sha); %snext;",
> +                        learn_from_arp_request ? "" :
> +                        REGBIT_LOOKUP_NEIGHBOR_IP_RESULT
> +                        " = lookup_arp_ip(inport, arp.spa); ");
> +        ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_LOOKUP_NEIGHBOR, 100,
> +                                ds_cstr(match), ds_cstr(actions),
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +    }
> +}
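
Worked example (hypothetical port and network; register macros left
symbolic): for router port "lrp0" with network 10.0.0.0/24 and
learn_from_arp_request == true, only the priority-100 flow is added:

    table=S_ROUTER_IN_LOOKUP_NEIGHBOR, priority=100,
      match=(inport == "lrp0" && arp.spa == 10.0.0.0/24 && arp.op == 1),
      action=(REGBIT_LOOKUP_NEIGHBOR_RESULT =
                  lookup_arp(inport, arp.spa, arp.sha); next;)

When learn_from_arp_request is false, the priority-110 flow for the
port's own address is added as well, and the priority-100 action also
sets REGBIT_LOOKUP_NEIGHBOR_IP_RESULT via lookup_arp_ip().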
> +
> +/* Logical router ingress table IP_ROUTING : IP Routing.
> + *
> + * A packet that arrives at this table is an IP packet that should be
> + * routed to the address in 'ip[46].dst'.
> + *
> + * For regular routes without ECMP, table IP_ROUTING sets outport to the
> + * correct output port, eth.src to the output port's MAC address, and
> + * REG_NEXT_HOP_IPV4/REG_NEXT_HOP_IPV6 to the next-hop IP address
> + * (leaving 'ip[46].dst', the packet's final destination, unchanged), and
> + * advances to the next table.
> + *
> + * For ECMP routes, i.e. multiple routes with same policy and prefix, table
> + * IP_ROUTING remembers ECMP group id and selects a member id, and advances
> + * to table IP_ROUTING_ECMP, which sets outport, eth.src and
> + * REG_NEXT_HOP_IPV4/REG_NEXT_HOP_IPV6 for the selected ECMP member.
> + */
> +static void
> +build_ip_routing_flows_for_lrouter_port(
> +    struct hmap *lflows, struct local_lport *op,
> +    uint32_t *lflow_uuid_idx, struct ds *match, struct ds *actions)
> +{
> +    if (op->type == LP_CHASSISREDIRECT) {
> +        return;
> +    }
> +
> +    for (int i = 0; i < op->lrp.networks.n_ipv4_addrs; i++) {
> +        add_route(lflows, op, lflow_uuid_idx, match, actions,
> +                  op->lrp.networks.ipv4_addrs[i].addr_s,
> +                  op->lrp.networks.ipv4_addrs[i].network_s,
> +                  op->lrp.networks.ipv4_addrs[i].plen, NULL, false, false);
> +    }
> +
> +    for (int i = 0; i < op->lrp.networks.n_ipv6_addrs; i++) {
> +        add_route(lflows, op, lflow_uuid_idx, match, actions,
> +                  op->lrp.networks.ipv6_addrs[i].addr_s,
> +                  op->lrp.networks.ipv6_addrs[i].network_s,
> +                  op->lrp.networks.ipv6_addrs[i].plen, NULL, false, false);
> +    }
> +}
> +
> +/* Logical router ingress table ND_RA_OPTIONS & ND_RA_RESPONSE: IPv6 Router
> + * Adv (RA) options and response. */
> +static void
> +build_ND_RA_flows_for_lrouter_port(
> +    struct hmap *lflows, struct local_lport *op,
> +    uint32_t *lflow_uuid_idx, struct ds *match, struct ds *actions)
> +{
> +    if (op->type == LP_CHASSISREDIRECT) {
> +        return;
> +    }
> +
> +    if (!op->lrp.networks.n_ipv6_addrs) {
> +        return;
> +    }
> +
> +    const char *address_mode = smap_get(&op->pb->options,
> +                                        "ipv6_ra_address_mode");
> +    if (!address_mode) {
> +        return;
> +    }
> +
> +    if (strcmp(address_mode, "slaac") &&
> +        strcmp(address_mode, "dhcpv6_stateful") &&
> +        strcmp(address_mode, "dhcpv6_stateless")) {
> +        static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
> +        VLOG_WARN_RL(&rl, "Invalid address mode [%s] defined",
> +                     address_mode);
> +        return;
> +    }
> +
> +    ds_clear(match);
> +    ds_put_format(match, "inport == %s && ip6.dst == ff02::2 && nd_rs",
> +                  op->json_key);
> +    ds_clear(actions);
> +
> +    const char *mtu_s = smap_get(&op->pb->options, "ipv6_ra_mtu");
> +
> +    /* As per RFC 2460, 1280 is the minimum IPv6 MTU. */
> +    uint32_t mtu = (mtu_s && atoi(mtu_s) >= 1280) ? atoi(mtu_s) : 0;
> +
> +    ds_put_format(actions, REGBIT_ND_RA_OPTS_RESULT" = put_nd_ra_opts("
> +                  "addr_mode = \"%s\", slla = %s",
> +                  address_mode, op->lrp.networks.ea_s);
> +    if (mtu > 0) {
> +        ds_put_format(actions, ", mtu = %u", mtu);
> +    }
> +
> +    const char *prf = smap_get_def(&op->pb->options, "ipv6_ra_prf", "MEDIUM");
> +    if (strcmp(prf, "MEDIUM")) {
> +        ds_put_format(actions, ", router_preference = \"%s\"", prf);
> +    }
> +
> +    bool add_rs_response_flow = false;
> +
> +    for (size_t i = 0; i < op->lrp.networks.n_ipv6_addrs; i++) {
> +        if (in6_is_lla(&op->lrp.networks.ipv6_addrs[i].network)) {
> +            continue;
> +        }
> +
> +        ds_put_format(actions, ", prefix = %s/%u",
> +                      op->lrp.networks.ipv6_addrs[i].network_s,
> +                      op->lrp.networks.ipv6_addrs[i].plen);
> +
> +        add_rs_response_flow = true;
> +    }
> +
> +    if (add_rs_response_flow) {
> +        ds_put_cstr(actions, "); next;");
> +        ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_ND_RA_OPTIONS,
> +                                50, ds_cstr(match), ds_cstr(actions),
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +        ds_clear(actions);
> +        ds_clear(match);
> +        ds_put_format(match, "inport == %s && ip6.dst == ff02::2 && "
> +                      "nd_ra && "REGBIT_ND_RA_OPTS_RESULT, op->json_key);
> +
> +        char ip6_str[INET6_ADDRSTRLEN + 1];
> +        struct in6_addr lla;
> +        in6_generate_lla(op->lrp.networks.ea, &lla);
> +        memset(ip6_str, 0, sizeof(ip6_str));
> +        ipv6_string_mapped(ip6_str, &lla);
> +        ds_put_format(actions, "eth.dst = eth.src; eth.src = %s; "
> +                      "ip6.dst = ip6.src; ip6.src = %s; "
> +                      "outport = inport; flags.loopback = 1; "
> +                      "output;",
> +                      op->lrp.networks.ea_s, ip6_str);
> +        ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_ND_RA_RESPONSE, 50,
> +                                ds_cstr(match), ds_cstr(actions),
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +    }
> +}
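
Worked example (hypothetical values; REGBIT_ND_RA_OPTS_RESULT left
symbolic): for router port "lrp0" with MAC 00:00:00:00:00:ff, prefix
2001:db8::/64, ipv6_ra_address_mode=slaac, and no MTU or preference
options, the RA options flow is roughly

    table=S_ROUTER_IN_ND_RA_OPTIONS, priority=50,
      match=(inport == "lrp0" && ip6.dst == ff02::2 && nd_rs),
      action=(REGBIT_ND_RA_OPTS_RESULT = put_nd_ra_opts(
                  addr_mode = "slaac", slla = 00:00:00:00:00:ff,
                  prefix = 2001:db8::/64); next;)

followed by the S_ROUTER_IN_ND_RA_RESPONSE flow that swaps the
Ethernet and IPv6 addresses and answers from the port's link-local
address.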
> +
> +static void
> +build_dhcpv6_reply_flows_for_lrouter_port(
> +    struct hmap *lflows, struct local_lport *op,
> +    uint32_t *lflow_uuid_idx, struct ds *match)
> +{
> +    if (op->type == LP_CHASSISREDIRECT) {
> +        return;
> +    }
> +
> +    for (size_t i = 0; i < op->lrp.networks.n_ipv6_addrs; i++) {
> +        ds_clear(match);
> +        ds_put_format(match, "ip6.dst == %s && udp.src == 547 &&"
> +                      " udp.dst == 546",
> +                      op->lrp.networks.ipv6_addrs[i].addr_s);
> +        ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_IP_INPUT, 100,
> +                                ds_cstr(match),
> +                                "reg0 = 0; handle_dhcpv6_reply;",
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +    }
> +}
> +
> +static void
> +build_ipv6_input_flows_for_lrouter_port(
> +    struct hmap *lflows, struct local_lport *op,
> +    uint32_t *lflow_uuid_idx, struct ds *match, struct ds *actions)
> +{
> +    /* No ingress packets are accepted on a chassisredirect
> +     * port, so no need to program flows for that port. */
> +    if (op->type == LP_CHASSISREDIRECT) {
> +        return;
> +    }
> +
> +    if (op->lrp.networks.n_ipv6_addrs) {
> +        /* ICMPv6 echo reply.  These flows reply to echo requests
> +         * received for the router's IP address. */
> +        ds_clear(match);
> +        ds_put_cstr(match, "ip6.dst == ");
> +        op_put_v6_networks(match, op);
> +        ds_put_cstr(match, " && icmp6.type == 128 && icmp6.code == 0");
> +
> +        const char *lrp_actions =
> +                    "ip6.dst <-> ip6.src; "
> +                    "ip.ttl = 255; "
> +                    "icmp6.type = 129; "
> +                    "flags.loopback = 1; "
> +                    "next; ";
> +        ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_IP_INPUT, 90,
> +                                ds_cstr(match), lrp_actions,
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +    }
> +
> +    /* ND reply.  These flows reply to ND solicitations for the
> +     * router's own IP address. */
> +    for (size_t i = 0; i < op->lrp.networks.n_ipv6_addrs; i++) {
> +        ds_clear(match);
> +        if (op->lrp.is_l3dgw_port && op->lrp.chassis_redirect_json_key) {
> +            /* Traffic with eth.src = l3dgw_port->lrp_networks.ea_s
> +             * should only be sent from the gateway chassis, so that
> +             * upstream MAC learning points to the gateway chassis.
> +             * Also need to avoid generation of multiple ND replies
> +             * from different chassis. */
> +            ds_put_format(match, "is_chassis_resident(%s)",
> +                          op->lrp.chassis_redirect_json_key);
> +        }
> +
> +        build_lrouter_nd_flow(lflows, op, &op->pb->header_.uuid,
> +                              lflow_uuid_idx, "nd_na_router",
> +                              op->lrp.networks.ipv6_addrs[i].addr_s,
> +                              op->lrp.networks.ipv6_addrs[i].sn_addr_s,
> +                              REG_INPORT_ETH_ADDR, match, false, 90);
> +    }
> +
> +    /* UDP/TCP/SCTP port unreachable */
> +    if (op->type != LP_L3GATEWAY && !op->lrp.dp_has_l3dgw_port) {
> +        for (int i = 0; i < op->lrp.networks.n_ipv6_addrs; i++) {
> +            ds_clear(match);
> +            ds_put_format(match,
> +                          "ip6 && ip6.dst == %s && !ip.later_frag && tcp",
> +                          op->lrp.networks.ipv6_addrs[i].addr_s);
> +            const char *action = "tcp_reset {"
> +                                 "eth.dst <-> eth.src; "
> +                                 "ip6.dst <-> ip6.src; "
> +                                 "next; };";
> +            ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_IP_INPUT,
> +                                    80, ds_cstr(match), action,
> +                                    &op->pb->header_.uuid, lflow_uuid_idx);
> +
> +            ds_clear(match);
> +            ds_put_format(match,
> +                            "ip6 && ip6.dst == %s && !ip.later_frag && sctp",
> +                            op->lrp.networks.ipv6_addrs[i].addr_s);
> +            action = "sctp_abort {"
> +                        "eth.dst <-> eth.src; "
> +                        "ip6.dst <-> ip6.src; "
> +                        "next; };";
> +            ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_IP_INPUT,
> +                                    80, ds_cstr(match), action,
> +                                    &op->pb->header_.uuid, lflow_uuid_idx);
> +
> +            ds_clear(match);
> +            ds_put_format(match,
> +                            "ip6 && ip6.dst == %s && !ip.later_frag && udp",
> +                            op->lrp.networks.ipv6_addrs[i].addr_s);
> +            action = "icmp6 {"
> +                        "eth.dst <-> eth.src; "
> +                        "ip6.dst <-> ip6.src; "
> +                        "ip.ttl = 255; "
> +                        "icmp6.type = 1; "
> +                        "icmp6.code = 4; "
> +                        "next; };";
> +            ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_IP_INPUT,
> +                                    80, ds_cstr(match), action,
> +                                    &op->pb->header_.uuid, lflow_uuid_idx);
> +
> +            ds_clear(match);
> +            ds_put_format(match,
> +                            "ip6 && ip6.dst == %s && !ip.later_frag",
> +                            op->lrp.networks.ipv6_addrs[i].addr_s);
> +            action = "icmp6 {"
> +                        "eth.dst <-> eth.src; "
> +                        "ip6.dst <-> ip6.src; "
> +                        "ip.ttl = 255; "
> +                        "icmp6.type = 1; "
> +                        "icmp6.code = 3; "
> +                        "next; };";
> +            ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_IP_INPUT,
> +                                    70, ds_cstr(match), action,
> +                                    &op->pb->header_.uuid, lflow_uuid_idx);
> +        }
> +    }
> +
> +    /* ICMPv6 time exceeded */
> +    for (int i = 0; i < op->lrp.networks.n_ipv6_addrs; i++) {
> +        /* skip link-local address */
> +        if (in6_is_lla(&op->lrp.networks.ipv6_addrs[i].network)) {
> +            continue;
> +        }
> +
> +        ds_clear(match);
> +        ds_clear(actions);
> +
> +        ds_put_format(match,
> +                      "inport == %s && ip6 && "
> +                      "ip6.src == %s/%d && "
> +                      "ip.ttl == {0, 1} && !ip.later_frag",
> +                      op->json_key,
> +                      op->lrp.networks.ipv6_addrs[i].network_s,
> +                      op->lrp.networks.ipv6_addrs[i].plen);
> +        ds_put_format(actions,
> +                      "icmp6 {"
> +                      "eth.dst <-> eth.src; "
> +                      "ip6.dst = ip6.src; "
> +                      "ip6.src = %s; "
> +                      "ip.ttl = 255; "
> +                      "icmp6.type = 3; /* Time exceeded */ "
> +                      "icmp6.code = 0; /* TTL exceeded in transit */ "
> +                      "next; };",
> +                      op->lrp.networks.ipv6_addrs[i].addr_s);
> +        ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_IP_INPUT, 40,
> +                                ds_cstr(match), ds_cstr(actions),
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +    }
> +
> +}
> +
> +/* Builds the logical flow that replies to NS requests for an 'ip_address'
> + * owned by the router. The flow is inserted in table S_ROUTER_IN_IP_INPUT
> + * with the given priority. If 'sn_ip_address' is non-NULL, requests are
> + * restricted only to packets with IP destination 'ip_address' or
> + * 'sn_ip_address'.
> + */
> +static void
> +build_lrouter_nd_flow(struct hmap *lflows, struct local_lport *op,
> +                      const struct uuid *flow_uuid, uint32_t *lflow_uuid_idx,
> +                      const char *action, const char *ip_address,
> +                      const char *sn_ip_address, const char *eth_addr,
> +                      struct ds *extra_match, bool drop, uint16_t priority)
> +{
> +    struct ds match = DS_EMPTY_INITIALIZER;
> +    struct ds actions = DS_EMPTY_INITIALIZER;
> +
> +    if (op) {
> +        ds_put_format(&match, "inport == %s && ", op->json_key);
> +    }
> +
> +    if (sn_ip_address) {
> +        ds_put_format(&match, "ip6.dst == {%s, %s} && ",
> +                      ip_address, sn_ip_address);
> +    }
> +
> +    ds_put_format(&match, "nd_ns && nd.target == %s", ip_address);
> +
> +    if (extra_match && ds_last(extra_match) != EOF) {
> +        ds_put_format(&match, " && %s", ds_cstr(extra_match));
> +    }
> +
> +    if (drop) {
> +        ds_put_format(&actions, "drop;");
> +    } else {
> +        ds_put_format(&actions,
> +                      "%s { "
> +                        "eth.src = %s; "
> +                        "ip6.src = %s; "
> +                        "nd.target = %s; "
> +                        "nd.tll = %s; "
> +                        "outport = inport; "
> +                        "flags.loopback = 1; "
> +                        "output; "
> +                      "};",
> +                      action,
> +                      eth_addr,
> +                      ip_address,
> +                      ip_address,
> +                      eth_addr);
> +    }
> +
> +    ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_IP_INPUT, priority,
> +                            ds_cstr(&match), ds_cstr(&actions),
> +                            flow_uuid, lflow_uuid_idx);
> +
> +    ds_destroy(&match);
> +    ds_destroy(&actions);
> +}
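
Worked example (hypothetical address; REG_INPORT_ETH_ADDR left
symbolic): called from the IPv6 input code above for router port
"lrp0" owning 2001:db8::1, this produces

    table=S_ROUTER_IN_IP_INPUT, priority=90,
      match=(inport == "lrp0" &&
             ip6.dst == {2001:db8::1, ff02::1:ff00:1} &&
             nd_ns && nd.target == 2001:db8::1),
      action=(nd_na_router { eth.src = REG_INPORT_ETH_ADDR;
              ip6.src = 2001:db8::1; nd.target = 2001:db8::1;
              nd.tll = REG_INPORT_ETH_ADDR; outport = inport;
              flags.loopback = 1; output; };)

with "&& is_chassis_resident(...)" appended when the caller passes it
in extra_match for a distributed gateway port.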
> +
> +/* Logical router ingress table 3: IP Input for IPv4. */
> +static void
> +build_lrouter_ipv4_ip_input(struct hmap *lflows, struct local_lport *op,
> +                            uint32_t *lflow_uuid_idx, struct ds *match,
> +                            struct ds *actions)
> +{
> +    /* No ingress packets are accepted on a chassisredirect
> +     * port, so no need to program flows for that port. */
> +    if (op->type == LP_CHASSISREDIRECT) {
> +        return;
> +    }
> +
> +    if (op->lrp.networks.n_ipv4_addrs) {
> +        /* L3 admission control: drop packets that originate from an
> +         * IPv4 address owned by the router or a broadcast address
> +         * known to the router (priority 100). */
> +        ds_clear(match);
> +        ds_put_cstr(match, "ip4.src == ");
> +        op_put_v4_networks(match, op, true);
> +        ds_put_cstr(match, " && "REGBIT_EGRESS_LOOPBACK" == 0");
> +        ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_IP_INPUT, 100,
> +                                ds_cstr(match), "drop;",
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +
> +        /* ICMP echo reply.  These flows reply to ICMP echo requests
> +         * received for the router's IP address. Since packets only
> +         * get here as part of the logical router datapath, the inport
> +         * (i.e. the incoming locally attached net) does not matter.
> +         * The ip.ttl also does not matter (RFC1812 section 4.2.2.9) */
> +        ds_clear(match);
> +        ds_put_cstr(match, "ip4.dst == ");
> +        op_put_v4_networks(match, op, false);
> +        ds_put_cstr(match, " && icmp4.type == 8 && icmp4.code == 0");
> +
> +        const char *icmp_actions = "ip4.dst <-> ip4.src; "
> +                        "ip.ttl = 255; "
> +                        "icmp4.type = 0; "
> +                        "flags.loopback = 1; "
> +                        "next; ";
> +        ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_IP_INPUT, 90,
> +                                ds_cstr(match), icmp_actions,
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +    }
> +
> +    /* BFD msg handling */
> +    build_lrouter_bfd_flows(lflows, op, lflow_uuid_idx);
> +
> +    /* ICMP time exceeded */
> +    for (int i = 0; i < op->lrp.networks.n_ipv4_addrs; i++) {
> +        ds_clear(match);
> +        ds_clear(actions);
> +
> +        ds_put_format(match,
> +                      "inport == %s && ip4 && "
> +                      "ip.ttl == {0, 1} && !ip.later_frag", op->json_key);
> +        ds_put_format(actions,
> +                      "icmp4 {"
> +                      "eth.dst <-> eth.src; "
> +                      "icmp4.type = 11; /* Time exceeded */ "
> +                      "icmp4.code = 0; /* TTL exceeded in transit */ "
> +                      "ip4.dst = ip4.src; "
> +                      "ip4.src = %s; "
> +                      "ip.ttl = 255; "
> +                      "next; };",
> +                      op->lrp.networks.ipv4_addrs[i].addr_s);
> +        ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_IP_INPUT, 40,
> +                                ds_cstr(match), ds_cstr(actions),
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +    }
> +
> +    /* ARP reply.  These flows reply to ARP requests for the router's own
> +     * IP address. */
> +    for (int i = 0; i < op->lrp.networks.n_ipv4_addrs; i++) {
> +        ds_clear(match);
> +        ds_put_format(match, "arp.spa == %s/%u",
> +                      op->lrp.networks.ipv4_addrs[i].network_s,
> +                      op->lrp.networks.ipv4_addrs[i].plen);
> +
> +        if (op->lrp.dp_has_l3dgw_port && op->peer
> +                && op->lrp.peer_dp_has_localnet_ports) {
> +            bool add_chassis_resident_check = false;
> +            if (op->lrp.is_l3dgw_port) {
> +                /* Traffic with eth.src = l3dgw_port->lrp_networks.ea_s
> +                 * should only be sent from the gateway chassis, so that
> +                 * upstream MAC learning points to the gateway chassis.
> +                 * Also need to avoid generation of multiple ARP responses
> +                 * from different chassis. */
> +                add_chassis_resident_check = true;
> +            } else {
> +                /* Check if the option 'reside-on-redirect-chassis'
> +                 * is set to true on the router port.  If set to true
> +                 * and if the peer's logical switch has a localnet port,
> +                 * it means the router pipeline for the packets from the
> +                 * peer's logical switch is run on the chassis hosting
> +                 * the gateway port and it should reply to the ARP
> +                 * requests for the router port IPs.
> +                 */
> +                add_chassis_resident_check = smap_get_bool(
> +                    &op->pb->options,
> +                    "reside-on-redirect-chassis", false);
> +            }
> +
> +            if (add_chassis_resident_check) {
> +                ds_put_format(match, " && is_chassis_resident(%s)",
> +                              op->lrp.chassis_redirect_json_key);
> +            }
> +        }
> +
> +        build_lrouter_arp_flow(lflows, op, &op->pb->header_.uuid,
> +                               lflow_uuid_idx,
> +                               op->lrp.networks.ipv4_addrs[i].addr_s,
> +                               REG_INPORT_ETH_ADDR, match, false, 90);
> +    }
> +
> +    if (op->type != LP_L3GATEWAY && !op->lrp.dp_has_l3dgw_port) {
> +        /* UDP/TCP/SCTP port unreachable. */
> +        for (int i = 0; i < op->lrp.networks.n_ipv4_addrs; i++) {
> +            ds_clear(match);
> +            ds_put_format(match,
> +                          "ip4 && ip4.dst == %s && !ip.later_frag && udp",
> +                          op->lrp.networks.ipv4_addrs[i].addr_s);
> +            const char *action = "icmp4 {"
> +                                    "eth.dst <-> eth.src; "
> +                                    "ip4.dst <-> ip4.src; "
> +                                    "ip.ttl = 255; "
> +                                    "icmp4.type = 3; "
> +                                    "icmp4.code = 3; "
> +                                    "next; };";
> +            ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_IP_INPUT,
> +                                    80, ds_cstr(match), action,
> +                                    &op->pb->header_.uuid, lflow_uuid_idx);
> +
> +            ds_clear(match);
> +            ds_put_format(match,
> +                            "ip4 && ip4.dst == %s && !ip.later_frag && tcp",
> +                            op->lrp.networks.ipv4_addrs[i].addr_s);
> +            action = "tcp_reset {"
> +                        "eth.dst <-> eth.src; "
> +                        "ip4.dst <-> ip4.src; "
> +                        "next; };";
> +            ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_IP_INPUT,
> +                                    80, ds_cstr(match), action,
> +                                    &op->pb->header_.uuid, lflow_uuid_idx);
> +
> +            ds_clear(match);
> +            ds_put_format(match,
> +                            "ip4 && ip4.dst == %s && !ip.later_frag && sctp",
> +                            op->lrp.networks.ipv4_addrs[i].addr_s);
> +            action = "sctp_abort {"
> +                        "eth.dst <-> eth.src; "
> +                        "ip4.dst <-> ip4.src; "
> +                        "next; };";
> +            ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_IP_INPUT,
> +                                    80, ds_cstr(match), action,
> +                                    &op->pb->header_.uuid, lflow_uuid_idx);
> +
> +            ds_clear(match);
> +            ds_put_format(match,
> +                            "ip4 && ip4.dst == %s && !ip.later_frag",
> +                            op->lrp.networks.ipv4_addrs[i].addr_s);
> +            action = "icmp4 {"
> +                        "eth.dst <-> eth.src; "
> +                        "ip4.dst <-> ip4.src; "
> +                        "ip.ttl = 255; "
> +                        "icmp4.type = 3; "
> +                        "icmp4.code = 2; "
> +                        "next; };";
> +            ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_IP_INPUT,
> +                                    70, ds_cstr(match), action,
> +                                    &op->pb->header_.uuid, lflow_uuid_idx);
> +        }
> +    }
> +}
> +
> +static void
> +build_lrouter_bfd_flows(struct hmap *lflows, struct local_lport *op,
> +                        uint32_t *lflow_uuid_idx)
> +{
> +    if (!op->lrp.has_bfd) {
> +        return;
> +    }
> +
> +    struct ds ip_list = DS_EMPTY_INITIALIZER;
> +    struct ds match = DS_EMPTY_INITIALIZER;
> +
> +    if (op->lrp.networks.n_ipv4_addrs) {
> +        op_put_v4_networks(&ip_list, op, false);
> +        ds_put_format(&match, "ip4.src == %s && udp.dst == 3784",
> +                      ds_cstr(&ip_list));
> +        ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_IP_INPUT, 110,
> +                                ds_cstr(&match), "next; ",
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +        ds_clear(&match);
> +        ds_put_format(&match, "ip4.dst == %s && udp.dst == 3784",
> +                      ds_cstr(&ip_list));
> +        ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_IP_INPUT, 110,
> +                                ds_cstr(&match), "handle_bfd_msg(); ",
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +    }
> +
> +    if (op->lrp.networks.n_ipv6_addrs) {
> +        ds_clear(&ip_list);
> +        ds_clear(&match);
> +
> +        op_put_v6_networks(&ip_list, op);
> +        ds_put_format(&match, "ip6.src == %s && udp.dst == 3784",
> +                      ds_cstr(&ip_list));
> +        ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_IP_INPUT, 110,
> +                                ds_cstr(&match), "next; ",
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +        ds_clear(&match);
> +        ds_put_format(&match, "ip6.dst == %s && udp.dst == 3784",
> +                      ds_cstr(&ip_list));
> +        ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_IP_INPUT, 110,
> +                                ds_cstr(&match), "handle_bfd_msg(); ",
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +    }
> +
> +    ds_destroy(&ip_list);
> +    ds_destroy(&match);
> +}
> +
> +/* Builds the logical flow that replies to ARP requests for an 'ip_address'
> + * owned by the router. The flow is inserted in table S_ROUTER_IN_IP_INPUT
> + * with the given priority.
> + */
> +static void
> +build_lrouter_arp_flow(struct hmap *lflows, struct local_lport *op,
> +                       const struct uuid *lflow_uuid, uint32_t *lflow_uuid_idx,
> +                       const char *ip_address, const char *eth_addr,
> +                       struct ds *extra_match, bool drop, uint16_t priority)
> +{
> +    struct ds match = DS_EMPTY_INITIALIZER;
> +    struct ds actions = DS_EMPTY_INITIALIZER;
> +
> +    if (op) {
> +        ds_put_format(&match, "inport == %s && ", op->json_key);
> +    }
> +
> +    ds_put_format(&match, "arp.op == 1 && arp.tpa == %s", ip_address);
> +
> +    if (extra_match && ds_last(extra_match) != EOF) {
> +        ds_put_format(&match, " && %s", ds_cstr(extra_match));
> +    }
> +    if (drop) {
> +        ds_put_format(&actions, "drop;");
> +    } else {
> +        ds_put_format(&actions,
> +                      "eth.dst = eth.src; "
> +                      "eth.src = %s; "
> +                      "arp.op = 2; /* ARP reply */ "
> +                      "arp.tha = arp.sha; "
> +                      "arp.sha = %s; "
> +                      "arp.tpa <-> arp.spa; "
> +                      "outport = inport; "
> +                      "flags.loopback = 1; "
> +                      "output;",
> +                      eth_addr,
> +                      eth_addr);
> +    }
> +
> +    ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_IP_INPUT, priority,
> +                            ds_cstr(&match), ds_cstr(&actions),
> +                            lflow_uuid, lflow_uuid_idx);
> +
> +    ds_destroy(&match);
> +    ds_destroy(&actions);
> +}
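
(Side note, not part of the patch: to make the helper's output concrete, for a
hypothetical router port "lrp0" that owns 10.0.0.1 and is called with
eth_addr = REG_INPORT_ETH_ADDR and drop == false, the generated lflow matches
roughly on

    inport == "lrp0" && arp.op == 1 && arp.tpa == 10.0.0.1 && <extra match>

and turns the request into a reply:

    eth.dst = eth.src; eth.src = xreg0[0..47]; arp.op = 2; arp.tha = arp.sha;
    arp.sha = xreg0[0..47]; arp.tpa <-> arp.spa; outport = inport;
    flags.loopback = 1; output;

The port name and address are invented; the shape follows directly from the
format strings above.)
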
> +
> +static void
> +build_lrouter_force_snat_flows_op(struct hmap *lflows, struct local_lport *op,
> +                                  uint32_t *lflow_uuid_idx, struct ds *match,
> +                                  struct ds *actions)
> +{
> +    if (op->type == LP_CHASSISREDIRECT) {
> +        return;
> +    }
> +
> +    if (!op->peer || !smap_get_bool(&op->pb->datapath->options,
> +                                    "lb-force-snat-router-ip", false)) {
> +        return;
> +    }
> +
> +    if (op->lrp.networks.n_ipv4_addrs) {
> +        ds_clear(match);
> +        ds_clear(actions);
> +
> +        ds_put_format(match, "inport == %s && ip4.dst == %s",
> +                      op->json_key, op->lrp.networks.ipv4_addrs[0].addr_s);
> +        ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_UNSNAT, 110,
> +                                ds_cstr(match), "ct_snat;",
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +
> +        ds_clear(match);
> +
> +        /* Higher priority rules to force SNAT with the router port ip.
> +         * This only takes effect when the packet has already been
> +         * load balanced once. */
> +        ds_put_format(match, "flags.force_snat_for_lb == 1 && ip4 && "
> +                      "outport == %s", op->json_key);
> +        ds_put_format(actions, "ct_snat(%s);",
> +                      op->lrp.networks.ipv4_addrs[0].addr_s);
> +        ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_OUT_SNAT, 110,
> +                                ds_cstr(match), ds_cstr(actions),
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +        if (op->lrp.networks.n_ipv4_addrs > 1) {
> +            static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
> +            VLOG_WARN_RL(&rl, "Logical router port %s is configured with "
> +                              "multiple IPv4 addresses.  Only the first "
> +                              "IP [%s] is considered as SNAT for load "
> +                              "balancer", op->json_key,
> +                              op->lrp.networks.ipv4_addrs[0].addr_s);
> +        }
> +    }
> +
> +    /* op->lrp.networks.ipv6_addrs will always have LLA and that will be
> +     * last in the list. So add the flows only if n_ipv6_addrs > 1. */
> +    if (op->lrp.networks.n_ipv6_addrs > 1) {
> +        ds_clear(match);
> +        ds_clear(actions);
> +
> +        ds_put_format(match, "inport == %s && ip6.dst == %s",
> +                      op->json_key, op->lrp.networks.ipv6_addrs[0].addr_s);
> +        ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_UNSNAT, 110,
> +                                ds_cstr(match), "ct_snat;",
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +
> +        ds_clear(match);
> +
> +        /* Higher priority rules to force SNAT with the router port ip.
> +         * This only takes effect when the packet has already been
> +         * load balanced once. */
> +        ds_put_format(match, "flags.force_snat_for_lb == 1 && ip6 && "
> +                      "outport == %s", op->json_key);
> +        ds_put_format(actions, "ct_snat(%s);",
> +                      op->lrp.networks.ipv6_addrs[0].addr_s);
> +        ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_OUT_SNAT, 110,
> +                                ds_cstr(match), ds_cstr(actions),
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +        if (op->lrp.networks.n_ipv6_addrs > 2) {
> +            static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
> +            VLOG_WARN_RL(&rl, "Logical router port %s is configured with "
> +                              "multiple IPv6 addresses.  Only the first "
> +                              "IP [%s] is considered as SNAT for load "
> +                              "balancer", op->json_key,
> +                              op->lrp.networks.ipv6_addrs[0].addr_s);
> +        }
> +    }
> +}
> +
> +/* Local router ingress table ARP_RESOLVE: ARP Resolution.
> + *
> + * Any unicast packet that reaches this table is an IP packet whose
> + * next-hop IP address is in REG_NEXT_HOP_IPV4/REG_NEXT_HOP_IPV6
> + * (ip4.dst/ipv6.dst is the final destination).
> + * This table resolves the IP address in
> + * REG_NEXT_HOP_IPV4/REG_NEXT_HOP_IPV6 into an output port in outport and
> + * an Ethernet address in eth.dst.
> + */
> +static void
> +build_arp_resolve_flows_for_lrouter_port(
> +        struct hmap *lflows, struct local_lport *op, uint32_t *lflow_uuid_idx,
> +        struct ds *match, struct ds *actions)
> +{
> +    /* This is a logical router port. If next-hop IP address in
> +     * REG_NEXT_HOP_IPV4/REG_NEXT_HOP_IPV6 matches IP address of this
> +     * router port, then the packet is intended to eventually be sent
> +     * to this logical port. Set the destination mac address using
> +     * this port's mac address.
> +     *
> +     * The packet is still in peer's logical pipeline. So the match
> +     * should be on peer's outport. */
> +    if (op->peer && !op->peer->ldp->is_switch) {
> +        /* Both this port and its peer are router ports. */
> +        if (op->peer->lrp.networks.n_ipv4_addrs) {
> +            ds_clear(match);
> +            ds_put_format(match, "outport == %s && "
> +                          REG_NEXT_HOP_IPV4 " == ",
> +                          op->json_key);
> +            op_put_v4_networks(match, op->peer, false);
> +
> +            ds_clear(actions);
> +            ds_put_format(actions, "eth.dst = %s; next;",
> +                          op->peer->lrp.networks.ea_s);
> +            ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_ARP_RESOLVE, 100,
> +                                    ds_cstr(match), ds_cstr(actions),
> +                                    &op->pb->header_.uuid, lflow_uuid_idx);
> +        }
> +
> +        if (op->peer->lrp.networks.n_ipv6_addrs) {
> +            ds_clear(match);
> +            ds_put_format(match, "outport == %s && "
> +                          REG_NEXT_HOP_IPV6 " == ",
> +                          op->json_key);
> +            op_put_v6_networks(match, op->peer);
> +
> +            ds_clear(actions);
> +            ds_put_format(actions, "eth.dst = %s; next;",
> +                          op->peer->lrp.networks.ea_s);
> +            ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_ARP_RESOLVE, 100,
> +                               ds_cstr(match), ds_cstr(actions),
> +                               &op->pb->header_.uuid, lflow_uuid_idx);
> +        }
> +    }
> +
> +    if (op->type == LP_CHASSISREDIRECT) {
> +        const char *redir_type = smap_get(&op->pb->options, "redirect-type");
> +        const char *gw_port = smap_get(&op->pb->options, "distributed-port");
> +        if (redir_type && gw_port && !strcasecmp(redir_type, "bridged")) {
> +            /* Packet is on a non-gateway chassis and
> +             * has an unresolved ARP on a network behind a gateway
> +             * chassis attached router port.  Since the redirect type
> +             * is "bridged", instead of calling "get_arp"
> +             * on this node, we redirect the packet to the gateway
> +             * chassis by setting the destination mac to the router
> +             * port mac. */
> +            ds_clear(match);
> +            ds_put_format(match, "outport == \"%s\" && "
> +                          "!is_chassis_resident(%s)", gw_port, op->json_key);
> +            ds_clear(actions);
> +            ds_put_format(actions, "eth.dst = %s; next;",
> +                          op->lrp.networks.ea_s);
> +
> +            ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_ARP_RESOLVE, 50,
> +                                    ds_cstr(match), ds_cstr(actions),
> +                                    &op->pb->header_.uuid, lflow_uuid_idx);
> +        }
> +    }
> +}
> +
> +/* Logical router egress table DELIVERY: Delivery (priority 100-110).
> + *
> + * Priority 100 rules deliver packets to enabled logical ports.
> + * Priority 110 rules match multicast packets and update the source
> + * mac before delivering to enabled logical ports. IP multicast traffic
> + * bypasses S_ROUTER_IN_IP_ROUTING route lookups.
> + */
> +static void
> +build_egress_delivery_flows_for_lrouter_port(
> +    struct hmap *lflows, struct local_lport *op,
> +    uint32_t *lflow_uuid_idx, struct ds *match, struct ds *actions)
> +{
> +    if (op->type == LP_CHASSISREDIRECT) {
> +        return;
> +    }
> +
> +#if 0
> +TODO
> +    if (!lrport_is_enabled(op->nbrp)) {
> +        /* Drop packets to disabled logical ports (since logical flow
> +         * tables are default-drop). */
> +        return;
> +    }
> +#endif
> +
> +    /* If multicast relay is enabled then also adjust source mac for IP
> +     * multicast traffic. */
> +    if (smap_get_bool(&op->pb->datapath->options, "mcast-relay", false)) {
> +        ds_clear(match);
> +        ds_clear(actions);
> +        ds_put_format(match, "(ip4.mcast || ip6.mcast) && outport == %s",
> +                      op->json_key);
> +        ds_put_format(actions, "eth.src = %s; output;", op->lrp.networks.ea_s);
> +        ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_OUT_DELIVERY, 110,
> +                                ds_cstr(match), ds_cstr(actions),
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +    }
> +
> +    ds_clear(match);
> +    ds_put_format(match, "outport == %s", op->json_key);
> +    ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_OUT_DELIVERY, 100,
> +                            ds_cstr(match), "output;",
> +                            &op->pb->header_.uuid, lflow_uuid_idx);
> +}
> +
> +/* lrouter util functions. */
> +static void
> +op_put_v4_networks(struct ds *ds, const struct local_lport *op,
> +                   bool add_bcast)
> +{
> +    if (!add_bcast && op->lrp.networks.n_ipv4_addrs == 1) {
> +        ds_put_format(ds, "%s", op->lrp.networks.ipv4_addrs[0].addr_s);
> +        return;
> +    }
> +
> +    ds_put_cstr(ds, "{");
> +    for (int i = 0; i < op->lrp.networks.n_ipv4_addrs; i++) {
> +        ds_put_format(ds, "%s, ", op->lrp.networks.ipv4_addrs[i].addr_s);
> +        if (add_bcast) {
> +            ds_put_format(ds, "%s, ", op->lrp.networks.ipv4_addrs[i].bcast_s);
> +        }
> +    }
> +    ds_chomp(ds, ' ');
> +    ds_chomp(ds, ',');
> +    ds_put_cstr(ds, "}");
> +}
> +
> +static void
> +op_put_v6_networks(struct ds *ds, const struct local_lport *op)
> +{
> +    if (op->lrp.networks.n_ipv6_addrs == 1) {
> +        ds_put_format(ds, "%s", op->lrp.networks.ipv6_addrs[0].addr_s);
> +        return;
> +    }
> +
> +    ds_put_cstr(ds, "{");
> +    for (int i = 0; i < op->lrp.networks.n_ipv6_addrs; i++) {
> +        ds_put_format(ds, "%s, ", op->lrp.networks.ipv6_addrs[i].addr_s);
> +    }
> +    ds_chomp(ds, ' ');
> +    ds_chomp(ds, ',');
> +    ds_put_cstr(ds, "}");
> +}
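
(Side note: these two helpers render the port's addresses as an OVN match set.
For a hypothetical port with 10.0.0.1/24 and 20.0.0.1/24, op_put_v4_networks()
with add_bcast == true would produce

    {10.0.0.1, 10.0.0.255, 20.0.0.1, 20.0.0.255}

while a port with a single address and add_bcast == false gets just the bare
address with no braces.  The addresses are made up.)
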
> +
> +static void
> +build_route_match(const struct local_lport *op_inport, const char *network_s,
> +                  int plen, bool is_src_route, bool is_ipv4, struct ds *match,
> +                  uint16_t *priority)
> +{
> +    const char *dir;
> +    /* The priority here is calculated to implement longest-prefix-match
> +     * routing. */
> +    if (is_src_route) {
> +        dir = "src";
> +        *priority = plen * 2;
> +    } else {
> +        dir = "dst";
> +        *priority = (plen * 2) + 1;
> +    }
> +
> +    if (op_inport) {
> +        ds_put_format(match, "inport == %s && ", op_inport->json_key);
> +    }
> +    ds_put_format(match, "ip%s.%s == %s/%d", is_ipv4 ? "4" : "6", dir,
> +                  network_s, plen);
> +}
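
(Side note, not part of the patch: the priority arithmetic above is what
implements longest-prefix-match routing; longer prefixes always win, and at
equal prefix length a dst route outranks a src route.  A minimal standalone
sketch, with made-up prefix lengths, to make the numbers concrete:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Mirrors the priority calculation in build_route_match(). */
    static uint16_t
    route_priority(int plen, bool is_src_route)
    {
        return is_src_route ? plen * 2 : plen * 2 + 1;
    }

    int
    main(void)
    {
        printf("/16 dst -> %u\n", route_priority(16, false));  /* 33 */
        printf("/24 src -> %u\n", route_priority(24, true));   /* 48 */
        printf("/24 dst -> %u\n", route_priority(24, false));  /* 49 */
        return 0;
    }

So a /24 route always beats a /16 route, and at the same prefix length the dst
route gets the higher priority.)
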
> +
> +static void
> +add_route(
> +    struct hmap *lflows, struct local_lport *op,
> +    uint32_t *lflow_uuid_idx, struct ds *match, struct ds *actions,
> +    const char *lrp_addr_s, const char *network_s, int plen,
> +    const char *gateway, bool is_src_route, bool is_discard_route)
> +{
> +    bool is_ipv4 = strchr(network_s, '.') ? true : false;
> +    uint16_t priority;
> +    const struct local_lport *op_inport = NULL;
> +
> +    /* IPv6 link-local addresses must be scoped to the local router port. */
> +    if (!is_ipv4) {
> +        struct in6_addr network;
> +        ovs_assert(ipv6_parse(network_s, &network));
> +        if (in6_is_lla(&network)) {
> +            op_inport = op;
> +        }
> +    }
> +
> +    ds_clear(match);
> +    ds_clear(actions);
> +
> +    build_route_match(op_inport, network_s, plen, is_src_route, is_ipv4,
> +                      match, &priority);
> +
> +    struct ds common_actions = DS_EMPTY_INITIALIZER;
> +
> +    if (is_discard_route) {
> +        ds_put_format(actions, "drop;");
> +    } else {
> +        ds_put_format(&common_actions, REG_ECMP_GROUP_ID" = 0; %s = ",
> +                      is_ipv4 ? REG_NEXT_HOP_IPV4 : REG_NEXT_HOP_IPV6);
> +        if (gateway) {
> +            ds_put_cstr(&common_actions, gateway);
> +        } else {
> +            ds_put_format(&common_actions, "ip%s.dst", is_ipv4 ? "4" : "6");
> +        }
> +        ds_put_format(&common_actions, "; "
> +                      "%s = %s; "
> +                      "eth.src = %s; "
> +                      "outport = %s; "
> +                      "flags.loopback = 1; "
> +                      "next;",
> +                      is_ipv4 ? REG_SRC_IPV4 : REG_SRC_IPV6,
> +                      lrp_addr_s,
> +                      op->lrp.networks.ea_s,
> +                      op->json_key);
> +        ds_put_format(actions, "ip.ttl--; %s", ds_cstr(&common_actions));
> +    }
> +
> +    ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_IP_ROUTING, priority,
> +                            ds_cstr(match), ds_cstr(actions),
> +                            &op->pb->header_.uuid, lflow_uuid_idx);
> +    if (op && op->lrp.has_bfd) {
> +        ds_put_format(match, " && udp.dst == 3784");
> +        ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_IP_ROUTING,
> +                                priority + 1, ds_cstr(match),
> +                                ds_cstr(&common_actions),
> +                                &op->pb->header_.uuid, lflow_uuid_idx);
> +    }
> +
> +    ds_destroy(&common_actions);
> +}
> +
> +static bool
> +is_ip4_in_router_network(struct local_lport *router_lport,
> +                         ovs_be32 ip)
> +{
> +    ovs_assert(!router_lport->ldp->is_switch);
> +
> +    for (size_t i = 0; i < router_lport->lrp.networks.n_ipv4_addrs; i++) {
> +        const struct ipv4_netaddr *na =
> +            &router_lport->lrp.networks.ipv4_addrs[i];
> +
> +        if (!((na->network ^ ip) & na->mask)) {
> +            /* There should be only 1 interface that matches the
> +             * supplied IP.  Otherwise, it's a configuration error,
> +             * because subnets of a router's interfaces should NOT
> +             * overlap. */
> +            return true;
> +        }
> +    }
> +
> +    return false;
> +}
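
(Side note, not part of the patch: the test above is the usual "no bits differ
under the mask" subnet check.  A minimal standalone sketch with made-up
addresses, in host byte order purely for readability:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Same test as in is_ip4_in_router_network(). */
    static bool
    in_subnet(uint32_t network, uint32_t mask, uint32_t ip)
    {
        return !((network ^ ip) & mask);
    }

    int
    main(void)
    {
        uint32_t network = 0xc0a80100;  /* 192.168.1.0 */
        uint32_t mask    = 0xffffff00;  /* /24 */

        printf("%d\n", in_subnet(network, mask, 0xc0a80105)); /* 192.168.1.5 -> 1 */
        printf("%d\n", in_subnet(network, mask, 0xc0a80205)); /* 192.168.2.5 -> 0 */
        return 0;
    }

The real code simply runs this per configured network of the router port.)
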
> +
> +static bool
> +is_ip6_in_router_network(struct local_lport *router_lport,
> +                         struct in6_addr ip6)
> +{
> +    ovs_assert(!router_lport->ldp->is_switch);
> +
> +    for (size_t i = 0; i < router_lport->lrp.networks.n_ipv6_addrs; i++) {
> +        const struct ipv6_netaddr *na =
> +            &router_lport->lrp.networks.ipv6_addrs[i];
> +
> +        struct in6_addr xor_addr = ipv6_addr_bitxor(&na->network, &ip6);
> +        struct in6_addr and_addr = ipv6_addr_bitand(&xor_addr, &na->mask);
> +
> +        if (ipv6_is_zero(&and_addr)) {
> +            /* There should be only 1 interface that matches the
> +             * supplied IP.  Otherwise, it's a configuration error,
> +             * because subnets of a router's interfaces should NOT
> +             * overlap. */
> +            return true;
> +        }
> +    }
> +
> +    return false;
> +}
> +
> +static void build_lb_generic_lswitch_rules(
> +    struct hmap *lflows, struct ovn_controller_lb *lb,
> +    struct ds *match, struct ds *action, uint32_t *lflow_uuid_idx);
> +static void build_lb_vip_actions(struct ovn_lb_vip *, struct ds *action,
> +                                 char *selection_fields, bool ls_dp);
> +static void build_lb_generic_lrouter_flows(struct hmap *lflows,
> +                                           struct ovn_controller_lb *lb,
> +                                           struct ds *match,
> +                                           uint32_t *lflow_uuid_idx);
> +
> +static void ovn_ctrl_build_lb_lswitch_lflows(struct hmap *lswitch_lflows,
> +                                             struct ovn_controller_lb *ovn_lb)
> +{
> +    struct ds match = DS_EMPTY_INITIALIZER;
> +    struct ds action = DS_EMPTY_INITIALIZER;
> +
> +    uint32_t lflow_uuid_idx = 1;
> +    build_lb_generic_lswitch_rules(lswitch_lflows, ovn_lb, &match, &action,
> +                                   &lflow_uuid_idx);
> +
> +    ds_destroy(&match);
> +    ds_destroy(&action);
> +}
> +
> +static void ovn_ctrl_build_lb_lrouter_lflows(struct hmap *lrouter_lflows,
> +                                             struct ovn_controller_lb *ovn_lb)
> +{
> +    struct ds match = DS_EMPTY_INITIALIZER;
> +
> +    uint32_t lflow_uuid_idx = 1;
> +    build_lb_generic_lrouter_flows(lrouter_lflows, ovn_lb, &match,
> +                                   &lflow_uuid_idx);
> +
> +    ds_destroy(&match);
> +}
> +
> +static void
> +build_lb_generic_lswitch_rules(struct hmap *lflows,
> +                               struct ovn_controller_lb *lb,
> +                               struct ds *match, struct ds *action,
> +                               uint32_t *lflow_uuid_idx)
> +{
> +    for (size_t i = 0; i < lb->n_vips; i++) {
> +        struct ovn_lb_vip *lb_vip = &lb->vips[i];
> +        /* If health check is enabled on this vip, skip adding flows;
> +         * ovn-northd takes care of it. */
> +        char *vip_key = xasprintf("%s_hc", lb_vip->vip_str);
> +        if (smap_get_bool(&lb->slb->options, vip_key, false)) {
> +            free(vip_key);
> +            continue;
> +        }
> +        free(vip_key);
> +
> +        const char *ip_match = NULL;
> +
> +        ds_clear(action);
> +        ds_clear(match);
> +
> +        /* Store the original destination IP to be used when generating
> +         * hairpin flows.
> +         */
> +        if (IN6_IS_ADDR_V4MAPPED(&lb_vip->vip)) {
> +            ip_match = "ip4";
> +            ds_put_format(action, REG_ORIG_DIP_IPV4 " = %s; ",
> +                          lb_vip->vip_str);
> +        } else {
> +            ip_match = "ip6";
> +            ds_put_format(action, REG_ORIG_DIP_IPV6 " = %s; ",
> +                          lb_vip->vip_str);
> +        }
> +
> +        const char *proto = NULL;
> +        if (lb_vip->vip_port) {
> +            proto = "tcp";
> +            if (lb->slb->protocol) {
> +                if (!strcmp(lb->slb->protocol, "udp")) {
> +                    proto = "udp";
> +                } else if (!strcmp(lb->slb->protocol, "sctp")) {
> +                    proto = "sctp";
> +                }
> +            }
> +
> +            /* Store the original destination port to be used when generating
> +             * hairpin flows.
> +             */
> +            ds_put_format(action, REG_ORIG_TP_DPORT " = %"PRIu16"; ",
> +                          lb_vip->vip_port);
> +        }
> +
> +        /* New connections in Ingress table. */
> +
> +        build_lb_vip_actions(lb_vip, action,
> +                             lb->selection_fields, true);
> +
> +        ds_put_format(match, "ct.new && %s.dst == %s", ip_match,
> +                      lb_vip->vip_str);
> +        if (lb_vip->vip_port) {
> +            ds_put_format(match, " && %s.dst == %d", proto, lb_vip->vip_port);
> +            ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_STATEFUL, 120,
> +                                    ds_cstr(match), ds_cstr(action),
> +                                    &lb->slb->header_.uuid, lflow_uuid_idx);
> +        } else {
> +            ovn_ctrl_lflow_add_uuid(lflows, S_SWITCH_IN_STATEFUL, 110,
> +                                    ds_cstr(match), ds_cstr(action),
> +                                    &lb->slb->header_.uuid, lflow_uuid_idx);
> +        }
> +    }
> +}
> +
> +static void
> +build_lb_vip_actions(struct ovn_lb_vip *lb_vip, struct ds *action,
> +                     char *selection_fields, bool ls_dp)
> +{
> +    bool skip_hash_fields = false, reject = false;
> +
> +    if (lb_vip->empty_backend_rej && !lb_vip->n_backends) {
> +        reject = true;
> +    } else {
> +        ds_put_format(action, "ct_lb(backends=%s);", lb_vip->backend_ips);
> +    }
> +
> +    if (reject) {
> +        int stage = ls_dp ? ovn_stage_get_table(S_SWITCH_OUT_QOS_MARK)
> +                          : ovn_stage_get_table(S_ROUTER_OUT_SNAT);
> +        ds_clear(action);
> +        ds_put_format(action, "reg0 = 0; reject { outport <-> inport; "
> +                      "next(pipeline=egress,table=%d);};", stage);
> +    } else if (!skip_hash_fields && selection_fields && selection_fields[0]) {
> +        ds_chomp(action, ';');
> +        ds_chomp(action, ')');
> +        ds_put_format(action, "; hash_fields=\"%s\");", selection_fields);
> +    }
> +}
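
(Side note: to make the output concrete, for a hypothetical VIP whose backends
string is "10.0.0.2:80,10.0.0.3:80" and whose load balancer has
selection_fields "ip_src,ip_dst", this function first appends

    ct_lb(backends=10.0.0.2:80,10.0.0.3:80);

to whatever the caller already put in 'action', and the two ds_chomp() calls
then reopen it so the hash fields can be tacked on, ending up as

    ct_lb(backends=10.0.0.2:80,10.0.0.3:80; hash_fields="ip_src,ip_dst");

Backends and fields are invented; only the shape matters.)
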
> +
> +static void
> +build_lb_generic_lrouter_flows(struct hmap *lflows,
> +                               struct ovn_controller_lb *lb,
> +                               struct ds *match,
> +                               uint32_t *lflow_uuid_idx)
> +{
> +    /* A set to hold all ips that need defragmentation and tracking. */
> +    struct sset all_ips = SSET_INITIALIZER(&all_ips);
> +
> +    bool lb_skip_snat = smap_get_bool(&lb->slb->options, "skip_snat", false);
> +    if (lb_skip_snat) {
> +        ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_OUT_SNAT, 120,
> +                                "flags.skip_snat_for_lb == 1 && ip", "next;",
> +                                &lb->slb->header_.uuid, lflow_uuid_idx);
> +    }
> +
> +    for (size_t j = 0; j < lb->n_vips; j++) {
> +        struct ovn_lb_vip *lb_vip = &lb->vips[j];
> +        if (!sset_contains(&all_ips, lb_vip->vip_str)) {
> +            sset_add(&all_ips, lb_vip->vip_str);
> +            /* If there are any load balancing rules, we should send
> +             * the packet to conntrack for defragmentation and
> +             * tracking.  This helps with two things.
> +             *
> +             * 1. With tracking, we can send only new connections to
> +             *    pick a DNAT ip address from a group.
> +             * 2. If there are L4 ports in load balancing rules, we
> +             *    need the defragmentation to match on L4 ports. */
> +            ds_clear(match);
> +            if (IN6_IS_ADDR_V4MAPPED(&lb_vip->vip)) {
> +                ds_put_format(match, "ip && ip4.dst == %s",
> +                                lb_vip->vip_str);
> +            } else {
> +                ds_put_format(match, "ip && ip6.dst == %s",
> +                                lb_vip->vip_str);
> +            }
> +
> +            ovn_ctrl_lflow_add_uuid(lflows, S_ROUTER_IN_DEFRAG,
> +                                    100, ds_cstr(match), "ct_next;",
> +                                    &lb->slb->header_.uuid, lflow_uuid_idx);
> +        }
> +    }
> +
> +    sset_destroy(&all_ips);
> +}
> diff --git a/lib/lflow.h b/lib/lflow.h
> new file mode 100644
> index 0000000000..0cec2794cf
> --- /dev/null
> +++ b/lib/lflow.h
> @@ -0,0 +1,333 @@
> +/*
> + * Copyright (c) 2021 Red Hat, Inc.
> + *
> + * Licensed under the Apache License, Version 2.0 (the "License");
> + * you may not use this file except in compliance with the License.
> + * You may obtain a copy of the License at:
> + *
> + *     http://www.apache.org/licenses/LICENSE-2.0
> + *
> + * Unless required by applicable law or agreed to in writing, software
> + * distributed under the License is distributed on an "AS IS" BASIS,
> + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> + * See the License for the specific language governing permissions and
> + * limitations under the License.
> + */
> +
> +#ifndef OVN_LIB_LFLOW_H
> +#define OVN_LIB_LFLOW_H 1
> +
> +#include "lib/util.h"
> +#include "openvswitch/hmap.h"
> +#include "openvswitch/uuid.h"
> +
> +struct sbrec_datapath_binding;
> +struct sbrec_port_binding;
> +struct hmap;
> +struct ofpbuf;
> +struct local_datapath;
> +struct local_lport;
> +struct ovn_controller_lb;
> +
> +/* Pipeline stages. */
> +
> +/* The two pipelines in an OVN logical flow table. */
> +enum ovn_pipeline {
> +    P_IN,                       /* Ingress pipeline. */
> +    P_OUT                       /* Egress pipeline. */
> +};
> +
> +/* The two purposes for which ovn-northd uses OVN logical datapaths. */
> +enum ovn_datapath_type {
> +    DP_SWITCH,                  /* OVN logical switch. */
> +    DP_ROUTER                   /* OVN logical router. */
> +};
> +
> +/* Returns an "enum ovn_stage" built from the arguments.
> + *
> + * (It's better to use ovn_stage_build() for type-safety reasons, but inline
> + * functions can't be used in enums or switch cases.) */
> +#define OVN_STAGE_BUILD(DP_TYPE, PIPELINE, TABLE) \
> +    (((DP_TYPE) << 9) | ((PIPELINE) << 8) | (TABLE))
> +
> +/* A stage within an OVN logical switch or router.
> + *
> + * An "enum ovn_stage" indicates whether the stage is part of a logical switch
> + * or router, whether the stage is part of the ingress or egress pipeline, and
> + * the table within that pipeline.  The first three components are combined to
> + * form the stage's full name, e.g. S_SWITCH_IN_PORT_SEC_L2,
> + * S_ROUTER_OUT_DELIVERY. */
> +enum ovn_stage {
> +#define PIPELINE_STAGES                                                   \
> +    /* Logical switch ingress stages. */                                  \
> +    PIPELINE_STAGE(SWITCH, IN,  PORT_SEC_L2,    0, "ls_in_port_sec_l2")   \
> +    PIPELINE_STAGE(SWITCH, IN,  PORT_SEC_IP,    1, "ls_in_port_sec_ip")   \
> +    PIPELINE_STAGE(SWITCH, IN,  PORT_SEC_ND,    2, "ls_in_port_sec_nd")   \
> +    PIPELINE_STAGE(SWITCH, IN,  LOOKUP_FDB ,    3, "ls_in_lookup_fdb")    \
> +    PIPELINE_STAGE(SWITCH, IN,  PUT_FDB,        4, "ls_in_put_fdb")       \
> +    PIPELINE_STAGE(SWITCH, IN,  PRE_ACL,        5, "ls_in_pre_acl")       \
> +    PIPELINE_STAGE(SWITCH, IN,  PRE_LB,         6, "ls_in_pre_lb")        \
> +    PIPELINE_STAGE(SWITCH, IN,  PRE_STATEFUL,   7, "ls_in_pre_stateful")  \
> +    PIPELINE_STAGE(SWITCH, IN,  ACL_HINT,       8, "ls_in_acl_hint")      \
> +    PIPELINE_STAGE(SWITCH, IN,  ACL,            9, "ls_in_acl")           \
> +    PIPELINE_STAGE(SWITCH, IN,  QOS_MARK,      10, "ls_in_qos_mark")      \
> +    PIPELINE_STAGE(SWITCH, IN,  QOS_METER,     11, "ls_in_qos_meter")     \
> +    PIPELINE_STAGE(SWITCH, IN,  STATEFUL,      12, "ls_in_stateful")      \
> +    PIPELINE_STAGE(SWITCH, IN,  PRE_HAIRPIN,   13, "ls_in_pre_hairpin")   \
> +    PIPELINE_STAGE(SWITCH, IN,  NAT_HAIRPIN,   14, "ls_in_nat_hairpin")   \
> +    PIPELINE_STAGE(SWITCH, IN,  HAIRPIN,       15, "ls_in_hairpin")       \
> +    PIPELINE_STAGE(SWITCH, IN,  ARP_ND_RSP,    16, "ls_in_arp_rsp")       \
> +    PIPELINE_STAGE(SWITCH, IN,  DHCP_OPTIONS,  17, "ls_in_dhcp_options")  \
> +    PIPELINE_STAGE(SWITCH, IN,  DHCP_RESPONSE, 18, "ls_in_dhcp_response") \
> +    PIPELINE_STAGE(SWITCH, IN,  DNS_LOOKUP,    19, "ls_in_dns_lookup")    \
> +    PIPELINE_STAGE(SWITCH, IN,  DNS_RESPONSE,  20, "ls_in_dns_response")  \
> +    PIPELINE_STAGE(SWITCH, IN,  EXTERNAL_PORT, 21, "ls_in_external_port") \
> +    PIPELINE_STAGE(SWITCH, IN,  L2_LKUP,       22, "ls_in_l2_lkup")       \
> +    PIPELINE_STAGE(SWITCH, IN,  L2_UNKNOWN,    23, "ls_in_l2_unknown")    \
> +                                                                          \
> +    /* Logical switch egress stages. */                                   \
> +    PIPELINE_STAGE(SWITCH, OUT, PRE_LB,       0, "ls_out_pre_lb")         \
> +    PIPELINE_STAGE(SWITCH, OUT, PRE_ACL,      1, "ls_out_pre_acl")        \
> +    PIPELINE_STAGE(SWITCH, OUT, PRE_STATEFUL, 2, "ls_out_pre_stateful")   \
> +    PIPELINE_STAGE(SWITCH, OUT, ACL_HINT,     3, "ls_out_acl_hint")       \
> +    PIPELINE_STAGE(SWITCH, OUT, ACL,          4, "ls_out_acl")            \
> +    PIPELINE_STAGE(SWITCH, OUT, QOS_MARK,     5, "ls_out_qos_mark")       \
> +    PIPELINE_STAGE(SWITCH, OUT, QOS_METER,    6, "ls_out_qos_meter")      \
> +    PIPELINE_STAGE(SWITCH, OUT, STATEFUL,     7, "ls_out_stateful")       \
> +    PIPELINE_STAGE(SWITCH, OUT, PORT_SEC_IP,  8, "ls_out_port_sec_ip")    \
> +    PIPELINE_STAGE(SWITCH, OUT, PORT_SEC_L2,  9, "ls_out_port_sec_l2")    \
> +                                                                      \
> +    /* Logical router ingress stages. */                              \
> +    PIPELINE_STAGE(ROUTER, IN,  ADMISSION,       0, "lr_in_admission")    \
> +    PIPELINE_STAGE(ROUTER, IN,  LOOKUP_NEIGHBOR, 1, "lr_in_lookup_neighbor") \
> +    PIPELINE_STAGE(ROUTER, IN,  LEARN_NEIGHBOR,  2, "lr_in_learn_neighbor") \
> +    PIPELINE_STAGE(ROUTER, IN,  IP_INPUT,        3, "lr_in_ip_input")     \
> +    PIPELINE_STAGE(ROUTER, IN,  UNSNAT,          4, "lr_in_unsnat")       \
> +    PIPELINE_STAGE(ROUTER, IN,  DEFRAG,          5, "lr_in_defrag")       \
> +    PIPELINE_STAGE(ROUTER, IN,  DNAT,            6, "lr_in_dnat")         \
> +    PIPELINE_STAGE(ROUTER, IN,  ECMP_STATEFUL,   7, "lr_in_ecmp_stateful") \
> +    PIPELINE_STAGE(ROUTER, IN,  ND_RA_OPTIONS,   8, "lr_in_nd_ra_options") \
> +    PIPELINE_STAGE(ROUTER, IN,  ND_RA_RESPONSE,  9, "lr_in_nd_ra_response") \
> +    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING,      10, "lr_in_ip_routing")   \
> +    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_ECMP, 11, "lr_in_ip_routing_ecmp") \
> +    PIPELINE_STAGE(ROUTER, IN,  POLICY,          12, "lr_in_policy")       \
> +    PIPELINE_STAGE(ROUTER, IN,  POLICY_ECMP,     13, "lr_in_policy_ecmp")  \
> +    PIPELINE_STAGE(ROUTER, IN,  ARP_RESOLVE,     14, "lr_in_arp_resolve")  \
> +    PIPELINE_STAGE(ROUTER, IN,  CHK_PKT_LEN   ,  15, "lr_in_chk_pkt_len")  \
> +    PIPELINE_STAGE(ROUTER, IN,  LARGER_PKTS,     16, "lr_in_larger_pkts")  \
> +    PIPELINE_STAGE(ROUTER, IN,  GW_REDIRECT,     17, "lr_in_gw_redirect")  \
> +    PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     18, "lr_in_arp_request")  \
> +                                                                      \
> +    /* Logical router egress stages. */                               \
> +    PIPELINE_STAGE(ROUTER, OUT, UNDNAT,    0, "lr_out_undnat")        \
> +    PIPELINE_STAGE(ROUTER, OUT, SNAT,      1, "lr_out_snat")          \
> +    PIPELINE_STAGE(ROUTER, OUT, EGR_LOOP,  2, "lr_out_egr_loop")      \
> +    PIPELINE_STAGE(ROUTER, OUT, DELIVERY,  3, "lr_out_delivery")
> +
> +#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)   \
> +    S_##DP_TYPE##_##PIPELINE##_##STAGE                          \
> +        = OVN_STAGE_BUILD(DP_##DP_TYPE, P_##PIPELINE, TABLE),
> +    PIPELINE_STAGES
> +#undef PIPELINE_STAGE
> +};
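
(Side note, not part of the patch: since the stage value packs three fields
into one integer, here is a tiny standalone sketch of the arithmetic for
lr_in_ip_input, assuming DP_ROUTER == 1 and P_IN == 0 as the enums above
imply:

    #include <stdio.h>

    #define OVN_STAGE_BUILD(DP_TYPE, PIPELINE, TABLE) \
        (((DP_TYPE) << 9) | ((PIPELINE) << 8) | (TABLE))

    int
    main(void)
    {
        /* lr_in_ip_input: router datapath (1), ingress pipeline (0), table 3. */
        unsigned int stage = OVN_STAGE_BUILD(1, 0, 3);

        printf("stage=%u pipeline=%u table=%u\n",
               stage, (stage >> 8) & 1, stage & 0xff);
        /* Prints: stage=515 pipeline=0 table=3 */
        return 0;
    }

This is exactly what ovn_stage_get_pipeline() and ovn_stage_get_table()
further down undo.)
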
> +
> +
> +/* Due to various hard-coded priorities needed to implement ACLs, the
> + * northbound database supports a smaller range of ACL priorities than
> + * are available to logical flows.  This value is added to an ACL
> + * priority to determine the ACL's logical flow priority. */
> +#define OVN_ACL_PRI_OFFSET 1000
> +
> +/* Register definitions specific to switches. */
> +#define REGBIT_CONNTRACK_DEFRAG   "reg0[0]"
> +#define REGBIT_CONNTRACK_COMMIT   "reg0[1]"
> +#define REGBIT_CONNTRACK_NAT      "reg0[2]"
> +#define REGBIT_DHCP_OPTS_RESULT   "reg0[3]"
> +#define REGBIT_DNS_LOOKUP_RESULT  "reg0[4]"
> +#define REGBIT_ND_RA_OPTS_RESULT  "reg0[5]"
> +#define REGBIT_HAIRPIN            "reg0[6]"
> +#define REGBIT_ACL_HINT_ALLOW_NEW "reg0[7]"
> +#define REGBIT_ACL_HINT_ALLOW     "reg0[8]"
> +#define REGBIT_ACL_HINT_DROP      "reg0[9]"
> +#define REGBIT_ACL_HINT_BLOCK     "reg0[10]"
> +#define REGBIT_LKUP_FDB           "reg0[11]"
> +#define REGBIT_HAIRPIN_REPLY      "reg0[12]"
> +
> +#define REG_ORIG_DIP_IPV4         "reg1"
> +#define REG_ORIG_DIP_IPV6         "xxreg1"
> +#define REG_ORIG_TP_DPORT         "reg2[0..15]"
> +
> +/* Register definitions for switches and routers. */
> +
> +/* Indicate that this packet has been recirculated using egress
> + * loopback.  This allows certain checks to be bypassed, such as a
> + * logical router dropping packets with a source IP address equal to
> + * one of the logical router's own IP addresses. */
> +#define REGBIT_EGRESS_LOOPBACK  "reg9[0]"
> +/* Register to store the result of check_pkt_larger action. */
> +#define REGBIT_PKT_LARGER        "reg9[1]"
> +#define REGBIT_LOOKUP_NEIGHBOR_RESULT "reg9[2]"
> +#define REGBIT_LOOKUP_NEIGHBOR_IP_RESULT "reg9[3]"
> +
> +/* Register to store the eth address associated to a router port for packets
> + * received in S_ROUTER_IN_ADMISSION.
> + */
> +#define REG_INPORT_ETH_ADDR "xreg0[0..47]"
> +
> +/* Register for ECMP bucket selection. */
> +#define REG_ECMP_GROUP_ID       "reg8[0..15]"
> +#define REG_ECMP_MEMBER_ID      "reg8[16..31]"
> +
> +/* Registers used for routing. */
> +#define REG_NEXT_HOP_IPV4 "reg0"
> +#define REG_NEXT_HOP_IPV6 "xxreg0"
> +#define REG_SRC_IPV4 "reg1"
> +#define REG_SRC_IPV6 "xxreg1"
> +
> +#define FLAGBIT_NOT_VXLAN "flags[1] == 0"
> +
> +/*
> + * OVS register usage:
> + *
> + * Logical Switch pipeline:
> + * +---------+----------------------------------------------+
> + * | R0      |     REGBIT_{CONNTRACK/DHCP/DNS/HAIRPIN}      |
> + * |         | REGBIT_ACL_HINT_{ALLOW_NEW/ALLOW/DROP/BLOCK} |
> + * +---------+----------------------------------------------+
> + * | R1 - R9 |                   UNUSED                     |
> + * +---------+----------------------------------------------+
> + *
> + * Logical Router pipeline:
> + * +-----+--------------------------+---+-----------------+---+---------------+
> + * | R0  | REGBIT_ND_RA_OPTS_RESULT |   |                 |   |               |
> + * |     |   (= IN_ND_RA_OPTIONS)   | X |                 |   |               |
> + * |     |      NEXT_HOP_IPV4       | R |                 |   |               |
> + * |     |      (>= IP_INPUT)       | E | INPORT_ETH_ADDR | X |               |
> + * +-----+--------------------------+ G |   (< IP_INPUT)  | X |               |
> + * | R1  |   SRC_IPV4 for ARP-REQ   | 0 |                 | R |               |
> + * |     |      (>= IP_INPUT)       |   |                 | E | NEXT_HOP_IPV6 |
> + * +-----+--------------------------+---+-----------------+ G | (>= IP_INPUT) |
> + * | R2  |        UNUSED            | X |                 | 0 |               |
> + * |     |                          | R |                 |   |               |
> + * +-----+--------------------------+ E |     UNUSED      |   |               |
> + * | R3  |        UNUSED            | G |                 |   |               |
> + * |     |                          | 1 |                 |   |               |
> + * +-----+--------------------------+---+-----------------+---+---------------+
> + * | R4  |        UNUSED            | X |                 |   |               |
> + * |     |                          | R |                 |   |               |
> + * +-----+--------------------------+ E |     UNUSED      | X |               |
> + * | R5  |        UNUSED            | G |                 | X |               |
> + * |     |                          | 2 |                 | R |SRC_IPV6 for NS|
> + * +-----+--------------------------+---+-----------------+ E | (>= IP_INPUT) |
> + * | R6  |        UNUSED            | X |                 | G |               |
> + * |     |                          | R |                 | 1 |               |
> + * +-----+--------------------------+ E |     UNUSED      |   |               |
> + * | R7  |        UNUSED            | G |                 |   |               |
> + * |     |                          | 3 |                 |   |               |
> + * +-----+--------------------------+---+-----------------+---+---------------+
> + * | R8  |     ECMP_GROUP_ID        |   |                 |
> + * |     |     ECMP_MEMBER_ID       | X |                 |
> + * +-----+--------------------------+ R |                 |
> + * |     | REGBIT_{                 | E |                 |
> + * |     |   EGRESS_LOOPBACK/       | G |     UNUSED      |
> + * | R9  |   PKT_LARGER/            | 4 |                 |
> + * |     |   LOOKUP_NEIGHBOR_RESULT/|   |                 |
> + * |     |   SKIP_LOOKUP_NEIGHBOR}  |   |                 |
> + * +-----+--------------------------+---+-----------------+
> + *
> + */
> +
> +/* Returns an "enum ovn_stage" built from the arguments. */
> +static inline enum ovn_stage
> +ovn_stage_build(enum ovn_datapath_type dp_type, enum ovn_pipeline pipeline,
> +                uint8_t table)
> +{
> +    return OVN_STAGE_BUILD(dp_type, pipeline, table);
> +}
> +
> +/* Returns the pipeline to which 'stage' belongs. */
> +static inline enum ovn_pipeline
> +ovn_stage_get_pipeline(enum ovn_stage stage)
> +{
> +    return (stage >> 8) & 1;
> +}
> +
> +/* Returns the pipeline name to which 'stage' belongs. */
> +static inline const char *
> +ovn_stage_get_pipeline_name(enum ovn_stage stage)
> +{
> +    return ovn_stage_get_pipeline(stage) == P_IN ? "ingress" : "egress";
> +}
> +
> +/* Returns the table to which 'stage' belongs. */
> +static inline uint8_t
> +ovn_stage_get_table(enum ovn_stage stage)
> +{
> +    return stage & 0xff;
> +}
> +
> +/* Returns a string name for 'stage'. */
> +static inline const char *
> +ovn_stage_to_str(enum ovn_stage stage)
> +{
> +    switch (stage) {
> +#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)       \
> +        case S_##DP_TYPE##_##PIPELINE##_##STAGE: return NAME;
> +    PIPELINE_STAGES
> +#undef PIPELINE_STAGE
> +        default: return "<unknown>";
> +    }
> +}
> +
> +/* Returns the type of the datapath to which a flow with the given 'stage' may
> + * be added. */
> +static inline enum ovn_datapath_type
> +ovn_stage_to_datapath_type(enum ovn_stage stage)
> +{
> +    switch (stage) {
> +#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)       \
> +        case S_##DP_TYPE##_##PIPELINE##_##STAGE: return DP_##DP_TYPE;
> +    PIPELINE_STAGES
> +#undef PIPELINE_STAGE
> +    default: OVS_NOT_REACHED();
> +    }
> +}
> +
> +#define MC_FLOOD "_MC_flood"
> +#define MC_MROUTER_FLOOD "_MC_mrouter_flood"
> +#define MC_MROUTER_STATIC "_MC_mrouter_static"
> +#define MC_STATIC "_MC_static"
> +#define MC_UNKNOWN "_MC_unknown"
> +#define MC_FLOOD_L2 "_MC_flood_l2"
> +
> +struct ovn_ctrl_lflow {
> +    struct hmap_node hmap_node;
> +    struct uuid uuid_;
> +
> +    uint32_t dp_key; /* Datapath tunnel key. */
> +    enum ovn_stage stage;
> +    uint16_t priority;
> +    char *match;
> +    char *actions;
> +    char *stage_hint;
> +    const char *where;
> +};
> +
> +size_t ovn_ctrl_lflow_hash(const struct ovn_ctrl_lflow *);
> +void build_lswitch_generic_lflows(struct hmap *lflows);
> +void build_lrouter_generic_lflows(struct hmap *lflows);
> +
> +void ovn_ctrl_lflows_build_dp_lflows(
> +    struct hmap *lflows, struct local_datapath *);
> +
> +void ovn_ctrl_lflows_clear(struct hmap *lflows);
> +void ovn_ctrl_lflows_destroy(struct hmap *lflows);
> +
> +void ovn_ctrl_build_lport_lflows(
> +    struct hmap *lflows, struct local_lport *);
> +
> +void ovn_ctrl_build_lb_lflows(struct hmap *lswitch_lflows,
> +                              struct hmap *lrouter_lflows,
> +                              struct ovn_controller_lb *);
> +
> +#endif /* OVN_LIB_LFLOW_H */
> diff --git a/lib/ovn-util.c b/lib/ovn-util.c
> index c5af8d1ab3..8e34a5c362 100644
> --- a/lib/ovn-util.c
> +++ b/lib/ovn-util.c
> @@ -324,6 +324,14 @@ extract_lrp_networks__(char *mac, char **networks, size_t n_networks,
>       return true;
>   }
>   
> +/* Appends the IPv6 address 'addr' to the already allocated 'laddrs'. */
> +void
> +lport_addr_add_ip6ddr(struct lport_addresses *laddrs, struct in6_addr addr,
> +                      unsigned int plen)
> +{
> +    add_ipv6_netaddr(laddrs, addr, plen);
> +}
> +
>   bool
>   extract_sbrec_binding_first_mac(const struct sbrec_port_binding *binding,
>                                   struct eth_addr *ea)
> @@ -367,6 +375,10 @@ destroy_lport_addresses(struct lport_addresses *laddrs)
>   {
>       free(laddrs->ipv4_addrs);
>       free(laddrs->ipv6_addrs);
> +    laddrs->ipv4_addrs = NULL;
> +    laddrs->ipv6_addrs = NULL;
> +    laddrs->n_ipv4_addrs = 0;
> +    laddrs->n_ipv6_addrs = 0;
>   }
>   
>   /* Go through 'addresses' and add found IPv4 addresses to 'ipv4_addrs' and
> @@ -785,3 +797,74 @@ ddlog_err(const char *msg)
>       VLOG_ERR("%s", msg);
>   }
>   #endif
> +
> +const struct sbrec_port_binding *
> +lport_lookup_by_name(struct ovsdb_idl_index *sbrec_port_binding_by_name,
> +                     const char *name)
> +{
> +    struct sbrec_port_binding *pb = sbrec_port_binding_index_init_row(
> +        sbrec_port_binding_by_name);
> +    sbrec_port_binding_index_set_logical_port(pb, name);
> +
> +    const struct sbrec_port_binding *retval = sbrec_port_binding_index_find(
> +        sbrec_port_binding_by_name, pb);
> +
> +    sbrec_port_binding_index_destroy_row(pb);
> +
> +    return retval;
> +}
> +
> +const struct sbrec_port_binding *
> +lport_get_peer(const struct sbrec_port_binding *pb,
> +               struct ovsdb_idl_index *sbrec_port_binding_by_name)
> +{
> +    const char *peer_name = smap_get(&pb->options, "peer");
> +
> +    if (!peer_name) {
> +        return NULL;
> +    }
> +
> +    const struct sbrec_port_binding *peer;
> +    peer = lport_lookup_by_name(sbrec_port_binding_by_name,
> +                                peer_name);
> +    return (peer && peer->datapath) ? peer : NULL;
> +}
> +
> +static bool
> +is_lport_vif(const struct sbrec_port_binding *pb)
> +{
> +    return !pb->type[0];
> +}
> +
> +enum en_lport_type
> +get_lport_type(const struct sbrec_port_binding *pb)
> +{
> +    if (is_lport_vif(pb)) {
> +        if (pb->parent_port && pb->parent_port[0]) {
> +            return LP_CONTAINER;
> +        }
> +        return LP_VIF;
> +    } else if (!strcmp(pb->type, "patch")) {
> +        return LP_PATCH;
> +    } else if (!strcmp(pb->type, "chassisredirect")) {
> +        return LP_CHASSISREDIRECT;
> +    } else if (!strcmp(pb->type, "l3gateway")) {
> +        return LP_L3GATEWAY;
> +    } else if (!strcmp(pb->type, "localnet")) {
> +        return LP_LOCALNET;
> +    } else if (!strcmp(pb->type, "localport")) {
> +        return LP_LOCALPORT;
> +    } else if (!strcmp(pb->type, "l2gateway")) {
> +        return LP_L2GATEWAY;
> +    } else if (!strcmp(pb->type, "virtual")) {
> +        return LP_VIRTUAL;
> +    } else if (!strcmp(pb->type, "external")) {
> +        return LP_EXTERNAL;
> +    } else if (!strcmp(pb->type, "remote")) {
> +        return LP_REMOTE;
> +    } else if (!strcmp(pb->type, "vtep")) {
> +        return LP_VTEP;
> +    }
> +
> +    return LP_UNKNOWN;
> +}
> diff --git a/lib/ovn-util.h b/lib/ovn-util.h
> index 9935cad34c..ea07097573 100644
> --- a/lib/ovn-util.h
> +++ b/lib/ovn-util.h
> @@ -80,6 +80,8 @@ bool extract_sbrec_binding_first_mac(const struct sbrec_port_binding *binding,
>   
>   bool extract_lrp_networks__(char *mac, char **networks, size_t n_networks,
>                               struct lport_addresses *laddrs);
> +void lport_addr_add_ip6ddr(struct lport_addresses *laddrs,
> +                           struct in6_addr addr, unsigned int plen);
>   
>   bool lport_addresses_is_empty(struct lport_addresses *);
>   void destroy_lport_addresses(struct lport_addresses *);
> @@ -279,4 +281,34 @@ void ddlog_warn(const char *msg);
>   void ddlog_err(const char *msg);
>   #endif
>   
> +struct sbrec_port_binding;
> +struct ovsdb_idl_index;
> +
> +const struct sbrec_port_binding *lport_lookup_by_name(
> +    struct ovsdb_idl_index *sbrec_port_binding_by_name,
> +    const char *name);
> +
> +const struct sbrec_port_binding *lport_get_peer(
> +    const struct sbrec_port_binding *pb,
> +    struct ovsdb_idl_index *sbrec_port_binding_by_name);
> +
> +/* Corresponds to each Port_Binding.type. */
> +enum en_lport_type {
> +    LP_UNKNOWN,
> +    LP_VIF,
> +    LP_CONTAINER,
> +    LP_PATCH,
> +    LP_L3GATEWAY,
> +    LP_LOCALNET,
> +    LP_LOCALPORT,
> +    LP_L2GATEWAY,
> +    LP_VTEP,
> +    LP_CHASSISREDIRECT,
> +    LP_VIRTUAL,
> +    LP_EXTERNAL,
> +    LP_REMOTE
> +};
> +
> +enum en_lport_type get_lport_type(const struct sbrec_port_binding *);
> +
>   #endif
> diff --git a/northd/ovn-northd.c b/northd/ovn-northd.c
> index df42c1824b..b1c1fcbce5 100644
> --- a/northd/ovn-northd.c
> +++ b/northd/ovn-northd.c
> @@ -38,6 +38,7 @@
>   #include "lib/ovn-sb-idl.h"
>   #include "lib/ovn-util.h"
>   #include "lib/lb.h"
> +#include "lib/lflow.h"
>   #include "memory.h"
>   #include "lib/ovn-parallel-hmap.h"
>   #include "ovn/actions.h"
> @@ -113,286 +114,7 @@ static const char *ssl_certificate_file;
>   static const char *ssl_ca_cert_file;
>   
>   #define MAX_OVN_TAGS 4096
> -

> -/* Pipeline stages. */
> -
> -/* The two pipelines in an OVN logical flow table. */
> -enum ovn_pipeline {
> -    P_IN,                       /* Ingress pipeline. */
> -    P_OUT                       /* Egress pipeline. */
> -};
> -
> -/* The two purposes for which ovn-northd uses OVN logical datapaths. */
> -enum ovn_datapath_type {
> -    DP_SWITCH,                  /* OVN logical switch. */
> -    DP_ROUTER                   /* OVN logical router. */
> -};
> -
> -/* Returns an "enum ovn_stage" built from the arguments.
> - *
> - * (It's better to use ovn_stage_build() for type-safety reasons, but inline
> - * functions can't be used in enums or switch cases.) */
> -#define OVN_STAGE_BUILD(DP_TYPE, PIPELINE, TABLE) \
> -    (((DP_TYPE) << 9) | ((PIPELINE) << 8) | (TABLE))
> -
> -/* A stage within an OVN logical switch or router.
> - *
> - * An "enum ovn_stage" indicates whether the stage is part of a logical switch
> - * or router, whether the stage is part of the ingress or egress pipeline, and
> - * the table within that pipeline.  The first three components are combined to
> - * form the stage's full name, e.g. S_SWITCH_IN_PORT_SEC_L2,
> - * S_ROUTER_OUT_DELIVERY. */
> -enum ovn_stage {
> -#define PIPELINE_STAGES                                                   \
> -    /* Logical switch ingress stages. */                                  \
> -    PIPELINE_STAGE(SWITCH, IN,  PORT_SEC_L2,    0, "ls_in_port_sec_l2")   \
> -    PIPELINE_STAGE(SWITCH, IN,  PORT_SEC_IP,    1, "ls_in_port_sec_ip")   \
> -    PIPELINE_STAGE(SWITCH, IN,  PORT_SEC_ND,    2, "ls_in_port_sec_nd")   \
> -    PIPELINE_STAGE(SWITCH, IN,  LOOKUP_FDB ,    3, "ls_in_lookup_fdb")    \
> -    PIPELINE_STAGE(SWITCH, IN,  PUT_FDB,        4, "ls_in_put_fdb")       \
> -    PIPELINE_STAGE(SWITCH, IN,  PRE_ACL,        5, "ls_in_pre_acl")       \
> -    PIPELINE_STAGE(SWITCH, IN,  PRE_LB,         6, "ls_in_pre_lb")        \
> -    PIPELINE_STAGE(SWITCH, IN,  PRE_STATEFUL,   7, "ls_in_pre_stateful")  \
> -    PIPELINE_STAGE(SWITCH, IN,  ACL_HINT,       8, "ls_in_acl_hint")      \
> -    PIPELINE_STAGE(SWITCH, IN,  ACL,            9, "ls_in_acl")           \
> -    PIPELINE_STAGE(SWITCH, IN,  QOS_MARK,      10, "ls_in_qos_mark")      \
> -    PIPELINE_STAGE(SWITCH, IN,  QOS_METER,     11, "ls_in_qos_meter")     \
> -    PIPELINE_STAGE(SWITCH, IN,  STATEFUL,      12, "ls_in_stateful")      \
> -    PIPELINE_STAGE(SWITCH, IN,  PRE_HAIRPIN,   13, "ls_in_pre_hairpin")   \
> -    PIPELINE_STAGE(SWITCH, IN,  NAT_HAIRPIN,   14, "ls_in_nat_hairpin")   \
> -    PIPELINE_STAGE(SWITCH, IN,  HAIRPIN,       15, "ls_in_hairpin")       \
> -    PIPELINE_STAGE(SWITCH, IN,  ARP_ND_RSP,    16, "ls_in_arp_rsp")       \
> -    PIPELINE_STAGE(SWITCH, IN,  DHCP_OPTIONS,  17, "ls_in_dhcp_options")  \
> -    PIPELINE_STAGE(SWITCH, IN,  DHCP_RESPONSE, 18, "ls_in_dhcp_response") \
> -    PIPELINE_STAGE(SWITCH, IN,  DNS_LOOKUP,    19, "ls_in_dns_lookup")    \
> -    PIPELINE_STAGE(SWITCH, IN,  DNS_RESPONSE,  20, "ls_in_dns_response")  \
> -    PIPELINE_STAGE(SWITCH, IN,  EXTERNAL_PORT, 21, "ls_in_external_port") \
> -    PIPELINE_STAGE(SWITCH, IN,  L2_LKUP,       22, "ls_in_l2_lkup")       \
> -    PIPELINE_STAGE(SWITCH, IN,  L2_UNKNOWN,    23, "ls_in_l2_unknown")    \
> -                                                                          \
> -    /* Logical switch egress stages. */                                   \
> -    PIPELINE_STAGE(SWITCH, OUT, PRE_LB,       0, "ls_out_pre_lb")         \
> -    PIPELINE_STAGE(SWITCH, OUT, PRE_ACL,      1, "ls_out_pre_acl")        \
> -    PIPELINE_STAGE(SWITCH, OUT, PRE_STATEFUL, 2, "ls_out_pre_stateful")   \
> -    PIPELINE_STAGE(SWITCH, OUT, ACL_HINT,     3, "ls_out_acl_hint")       \
> -    PIPELINE_STAGE(SWITCH, OUT, ACL,          4, "ls_out_acl")            \
> -    PIPELINE_STAGE(SWITCH, OUT, QOS_MARK,     5, "ls_out_qos_mark")       \
> -    PIPELINE_STAGE(SWITCH, OUT, QOS_METER,    6, "ls_out_qos_meter")      \
> -    PIPELINE_STAGE(SWITCH, OUT, STATEFUL,     7, "ls_out_stateful")       \
> -    PIPELINE_STAGE(SWITCH, OUT, PORT_SEC_IP,  8, "ls_out_port_sec_ip")    \
> -    PIPELINE_STAGE(SWITCH, OUT, PORT_SEC_L2,  9, "ls_out_port_sec_l2")    \
> -                                                                      \
> -    /* Logical router ingress stages. */                              \
> -    PIPELINE_STAGE(ROUTER, IN,  ADMISSION,       0, "lr_in_admission")    \
> -    PIPELINE_STAGE(ROUTER, IN,  LOOKUP_NEIGHBOR, 1, "lr_in_lookup_neighbor") \
> -    PIPELINE_STAGE(ROUTER, IN,  LEARN_NEIGHBOR,  2, "lr_in_learn_neighbor") \
> -    PIPELINE_STAGE(ROUTER, IN,  IP_INPUT,        3, "lr_in_ip_input")     \
> -    PIPELINE_STAGE(ROUTER, IN,  DEFRAG,          4, "lr_in_defrag")       \
> -    PIPELINE_STAGE(ROUTER, IN,  UNSNAT,          5, "lr_in_unsnat")       \
> -    PIPELINE_STAGE(ROUTER, IN,  DNAT,            6, "lr_in_dnat")         \
> -    PIPELINE_STAGE(ROUTER, IN,  ECMP_STATEFUL,   7, "lr_in_ecmp_stateful") \
> -    PIPELINE_STAGE(ROUTER, IN,  ND_RA_OPTIONS,   8, "lr_in_nd_ra_options") \
> -    PIPELINE_STAGE(ROUTER, IN,  ND_RA_RESPONSE,  9, "lr_in_nd_ra_response") \
> -    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING,      10, "lr_in_ip_routing")   \
> -    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_ECMP, 11, "lr_in_ip_routing_ecmp") \
> -    PIPELINE_STAGE(ROUTER, IN,  POLICY,          12, "lr_in_policy")       \
> -    PIPELINE_STAGE(ROUTER, IN,  POLICY_ECMP,     13, "lr_in_policy_ecmp")  \
> -    PIPELINE_STAGE(ROUTER, IN,  ARP_RESOLVE,     14, "lr_in_arp_resolve")  \
> -    PIPELINE_STAGE(ROUTER, IN,  CHK_PKT_LEN   ,  15, "lr_in_chk_pkt_len")  \
> -    PIPELINE_STAGE(ROUTER, IN,  LARGER_PKTS,     16, "lr_in_larger_pkts")  \
> -    PIPELINE_STAGE(ROUTER, IN,  GW_REDIRECT,     17, "lr_in_gw_redirect")  \
> -    PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     18, "lr_in_arp_request")  \
> -                                                                      \
> -    /* Logical router egress stages. */                               \
> -    PIPELINE_STAGE(ROUTER, OUT, UNDNAT,    0, "lr_out_undnat")        \
> -    PIPELINE_STAGE(ROUTER, OUT, SNAT,      1, "lr_out_snat")          \
> -    PIPELINE_STAGE(ROUTER, OUT, EGR_LOOP,  2, "lr_out_egr_loop")      \
> -    PIPELINE_STAGE(ROUTER, OUT, DELIVERY,  3, "lr_out_delivery")
> -
> -#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)   \
> -    S_##DP_TYPE##_##PIPELINE##_##STAGE                          \
> -        = OVN_STAGE_BUILD(DP_##DP_TYPE, P_##PIPELINE, TABLE),
> -    PIPELINE_STAGES
> -#undef PIPELINE_STAGE
> -};
> -
> -/* Due to various hard-coded priorities need to implement ACLs, the
> - * northbound database supports a smaller range of ACL priorities than
> - * are available to logical flows.  This value is added to an ACL
> - * priority to determine the ACL's logical flow priority. */
> -#define OVN_ACL_PRI_OFFSET 1000
> -
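
(Spelling out the arithmetic above for anyone skimming: an NB ACL at
priority 100 becomes a logical flow at priority 100 + 1000 = 1100 in
ls_in_acl / ls_out_acl.)
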
> -/* Register definitions specific to switches. */
> -#define REGBIT_CONNTRACK_DEFRAG   "reg0[0]"
> -#define REGBIT_CONNTRACK_COMMIT   "reg0[1]"
> -#define REGBIT_CONNTRACK_NAT      "reg0[2]"
> -#define REGBIT_DHCP_OPTS_RESULT   "reg0[3]"
> -#define REGBIT_DNS_LOOKUP_RESULT  "reg0[4]"
> -#define REGBIT_ND_RA_OPTS_RESULT  "reg0[5]"
> -#define REGBIT_HAIRPIN            "reg0[6]"
> -#define REGBIT_ACL_HINT_ALLOW_NEW "reg0[7]"
> -#define REGBIT_ACL_HINT_ALLOW     "reg0[8]"
> -#define REGBIT_ACL_HINT_DROP      "reg0[9]"
> -#define REGBIT_ACL_HINT_BLOCK     "reg0[10]"
> -#define REGBIT_LKUP_FDB           "reg0[11]"
> -#define REGBIT_HAIRPIN_REPLY      "reg0[12]"
> -
> -#define REG_ORIG_DIP_IPV4         "reg1"
> -#define REG_ORIG_DIP_IPV6         "xxreg1"
> -#define REG_ORIG_TP_DPORT         "reg2[0..15]"
> -
> -/* Register definitions for switches and routers. */
> -
> -/* Indicate that this packet has been recirculated using egress
> - * loopback.  This allows certain checks to be bypassed, such as a
> - * logical router dropping packets whose source IP address equals
> - * one of the logical router's own IP addresses. */
> -#define REGBIT_EGRESS_LOOPBACK  "reg9[0]"
> -/* Register to store the result of check_pkt_larger action. */
> -#define REGBIT_PKT_LARGER        "reg9[1]"
> -#define REGBIT_LOOKUP_NEIGHBOR_RESULT "reg9[2]"
> -#define REGBIT_LOOKUP_NEIGHBOR_IP_RESULT "reg9[3]"
> -
> -/* Register to store the eth address associated to a router port for packets
> - * received in S_ROUTER_IN_ADMISSION.
> - */
> -#define REG_INPORT_ETH_ADDR "xreg0[0..47]"
> -
> -/* Register for ECMP bucket selection. */
> -#define REG_ECMP_GROUP_ID       "reg8[0..15]"
> -#define REG_ECMP_MEMBER_ID      "reg8[16..31]"
> -
> -/* Registers used for routing. */
> -#define REG_NEXT_HOP_IPV4 "reg0"
> -#define REG_NEXT_HOP_IPV6 "xxreg0"
> -#define REG_SRC_IPV4 "reg1"
> -#define REG_SRC_IPV6 "xxreg1"
> -
> -#define FLAGBIT_NOT_VXLAN "flags[1] == 0"
> -
> -/*
> - * OVS register usage:
> - *
> - * Logical Switch pipeline:
> - * +----+----------------------------------------------+---+------------------+
> - * | R0 |     REGBIT_{CONNTRACK/DHCP/DNS}              |   |                  |
> - * |    |     REGBIT_{HAIRPIN/HAIRPIN_REPLY}           | X |                  |
> - * |    | REGBIT_ACL_HINT_{ALLOW_NEW/ALLOW/DROP/BLOCK} | X |                  |
> - * +----+----------------------------------------------+ X |                  |
> - * | R1 |         ORIG_DIP_IPV4 (>= IN_STATEFUL)       | R |                  |
> - * +----+----------------------------------------------+ E |                  |
> - * | R2 |         ORIG_TP_DPORT (>= IN_STATEFUL)       | G |                  |
> - * +----+----------------------------------------------+ 0 |                  |
> - * | R3 |                   UNUSED                     |   |                  |
> - * +----+----------------------------------------------+---+------------------+
> - * | R4 |                   UNUSED                     |   |                  |
> - * +----+----------------------------------------------+ X |   ORIG_DIP_IPV6  |
> - * | R5 |                   UNUSED                     | X | (>= IN_STATEFUL) |
> - * +----+----------------------------------------------+ R |                  |
> - * | R6 |                   UNUSED                     | E |                  |
> - * +----+----------------------------------------------+ G |                  |
> - * | R7 |                   UNUSED                     | 1 |                  |
> - * +----+----------------------------------------------+---+------------------+
> - * | R8 |                   UNUSED                     |
> - * +----+----------------------------------------------+
> - * | R9 |                   UNUSED                     |
> - * +----+----------------------------------------------+
> - *
> - * Logical Router pipeline:
> - * +-----+--------------------------+---+-----------------+---+---------------+
> - * | R0  | REGBIT_ND_RA_OPTS_RESULT |   |                 |   |               |
> - * |     |   (= IN_ND_RA_OPTIONS)   | X |                 |   |               |
> - * |     |      NEXT_HOP_IPV4       | R |                 |   |               |
> - * |     |      (>= IP_INPUT)       | E | INPORT_ETH_ADDR | X |               |
> - * +-----+--------------------------+ G |   (< IP_INPUT)  | X |               |
> - * | R1  |   SRC_IPV4 for ARP-REQ   | 0 |                 | R |               |
> - * |     |      (>= IP_INPUT)       |   |                 | E | NEXT_HOP_IPV6 |
> - * +-----+--------------------------+---+-----------------+ G | (>= IP_INPUT) |
> - * | R2  |        UNUSED            | X |                 | 0 |               |
> - * |     |                          | R |                 |   |               |
> - * +-----+--------------------------+ E |     UNUSED      |   |               |
> - * | R3  |        UNUSED            | G |                 |   |               |
> - * |     |                          | 1 |                 |   |               |
> - * +-----+--------------------------+---+-----------------+---+---------------+
> - * | R4  |        UNUSED            | X |                 |   |               |
> - * |     |                          | R |                 |   |               |
> - * +-----+--------------------------+ E |     UNUSED      | X |               |
> - * | R5  |        UNUSED            | G |                 | X |               |
> - * |     |                          | 2 |                 | R |SRC_IPV6 for NS|
> - * +-----+--------------------------+---+-----------------+ E | (>= IP_INPUT) |
> - * | R6  |        UNUSED            | X |                 | G |               |
> - * |     |                          | R |                 | 1 |               |
> - * +-----+--------------------------+ E |     UNUSED      |   |               |
> - * | R7  |        UNUSED            | G |                 |   |               |
> - * |     |                          | 3 |                 |   |               |
> - * +-----+--------------------------+---+-----------------+---+---------------+
> - * | R8  |     ECMP_GROUP_ID        |   |                 |
> - * |     |     ECMP_MEMBER_ID       | X |                 |
> - * +-----+--------------------------+ R |                 |
> - * |     | REGBIT_{                 | E |                 |
> - * |     |   EGRESS_LOOPBACK/       | G |     UNUSED      |
> - * | R9  |   PKT_LARGER/            | 4 |                 |
> - * |     |   LOOKUP_NEIGHBOR_RESULT/|   |                 |
> - * |     |   SKIP_LOOKUP_NEIGHBOR}  |   |                 |
> - * +-----+--------------------------+---+-----------------+
> - *
> - */
> -
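
A quick decoder for the two diagrams above, since they lean on the OVS
register aliasing (as I read them):

    xxreg0 = reg0..reg3    xxreg1 = reg4..reg7      (128-bit aliases)
    xreg0  = reg0..reg1    xreg1  = reg2..reg3, ... (64-bit aliases)

e.g. REG_ORIG_DIP_IPV6 ("xxreg1") in the switch pipeline occupies the same
bits as reg4-reg7, which is why those registers can't be reused past
IN_STATEFUL even though the left column marks them UNUSED.
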
> -/* Returns an "enum ovn_stage" built from the arguments. */
> -static enum ovn_stage
> -ovn_stage_build(enum ovn_datapath_type dp_type, enum ovn_pipeline pipeline,
> -                uint8_t table)
> -{
> -    return OVN_STAGE_BUILD(dp_type, pipeline, table);
> -}
> -
> -/* Returns the pipeline to which 'stage' belongs. */
> -static enum ovn_pipeline
> -ovn_stage_get_pipeline(enum ovn_stage stage)
> -{
> -    return (stage >> 8) & 1;
> -}
> -
> -/* Returns the pipeline name to which 'stage' belongs. */
> -static const char *
> -ovn_stage_get_pipeline_name(enum ovn_stage stage)
> -{
> -    return ovn_stage_get_pipeline(stage) == P_IN ? "ingress" : "egress";
> -}
> -
> -/* Returns the table to which 'stage' belongs. */
> -static uint8_t
> -ovn_stage_get_table(enum ovn_stage stage)
> -{
> -    return stage & 0xff;
> -}
> -
> -/* Returns a string name for 'stage'. */
> -static const char *
> -ovn_stage_to_str(enum ovn_stage stage)
> -{
> -    switch (stage) {
> -#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)       \
> -        case S_##DP_TYPE##_##PIPELINE##_##STAGE: return NAME;
> -    PIPELINE_STAGES
> -#undef PIPELINE_STAGE
> -        default: return "<unknown>";
> -    }
> -}
>   
> -/* Returns the type of the datapath to which a flow with the given 'stage' may
> - * be added. */
> -static enum ovn_datapath_type
> -ovn_stage_to_datapath_type(enum ovn_stage stage)
> -{
> -    switch (stage) {
> -#define PIPELINE_STAGE(DP_TYPE, PIPELINE, STAGE, TABLE, NAME)       \
> -        case S_##DP_TYPE##_##PIPELINE##_##STAGE: return DP_##DP_TYPE;
> -    PIPELINE_STAGES
> -#undef PIPELINE_STAGE
> -    default: OVS_NOT_REACHED();
> -    }
> -}
>   
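
For reference, and from memory of master so please double-check me, the
encoding that the helpers above undo is just:

    /* Datapath type in bit 9, pipeline in bit 8, table in bits 0..7. */
    #define OVN_STAGE_BUILD(DP_TYPE, PIPELINE, TABLE) \
        (((DP_TYPE) << 9) | ((PIPELINE) << 8) | (TABLE))

which is why ovn_stage_get_pipeline() masks bit 8 and ovn_stage_get_table()
masks the low byte.
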

>   static void
>   usage(void)
> @@ -674,6 +396,13 @@ struct ovn_datapath {
>   
>       /* Port groups related to the datapath, used only when nbs is NOT NULL. */
>       struct hmap nb_pgs;
> +
> +    /* Applicable for logical switch datapaths. */
> +    bool vlan_passthru;
> +    bool has_dns_records;
> +
> +    /* Applicable for logical router datapaths. */
> +    bool always_learn_from_arp_request;
>   };
>   
>   /* Contains a NAT entry with the external addresses pre-parsed. */
> @@ -697,6 +426,7 @@ struct ovn_snat_ip {
>   static bool
>   get_force_snat_ip(struct ovn_datapath *od, const char *key_type,
>                     struct lport_addresses *laddrs);
> +static bool ls_has_dns_records(const struct nbrec_logical_switch *nbs);
>   
>   /* Returns true if a 'nat_entry' is valid, i.e.:
>    * - parsing was successful.
> @@ -1212,6 +942,9 @@ join_datapaths(struct northd_context *ctx, struct hmap *datapaths,
>               ovs_list_push_back(nb_only, &od->list);
>           }
>   
> +        od->vlan_passthru = smap_get_bool(&nbs->other_config,
> +                                          "vlan-passthru", false);
> +        od->has_dns_records = ls_has_dns_records(od->nbs);
>           init_ipam_info_for_datapath(od);
>           init_mcast_info_for_datapath(od);
>           init_lb_ips(od);
> @@ -1244,6 +977,11 @@ join_datapaths(struct northd_context *ctx, struct hmap *datapaths,
>                                        NULL, nbr, NULL);
>               ovs_list_push_back(nb_only, &od->list);
>           }
> +
> +        od->always_learn_from_arp_request =
> +            smap_get_bool(&nbr->options,
> +                          "always_learn_from_arp_request", true);
> +
>           init_mcast_info_for_datapath(od);
>           init_nat_entries(od);
>           init_lb_ips(od);
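
(For reviewers less familiar with these knobs: the two booleans cached on
'od' here mirror existing NB options, i.e. something along the lines of

    ovn-nbctl set Logical_Switch sw0 other_config:vlan-passthru=true
    ovn-nbctl set Logical_Router lr0 options:always_learn_from_arp_request=false

with sw0/lr0 as placeholder names.)
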
> @@ -1544,14 +1282,6 @@ lsp_is_enabled(const struct nbrec_logical_switch_port *lsp)
>       return !lsp->n_enabled || *lsp->enabled;
>   }
>   
> -/* Returns true only if the logical switch port 'up' column is set to true.
> - * Otherwise, if the column is not set or set to false, returns false. */
> -static bool
> -lsp_is_up(const struct nbrec_logical_switch_port *lsp)
> -{
> -    return lsp->n_up && *lsp->up;
> -}
> -
>   static bool
>   lsp_is_external(const struct nbrec_logical_switch_port *nbsp)
>   {
> @@ -2833,6 +2563,8 @@ op_get_name(const struct ovn_port *op)
>       return name;
>   }
>   
> +static void sync_ND_RA_options_for_lrouter_port(const struct ovn_port *op);
> +
>   static void
>   ovn_update_ipv6_prefix(struct hmap *ports)
>   {
> @@ -2842,6 +2574,8 @@ ovn_update_ipv6_prefix(struct hmap *ports)
>               continue;
>           }
>   
> +        sync_ND_RA_options_for_lrouter_port(op);
> +
>           if (!smap_get_bool(&op->nbrp->options, "prefix", false)) {
>               continue;
>           }
> @@ -2930,10 +2664,20 @@ ovn_port_update_sbrec(struct northd_context *ctx,
>           } else {
>               if (op->peer) {
>                   smap_add(&new, "peer", op->peer->key);
> +                if (op->peer->od->n_localnet_ports) {
> +                    smap_add(&new, "peer-dp-has-localnet-ports", "true");
> +                }
> +            }
> +            if (op->od->l3dgw_port == op) {
> +                smap_add(&new, "is-l3dgw-port", "true");
>               }
>               if (chassis_name) {
>                   smap_add(&new, "l3gateway-chassis", chassis_name);
>               }
> +
> +            if (op->has_bfd) {
> +                smap_add(&new, "has-bfd", "true");
> +            }
>           }
>   
>           const char *ipv6_pd_list = smap_get(&op->sb->options,
> @@ -3139,9 +2883,17 @@ ovn_port_update_sbrec(struct northd_context *ctx,
>   
>           sbrec_port_binding_set_parent_port(op->sb, op->nbsp->parent_name);
>           sbrec_port_binding_set_tag(op->sb, op->nbsp->tag, op->nbsp->n_tag);
> -        sbrec_port_binding_set_mac(op->sb, (const char **) op->nbsp->addresses,
> -                                   op->nbsp->n_addresses);
> -
> +        if (op->nbsp->dynamic_addresses) {
> +            sbrec_port_binding_set_mac(
> +                op->sb, (const char **)&op->nbsp->dynamic_addresses, 1);
> +        } else {
> +            sbrec_port_binding_set_mac(op->sb,
> +                                       (const char **) op->nbsp->addresses,
> +                                       op->nbsp->n_addresses);
> +        }
> +        sbrec_port_binding_set_port_security(
> +            op->sb, (const char **) op->nbsp->port_security,
> +            op->nbsp->n_port_security);
>           struct smap ids = SMAP_INITIALIZER(&ids);
>           smap_clone(&ids, &op->nbsp->external_ids);
>           const char *name = smap_get(&ids, "neutron:port_name");
> @@ -3400,16 +3152,22 @@ build_ovn_lbs(struct northd_context *ctx, struct hmap *datapaths,
>   
>       struct ovn_datapath *od;
>       HMAP_FOR_EACH (od, key_node, datapaths) {
> -        if (!od->nbs) {
> -            continue;
> -        }
> +        if (od->nbs) {
> +            for (size_t i = 0; i < od->nbs->n_load_balancer; i++) {
> +                const struct uuid *lb_uuid =
> +                    &od->nbs->load_balancer[i]->header_.uuid;
> +                lb = ovn_northd_lb_find(lbs, lb_uuid);
>   
> -        for (size_t i = 0; i < od->nbs->n_load_balancer; i++) {
> -            const struct uuid *lb_uuid =
> -                &od->nbs->load_balancer[i]->header_.uuid;
> -            lb = ovn_northd_lb_find(lbs, lb_uuid);
> +                ovn_northd_lb_add_datapath(lb, od->sb);
> +            }
> +        } else {
> +            for (size_t i = 0; i < od->nbr->n_load_balancer; i++) {
> +                const struct uuid *lb_uuid =
> +                    &od->nbr->load_balancer[i]->header_.uuid;
> +                lb = ovn_northd_lb_find(lbs, lb_uuid);
>   
> -            ovn_northd_lb_add_datapath(lb, od->sb);
> +                ovn_northd_lb_add_datapath(lb, od->sb);
> +            }
>           }
>       }
>   
> @@ -3442,9 +3200,22 @@ build_ovn_lbs(struct northd_context *ctx, struct hmap *datapaths,
>           /* Store the fact that northd provides the original (destination IP +
>            * transport port) tuple.
>            */
> -        struct smap options;
> -        smap_clone(&options, &lb->nlb->options);
> -        smap_replace(&options, "hairpin_orig_tuple", "true");
> +        struct smap options = SMAP_INITIALIZER(&options);
> +        smap_add(&options, "hairpin_orig_tuple", "true");
> +
> +        for (size_t i = 0; i < lb->n_vips; i++) {
> +            struct ovn_northd_lb_vip *vip_nb = &lb->vips_nb[i];
> +            if (vip_nb->lb_health_check) {
> +                struct ovn_lb_vip *vip = &lb->vips[i];
> +                char *vip_key = xasprintf("%s_hc", vip->vip_str);
> +                smap_replace(&options, vip_key, "true");
> +                free(vip_key);
> +            }
> +        }
> +
> +        if (smap_get_bool(&lb->nlb->options, "reject", false)) {
> +            smap_add(&options, "reject", "true");
> +        }
>   
>           if (!lb->slb) {
>               sbrec_lb = sbrec_load_balancer_insert(ctx->ovnsb_txn);
> @@ -3463,29 +3234,44 @@ build_ovn_lbs(struct northd_context *ctx, struct hmap *datapaths,
>           sbrec_load_balancer_set_datapaths(
>               lb->slb, (struct sbrec_datapath_binding **)lb->dps,
>               lb->n_dps);
> +        sbrec_load_balancer_set_selection_fields(
> +            lb->slb, (char const **) lb->nlb->selection_fields,
> +            lb->nlb->n_selection_fields);
>           smap_destroy(&options);
>       }
>   
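
If I'm reading the new options handling right, an NB load balancer with a
health check on VIP 10.0.0.10 (placeholder address) and options:reject=true
would end up with an SB Load_Balancer options column roughly like:

    options : {hairpin_orig_tuple="true", "10.0.0.10_hc"="true", reject="true"}

which I assume ovn-controller then uses to build the hairpin and reject
flows locally.
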
>       /* Set the list of associated load balancers to a logical switch
>        * datapath binding in the SB DB. */
>       HMAP_FOR_EACH (od, key_node, datapaths) {
> -        if (!od->nbs) {
> -            continue;
> -        }
> +        if (od->nbs) {
> +            const struct sbrec_load_balancer **sbrec_lbs =
> +                xmalloc(od->nbs->n_load_balancer * sizeof *sbrec_lbs);
> +            for (size_t i = 0; i < od->nbs->n_load_balancer; i++) {
> +                const struct uuid *lb_uuid =
> +                    &od->nbs->load_balancer[i]->header_.uuid;
> +                lb = ovn_northd_lb_find(lbs, lb_uuid);
> +                sbrec_lbs[i] = lb->slb;
> +            }
> +
> +            sbrec_datapath_binding_set_load_balancers(
> +                od->sb, (struct sbrec_load_balancer **)sbrec_lbs,
> +                od->nbs->n_load_balancer);
> +            free(sbrec_lbs);
> +        } else {
> +            const struct sbrec_load_balancer **sbrec_lbs =
> +                xmalloc(od->nbr->n_load_balancer * sizeof *sbrec_lbs);
> +            for (size_t i = 0; i < od->nbr->n_load_balancer; i++) {
> +                const struct uuid *lb_uuid =
> +                    &od->nbr->load_balancer[i]->header_.uuid;
> +                lb = ovn_northd_lb_find(lbs, lb_uuid);
> +                sbrec_lbs[i] = lb->slb;
> +            }
>   
> -        const struct sbrec_load_balancer **sbrec_lbs =
> -            xmalloc(od->nbs->n_load_balancer * sizeof *sbrec_lbs);
> -        for (size_t i = 0; i < od->nbs->n_load_balancer; i++) {
> -            const struct uuid *lb_uuid =
> -                &od->nbs->load_balancer[i]->header_.uuid;
> -            lb = ovn_northd_lb_find(lbs, lb_uuid);
> -            sbrec_lbs[i] = lb->slb;
> +            sbrec_datapath_binding_set_load_balancers(
> +                od->sb, (struct sbrec_load_balancer **)sbrec_lbs,
> +                od->nbr->n_load_balancer);
> +            free(sbrec_lbs);
>           }
> -
> -        sbrec_datapath_binding_set_load_balancers(
> -            od->sb, (struct sbrec_load_balancer **)sbrec_lbs,
> -            od->nbs->n_load_balancer);
> -        free(sbrec_lbs);
>       }
>   }
>   
> @@ -3696,27 +3482,21 @@ struct multicast_group {
>       uint16_t key;               /* OVN_MIN_MULTICAST...OVN_MAX_MULTICAST. */
>   };
>   
> -#define MC_FLOOD "_MC_flood"
>   static const struct multicast_group mc_flood =
>       { MC_FLOOD, OVN_MCAST_FLOOD_TUNNEL_KEY };
>   
> -#define MC_MROUTER_FLOOD "_MC_mrouter_flood"
>   static const struct multicast_group mc_mrouter_flood =
>       { MC_MROUTER_FLOOD, OVN_MCAST_MROUTER_FLOOD_TUNNEL_KEY };
>   
> -#define MC_MROUTER_STATIC "_MC_mrouter_static"
>   static const struct multicast_group mc_mrouter_static =
>       { MC_MROUTER_STATIC, OVN_MCAST_MROUTER_STATIC_TUNNEL_KEY };
>   
> -#define MC_STATIC "_MC_static"
>   static const struct multicast_group mc_static =
>       { MC_STATIC, OVN_MCAST_STATIC_TUNNEL_KEY };
>   
> -#define MC_UNKNOWN "_MC_unknown"
>   static const struct multicast_group mc_unknown =
>       { MC_UNKNOWN, OVN_MCAST_UNKNOWN_TUNNEL_KEY };
>   
> -#define MC_FLOOD_L2 "_MC_flood_l2"
>   static const struct multicast_group mc_flood_l2 =
>       { MC_FLOOD_L2, OVN_MCAST_FLOOD_L2_TUNNEL_KEY };
>   
> @@ -4222,326 +4002,6 @@ ovn_lflow_destroy(struct hmap *lflows, struct ovn_lflow *lflow)
>       }
>   }
>   
> -/* Appends port security constraints on L2 address field 'eth_addr_field'
> - * (e.g. "eth.src" or "eth.dst") to 'match'.  'ps_addrs', with 'n_ps_addrs'
> - * elements, is the collection of port_security constraints from an
> - * OVN_NB Logical_Switch_Port row generated by extract_lsp_addresses(). */
> -static void
> -build_port_security_l2(const char *eth_addr_field,
> -                       struct lport_addresses *ps_addrs,
> -                       unsigned int n_ps_addrs,
> -                       struct ds *match)
> -{
> -    if (!n_ps_addrs) {
> -        return;
> -    }
> -
> -    ds_put_format(match, " && %s == {", eth_addr_field);
> -
> -    for (size_t i = 0; i < n_ps_addrs; i++) {
> -        ds_put_format(match, "%s ", ps_addrs[i].ea_s);
> -    }
> -    ds_chomp(match, ' ');
> -    ds_put_cstr(match, "}");
> -}
> -
> -static void
> -build_port_security_ipv6_nd_flow(
> -    struct ds *match, struct eth_addr ea, struct ipv6_netaddr *ipv6_addrs,
> -    int n_ipv6_addrs)
> -{
> -    ds_put_format(match, " && ip6 && nd && ((nd.sll == "ETH_ADDR_FMT" || "
> -                  "nd.sll == "ETH_ADDR_FMT") || ((nd.tll == "ETH_ADDR_FMT" || "
> -                  "nd.tll == "ETH_ADDR_FMT")", ETH_ADDR_ARGS(eth_addr_zero),
> -                  ETH_ADDR_ARGS(ea), ETH_ADDR_ARGS(eth_addr_zero),
> -                  ETH_ADDR_ARGS(ea));
> -    if (!n_ipv6_addrs) {
> -        ds_put_cstr(match, "))");
> -        return;
> -    }
> -
> -    char ip6_str[INET6_ADDRSTRLEN + 1];
> -    struct in6_addr lla;
> -    in6_generate_lla(ea, &lla);
> -    memset(ip6_str, 0, sizeof(ip6_str));
> -    ipv6_string_mapped(ip6_str, &lla);
> -    ds_put_format(match, " && (nd.target == %s", ip6_str);
> -
> -    for (size_t i = 0; i < n_ipv6_addrs; i++) {
> -        /* When the netmask is applied, if the host portion is
> -         * non-zero, the host can only use the specified
> -         * address in the nd.target.  If zero, the host is allowed
> -         * to use any address in the subnet.
> -         */
> -        if (ipv6_addrs[i].plen == 128
> -            || !ipv6_addr_is_host_zero(&ipv6_addrs[i].addr,
> -                                       &ipv6_addrs[i].mask)) {
> -            ds_put_format(match, " || nd.target == %s", ipv6_addrs[i].addr_s);
> -        } else {
> -            ds_put_format(match, " || nd.target == %s/%d",
> -                          ipv6_addrs[i].network_s, ipv6_addrs[i].plen);
> -        }
> -    }
> -
> -    ds_put_format(match, ")))");
> -}
> -
> -static void
> -build_port_security_ipv6_flow(
> -    enum ovn_pipeline pipeline, struct ds *match, struct eth_addr ea,
> -    struct ipv6_netaddr *ipv6_addrs, int n_ipv6_addrs)
> -{
> -    char ip6_str[INET6_ADDRSTRLEN + 1];
> -
> -    ds_put_format(match, " && %s == {",
> -                  pipeline == P_IN ? "ip6.src" : "ip6.dst");
> -
> -    /* Allow link-local address. */
> -    struct in6_addr lla;
> -    in6_generate_lla(ea, &lla);
> -    ipv6_string_mapped(ip6_str, &lla);
> -    ds_put_format(match, "%s, ", ip6_str);
> -
> -    /* Allow ip6.dst=ff00::/8 for multicast packets */
> -    if (pipeline == P_OUT) {
> -        ds_put_cstr(match, "ff00::/8, ");
> -    }
> -    for (size_t i = 0; i < n_ipv6_addrs; i++) {
> -        /* When the netmask is applied, if the host portion is
> -         * non-zero, the host can only use the specified
> -         * address.  If zero, the host is allowed to use any
> -         * address in the subnet.
> -         */
> -        if (ipv6_addrs[i].plen == 128
> -            || !ipv6_addr_is_host_zero(&ipv6_addrs[i].addr,
> -                                       &ipv6_addrs[i].mask)) {
> -            ds_put_format(match, "%s, ", ipv6_addrs[i].addr_s);
> -        } else {
> -            ds_put_format(match, "%s/%d, ", ipv6_addrs[i].network_s,
> -                          ipv6_addrs[i].plen);
> -        }
> -    }
> -    /* Replace ", " by "}". */
> -    ds_chomp(match, ' ');
> -    ds_chomp(match, ',');
> -    ds_put_cstr(match, "}");
> -}
> -
> -/**
> - * Build port security constraints on ARP and IPv6 ND fields
> - * and add logical flows to S_SWITCH_IN_PORT_SEC_ND stage.
> - *
> - * For each port security of the logical port, following
> - * logical flows are added
> - *   - If the port security has no IP (both IPv4 and IPv6) or
> - *     if it has IPv4 address(es)
> - *      - Priority 90 flow to allow ARP packets for known MAC addresses
> - *        in the eth.src and arp.spa fields. If the port security
> - *        has IPv4 addresses, allow known IPv4 addresses in the arp.tpa field.
> - *
> - *   - If the port security has no IP (both IPv4 and IPv6) or
> - *     if it has IPv6 address(es)
> - *     - Priority 90 flow to allow IPv6 ND packets for known MAC addresses
> - *       in the eth.src and nd.sll/nd.tll fields. If the port security
> - *       has IPv6 addresses, allow known IPv6 addresses in the nd.target field
> - *       for IPv6 Neighbor Advertisement packet.
> - *
> - *   - Priority 80 flow to drop ARP and IPv6 ND packets.
> - */
> -static void
> -build_port_security_nd(struct ovn_port *op, struct hmap *lflows,
> -                       const struct ovsdb_idl_row *stage_hint)
> -{
> -    struct ds match = DS_EMPTY_INITIALIZER;
> -
> -    for (size_t i = 0; i < op->n_ps_addrs; i++) {
> -        struct lport_addresses *ps = &op->ps_addrs[i];
> -
> -        bool no_ip = !(ps->n_ipv4_addrs || ps->n_ipv6_addrs);
> -
> -        ds_clear(&match);
> -        if (ps->n_ipv4_addrs || no_ip) {
> -            ds_put_format(&match,
> -                          "inport == %s && eth.src == %s && arp.sha == %s",
> -                          op->json_key, ps->ea_s, ps->ea_s);
> -
> -            if (ps->n_ipv4_addrs) {
> -                ds_put_cstr(&match, " && arp.spa == {");
> -                for (size_t j = 0; j < ps->n_ipv4_addrs; j++) {
> -                    /* When the netmask is applied, if the host portion is
> -                     * non-zero, the host can only use the specified
> -                     * address in the arp.spa.  If zero, the host is allowed
> -                     * to use any address in the subnet. */
> -                    if (ps->ipv4_addrs[j].plen == 32
> -                        || ps->ipv4_addrs[j].addr & ~ps->ipv4_addrs[j].mask) {
> -                        ds_put_cstr(&match, ps->ipv4_addrs[j].addr_s);
> -                    } else {
> -                        ds_put_format(&match, "%s/%d",
> -                                      ps->ipv4_addrs[j].network_s,
> -                                      ps->ipv4_addrs[j].plen);
> -                    }
> -                    ds_put_cstr(&match, ", ");
> -                }
> -                ds_chomp(&match, ' ');
> -                ds_chomp(&match, ',');
> -                ds_put_cstr(&match, "}");
> -            }
> -            ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_PORT_SEC_ND,
> -                                    90, ds_cstr(&match), "next;", stage_hint);
> -        }
> -
> -        if (ps->n_ipv6_addrs || no_ip) {
> -            ds_clear(&match);
> -            ds_put_format(&match, "inport == %s && eth.src == %s",
> -                          op->json_key, ps->ea_s);
> -            build_port_security_ipv6_nd_flow(&match, ps->ea, ps->ipv6_addrs,
> -                                             ps->n_ipv6_addrs);
> -            ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_PORT_SEC_ND,
> -                                    90, ds_cstr(&match), "next;", stage_hint);
> -        }
> -    }
> -
> -    ds_clear(&match);
> -    ds_put_format(&match, "inport == %s && (arp || nd)", op->json_key);
> -    ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_PORT_SEC_ND, 80,
> -                            ds_cstr(&match), "drop;", stage_hint);
> -    ds_destroy(&match);
> -}
> -
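
To make concrete what moves to ovn-controller here: for a hypothetical port
"lsp1" with port_security "00:00:00:00:00:01 10.0.0.5", the function above
emits roughly

    table=2 (ls_in_port_sec_nd), priority=90,
      match=(inport == "lsp1" && eth.src == 00:00:00:00:00:01 &&
             arp.sha == 00:00:00:00:00:01 && arp.spa == {10.0.0.5}),
      action=(next;)
    table=2 (ls_in_port_sec_nd), priority=80,
      match=(inport == "lsp1" && (arp || nd)), action=(drop;)

plus the IPv6 ND variant when the port has IPv6 addresses, so per-port
addressing data is all a chassis needs to regenerate these.
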
> -/**
> - * Build port security constraints on IPv4 and IPv6 src and dst fields
> - * and add logical flows to S_SWITCH_(IN/OUT)_PORT_SEC_IP stage.
> - *
> - * For each port security of the logical port, following
> - * logical flows are added
> - *   - If the port security has IPv4 addresses,
> - *     - Priority 90 flow to allow IPv4 packets for known IPv4 addresses
> - *
> - *   - If the port security has IPv6 addresses,
> - *     - Priority 90 flow to allow IPv6 packets for known IPv6 addresses
> - *
> - *   - If the port security has IPv4 addresses or IPv6 addresses or both
> - *     - Priority 80 flow to drop all IPv4 and IPv6 traffic
> - */
> -static void
> -build_port_security_ip(enum ovn_pipeline pipeline, struct ovn_port *op,
> -                       struct hmap *lflows,
> -                       const struct ovsdb_idl_row *stage_hint)
> -{
> -    char *port_direction;
> -    enum ovn_stage stage;
> -    if (pipeline == P_IN) {
> -        port_direction = "inport";
> -        stage = S_SWITCH_IN_PORT_SEC_IP;
> -    } else {
> -        port_direction = "outport";
> -        stage = S_SWITCH_OUT_PORT_SEC_IP;
> -    }
> -
> -    for (size_t i = 0; i < op->n_ps_addrs; i++) {
> -        struct lport_addresses *ps = &op->ps_addrs[i];
> -
> -        if (!(ps->n_ipv4_addrs || ps->n_ipv6_addrs)) {
> -            continue;
> -        }
> -
> -        if (ps->n_ipv4_addrs) {
> -            struct ds match = DS_EMPTY_INITIALIZER;
> -            if (pipeline == P_IN) {
> -                /* Permit use of the unspecified address for DHCP discovery */
> -                struct ds dhcp_match = DS_EMPTY_INITIALIZER;
> -                ds_put_format(&dhcp_match, "inport == %s"
> -                              " && eth.src == %s"
> -                              " && ip4.src == 0.0.0.0"
> -                              " && ip4.dst == 255.255.255.255"
> -                              " && udp.src == 68 && udp.dst == 67",
> -                              op->json_key, ps->ea_s);
> -                ovn_lflow_add_with_hint(lflows, op->od, stage, 90,
> -                                        ds_cstr(&dhcp_match), "next;",
> -                                        stage_hint);
> -                ds_destroy(&dhcp_match);
> -                ds_put_format(&match, "inport == %s && eth.src == %s"
> -                              " && ip4.src == {", op->json_key,
> -                              ps->ea_s);
> -            } else {
> -                ds_put_format(&match, "outport == %s && eth.dst == %s"
> -                              " && ip4.dst == {255.255.255.255, 224.0.0.0/4, ",
> -                              op->json_key, ps->ea_s);
> -            }
> -
> -            for (int j = 0; j < ps->n_ipv4_addrs; j++) {
> -                ovs_be32 mask = ps->ipv4_addrs[j].mask;
> -                /* When the netmask is applied, if the host portion is
> -                 * non-zero, the host can only use the specified
> -                 * address.  If zero, the host is allowed to use any
> -                 * address in the subnet.
> -                 */
> -                if (ps->ipv4_addrs[j].plen == 32
> -                    || ps->ipv4_addrs[j].addr & ~mask) {
> -                    ds_put_format(&match, "%s", ps->ipv4_addrs[j].addr_s);
> -                    if (pipeline == P_OUT && ps->ipv4_addrs[j].plen != 32) {
> -                        /* Host is also allowed to receive packets to the
> -                         * broadcast address in the specified subnet. */
> -                        ds_put_format(&match, ", %s",
> -                                      ps->ipv4_addrs[j].bcast_s);
> -                    }
> -                } else {
> -                    /* host portion is zero */
> -                    ds_put_format(&match, "%s/%d", ps->ipv4_addrs[j].network_s,
> -                                  ps->ipv4_addrs[j].plen);
> -                }
> -                ds_put_cstr(&match, ", ");
> -            }
> -
> -            /* Replace ", " by "}". */
> -            ds_chomp(&match, ' ');
> -            ds_chomp(&match, ',');
> -            ds_put_cstr(&match, "}");
> -            ovn_lflow_add_with_hint(lflows, op->od, stage, 90,
> -                                    ds_cstr(&match), "next;",
> -                                    stage_hint);
> -            ds_destroy(&match);
> -        }
> -
> -        if (ps->n_ipv6_addrs) {
> -            struct ds match = DS_EMPTY_INITIALIZER;
> -            if (pipeline == P_IN) {
> -                /* Permit use of unspecified address for duplicate address
> -                 * detection */
> -                struct ds dad_match = DS_EMPTY_INITIALIZER;
> -                ds_put_format(&dad_match, "inport == %s"
> -                              " && eth.src == %s"
> -                              " && ip6.src == ::"
> -                              " && ip6.dst == ff02::/16"
> -                              " && icmp6.type == {131, 135, 143}", op->json_key,
> -                              ps->ea_s);
> -                ovn_lflow_add_with_hint(lflows, op->od, stage, 90,
> -                                        ds_cstr(&dad_match), "next;",
> -                                        stage_hint);
> -                ds_destroy(&dad_match);
> -            }
> -            ds_put_format(&match, "%s == %s && %s == %s",
> -                          port_direction, op->json_key,
> -                          pipeline == P_IN ? "eth.src" : "eth.dst", ps->ea_s);
> -            build_port_security_ipv6_flow(pipeline, &match, ps->ea,
> -                                          ps->ipv6_addrs, ps->n_ipv6_addrs);
> -            ovn_lflow_add_with_hint(lflows, op->od, stage, 90,
> -                                    ds_cstr(&match), "next;",
> -                                    stage_hint);
> -            ds_destroy(&match);
> -        }
> -
> -        char *match = xasprintf("%s == %s && %s == %s && ip",
> -                                port_direction, op->json_key,
> -                                pipeline == P_IN ? "eth.src" : "eth.dst",
> -                                ps->ea_s);
> -        ovn_lflow_add_with_hint(lflows, op->od, stage, 80, match, "drop;",
> -                                stage_hint);
> -        free(match);
> -    }
> -
> -}
> -
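
Likewise for the IP stage: with the same hypothetical "lsp1", the ingress
side boils down to a priority-90 allow and a priority-80 drop, plus the
DHCP/DAD exceptions at priority 90:

    table=1 (ls_in_port_sec_ip), priority=90,
      match=(inport == "lsp1" && eth.src == 00:00:00:00:00:01 &&
             ip4.src == {10.0.0.5}),
      action=(next;)
    table=1 (ls_in_port_sec_ip), priority=80,
      match=(inport == "lsp1" && eth.src == 00:00:00:00:00:01 && ip),
      action=(drop;)
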
>   static bool
>   build_dhcpv4_action(struct ovn_port *op, ovs_be32 offer_ip,
>                       struct ds *options_action, struct ds *response_action,
> @@ -4821,166 +4281,6 @@ ls_get_acl_flags(struct ovn_datapath *od)
>       }
>   }
>   
> -/* Logical switch ingress table 0: Ingress port security - L2
> - *  (priority 50).
> - *  Ingress table 1: Ingress port security - IP (priority 90 and 80)
> - *  Ingress table 2: Ingress port security - ND (priority 90 and 80)
> - */
> -static void
> -build_lswitch_input_port_sec_op(
> -        struct ovn_port *op, struct hmap *lflows,
> -        struct ds *actions, struct ds *match)
> -{
> -
> -    if (!op->nbsp) {
> -        return;
> -    }
> -
> -    if (!lsp_is_enabled(op->nbsp)) {
> -        /* Drop packets from disabled logical ports (since logical flow
> -         * tables are default-drop). */
> -        return;
> -    }
> -
> -    if (lsp_is_external(op->nbsp)) {
> -        return;
> -    }
> -
> -    ds_clear(match);
> -    ds_clear(actions);
> -    ds_put_format(match, "inport == %s", op->json_key);
> -    build_port_security_l2("eth.src", op->ps_addrs, op->n_ps_addrs,
> -                           match);
> -
> -    const char *queue_id = smap_get(&op->sb->options, "qdisc_queue_id");
> -    if (queue_id) {
> -        ds_put_format(actions, "set_queue(%s); ", queue_id);
> -    }
> -    ds_put_cstr(actions, "next;");
> -    ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_PORT_SEC_L2, 50,
> -                            ds_cstr(match), ds_cstr(actions),
> -                            &op->nbsp->header_);
> -
> -    if (op->nbsp->n_port_security) {
> -        build_port_security_ip(P_IN, op, lflows, &op->nbsp->header_);
> -        build_port_security_nd(op, lflows, &op->nbsp->header_);
> -    }
> -}
> -
> -/* Ingress table 1 and 2: Port security - IP and ND, by default
> - * goto next. (priority 0)
> - */
> -static void
> -build_lswitch_input_port_sec_od(
> -        struct ovn_datapath *od, struct hmap *lflows)
> -{
> -
> -    if (od->nbs) {
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_PORT_SEC_ND, 0, "1", "next;");
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_PORT_SEC_IP, 0, "1", "next;");
> -    }
> -}
> -
> -static void
> -build_lswitch_learn_fdb_op(
> -        struct ovn_port *op, struct hmap *lflows,
> -        struct ds *actions, struct ds *match)
> -{
> -    if (op->nbsp && !op->n_ps_addrs && !strcmp(op->nbsp->type, "") &&
> -        op->has_unknown) {
> -        ds_clear(match);
> -        ds_clear(actions);
> -        ds_put_format(match, "inport == %s", op->json_key);
> -        ds_put_format(actions, REGBIT_LKUP_FDB
> -                      " = lookup_fdb(inport, eth.src); next;");
> -        ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_LOOKUP_FDB, 100,
> -                                ds_cstr(match), ds_cstr(actions),
> -                                &op->nbsp->header_);
> -
> -        ds_put_cstr(match, " && "REGBIT_LKUP_FDB" == 0");
> -        ds_clear(actions);
> -        ds_put_cstr(actions, "put_fdb(inport, eth.src); next;");
> -        ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_PUT_FDB, 100,
> -                                ds_cstr(match), ds_cstr(actions),
> -                                &op->nbsp->header_);
> -    }
> -}
> -
> -static void
> -build_lswitch_learn_fdb_od(
> -        struct ovn_datapath *od, struct hmap *lflows)
> -{
> -
> -    if (od->nbs) {
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_LOOKUP_FDB, 0, "1", "next;");
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_PUT_FDB, 0, "1", "next;");
> -    }
> -}
> -
> -/* Egress table 8: Egress port security - IP (priorities 90 and 80)
> - * if port security enabled.
> - *
> - * Egress table 9: Egress port security - L2 (priorities 50 and 150).
> - *
> - * Priority 50 rules implement port security for enabled logical port.
> - *
> - * Priority 150 rules drop packets to disabled logical ports, so that
> - * they don't even receive multicast or broadcast packets.
> - */
> -static void
> -build_lswitch_output_port_sec_op(struct ovn_port *op,
> -                                 struct hmap *lflows,
> -                                 struct ds *match,
> -                                 struct ds *actions)
> -{
> -
> -    if (op->nbsp && (!lsp_is_external(op->nbsp))) {
> -
> -        ds_clear(actions);
> -        ds_clear(match);
> -
> -        ds_put_format(match, "outport == %s", op->json_key);
> -        if (lsp_is_enabled(op->nbsp)) {
> -            build_port_security_l2("eth.dst", op->ps_addrs, op->n_ps_addrs,
> -                                   match);
> -
> -            if (!strcmp(op->nbsp->type, "localnet")) {
> -                const char *queue_id = smap_get(&op->sb->options,
> -                                                "qdisc_queue_id");
> -                if (queue_id) {
> -                    ds_put_format(actions, "set_queue(%s); ", queue_id);
> -                }
> -            }
> -            ds_put_cstr(actions, "output;");
> -            ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_OUT_PORT_SEC_L2,
> -                                    50, ds_cstr(match), ds_cstr(actions),
> -                                    &op->nbsp->header_);
> -        } else {
> -            ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_OUT_PORT_SEC_L2,
> -                                    150, ds_cstr(match), "drop;",
> -                                    &op->nbsp->header_);
> -        }
> -
> -        if (op->nbsp->n_port_security) {
> -            build_port_security_ip(P_OUT, op, lflows, &op->nbsp->header_);
> -        }
> -    }
> -}
> -
> -/* Egress tables 8: Egress port security - IP (priority 0)
> - * Egress table 9: Egress port security L2 - multicast/broadcast
> - *                 (priority 100). */
> -static void
> -build_lswitch_output_port_sec_od(struct ovn_datapath *od,
> -                              struct hmap *lflows)
> -{
> -    if (od->nbs) {
> -        ovn_lflow_add(lflows, od, S_SWITCH_OUT_PORT_SEC_IP, 0, "1", "next;");
> -        ovn_lflow_add(lflows, od, S_SWITCH_OUT_PORT_SEC_L2, 100, "eth.mcast",
> -                      "output;");
> -    }
> -}
> -
>   static void
>   skip_port_from_conntrack(struct ovn_datapath *od, struct ovn_port *op,
>                            enum ovn_stage in_stage, enum ovn_stage out_stage,
> @@ -5061,9 +4361,6 @@ build_pre_acls(struct ovn_datapath *od, struct hmap *port_groups,
>   {
>       /* Ingress and Egress Pre-ACL Table (Priority 0): Packets are
>        * allowed by default. */
> -    ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_ACL, 0, "1", "next;");
> -    ovn_lflow_add(lflows, od, S_SWITCH_OUT_PRE_ACL, 0, "1", "next;");
> -
>       ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_ACL, 110,
>                     "eth.dst == $svc_monitor_mac", "next;");
>   
> @@ -5074,11 +4371,6 @@ build_pre_acls(struct ovn_datapath *od, struct hmap *port_groups,
>        * send IP packets for some (allow) filters through the conntrack action,
>        * which handles defragmentation, in order to match L4 headers. */
>       if (od->has_stateful_acl) {
> -        for (size_t i = 0; i < od->n_router_ports; i++) {
> -            skip_port_from_conntrack(od, od->router_ports[i],
> -                                     S_SWITCH_IN_PRE_ACL, S_SWITCH_OUT_PRE_ACL,
> -                                     110, lflows);
> -        }
>           for (size_t i = 0; i < od->n_localnet_ports; i++) {
>               skip_port_from_conntrack(od, od->localnet_ports[i],
>                                        S_SWITCH_IN_PRE_ACL, S_SWITCH_OUT_PRE_ACL,
> @@ -5087,30 +4379,6 @@ build_pre_acls(struct ovn_datapath *od, struct hmap *port_groups,
>   
>           /* stateless filters always take precedence over stateful ACLs. */
>           build_stateless_filters(od, port_groups, lflows);
> -
> -        /* Ingress and Egress Pre-ACL Table (Priority 110).
> -         *
> -         * Not to do conntrack on ND and ICMP destination
> -         * unreachable packets. */
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_ACL, 110,
> -                      "nd || nd_rs || nd_ra || mldv1 || mldv2 || "
> -                      "(udp && udp.src == 546 && udp.dst == 547)", "next;");
> -        ovn_lflow_add(lflows, od, S_SWITCH_OUT_PRE_ACL, 110,
> -                      "nd || nd_rs || nd_ra || mldv1 || mldv2 || "
> -                      "(udp && udp.src == 546 && udp.dst == 547)", "next;");
> -
> -        /* Ingress and Egress Pre-ACL Table (Priority 100).
> -         *
> -         * Regardless of whether the ACL is "from-lport" or "to-lport",
> -         * we need rules in both the ingress and egress table, because
> -         * the return traffic needs to be followed.
> -         *
> -         * 'REGBIT_CONNTRACK_DEFRAG' is set to let the pre-stateful table send
> -         * it to conntrack for tracking and defragmentation. */
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_ACL, 100, "ip",
> -                      REGBIT_CONNTRACK_DEFRAG" = 1; next;");
> -        ovn_lflow_add(lflows, od, S_SWITCH_OUT_PRE_ACL, 100, "ip",
> -                      REGBIT_CONNTRACK_DEFRAG" = 1; next;");
>       }
>   }
>   
> @@ -5195,36 +4463,12 @@ static void
>   build_pre_lb(struct ovn_datapath *od, struct hmap *lflows,
>                struct shash *meter_groups, struct hmap *lbs)
>   {
> -    /* Do not send ND packets to conntrack */
> -    ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_LB, 110,
> -                  "nd || nd_rs || nd_ra || mldv1 || mldv2",
> -                  "next;");
> -    ovn_lflow_add(lflows, od, S_SWITCH_OUT_PRE_LB, 110,
> -                  "nd || nd_rs || nd_ra || mldv1 || mldv2",
> -                  "next;");
> -
> -    /* Do not send service monitor packets to conntrack. */
> -    ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_LB, 110,
> -                  "eth.dst == $svc_monitor_mac", "next;");
> -    ovn_lflow_add(lflows, od, S_SWITCH_OUT_PRE_LB, 110,
> -                  "eth.src == $svc_monitor_mac", "next;");
> -
> -    /* Allow all packets to go to next tables by default. */
> -    ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_LB, 0, "1", "next;");
> -    ovn_lflow_add(lflows, od, S_SWITCH_OUT_PRE_LB, 0, "1", "next;");
> -
> -    for (size_t i = 0; i < od->n_router_ports; i++) {
> -        skip_port_from_conntrack(od, od->router_ports[i],
> -                                 S_SWITCH_IN_PRE_LB, S_SWITCH_OUT_PRE_LB,
> -                                 110, lflows);
> -    }
>       for (size_t i = 0; i < od->n_localnet_ports; i++) {
>           skip_port_from_conntrack(od, od->localnet_ports[i],
>                                    S_SWITCH_IN_PRE_LB, S_SWITCH_OUT_PRE_LB,
>                                    110, lflows);
>       }
>   
> -    bool vip_configured = false;
>       for (int i = 0; i < od->nbs->n_load_balancer; i++) {
>           struct nbrec_load_balancer *nb_lb = od->nbs->load_balancer[i];
>           struct ovn_northd_lb *lb =
> @@ -5241,194 +4485,6 @@ build_pre_lb(struct ovn_datapath *od, struct hmap *lflows,
>                * the packet through ct() action to de-fragment. In stateful
>                * table, we will eventually look at L4 information. */
>           }
> -
> -        vip_configured = (vip_configured || lb->n_vips);
> -    }
> -
> -    /* 'REGBIT_CONNTRACK_NAT' is set to let the pre-stateful table send
> -     * packet to conntrack for defragmentation and possibly for unNATting.
> -     *
> -     * Send all the packets to conntrack in the ingress pipeline if the
> -     * logical switch has a load balancer with VIP configured. Earlier
> -     * we used to set the REGBIT_CONNTRACK_DEFRAG flag in the ingress pipeline
> -     * if the IP destination matches the VIP. But this causes few issues when
> -     * a logical switch has no ACLs configured with allow-related.
> -     * To understand the issue, let's take a TCP load balancer -
> -     * 10.0.0.10:80=10.0.0.3:80.
> -     * If a logical port - p1 with IP - 10.0.0.5 opens a TCP connection with
> -     * the VIP - 10.0.0.10, then the packet in the ingress pipeline of 'p1'
> -     * is sent to the p1's conntrack zone id and the packet is load balanced
> -     * to the backend - 10.0.0.3. For the reply packet from the backend lport,
> -     * it is not sent to the conntrack of backend lport's zone id. This is fine
> -     * as long as the packet is valid. Suppose the backend lport sends an
> -     *  invalid TCP packet (like incorrect sequence number), the packet gets
> -     * delivered to the lport 'p1' without unDNATing the packet to the
> -     * VIP - 10.0.0.10. And this causes the connection to be reset by the
> -     * lport p1's VIF.
> -     *
> -     * We can't fix this issue by adding a logical flow to drop ct.inv packets
> -     * in the egress pipeline since it will drop all other connections not
> -     * destined to the load balancers.
> -     *
> -     * To fix this issue, we send all the packets to the conntrack in the
> -     * ingress pipeline if a load balancer is configured. We can now
> -     * add a lflow to drop ct.inv packets.
> -     */
> -    if (vip_configured) {
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_LB,
> -                      100, "ip", REGBIT_CONNTRACK_NAT" = 1; next;");
> -        ovn_lflow_add(lflows, od, S_SWITCH_OUT_PRE_LB,
> -                      100, "ip", REGBIT_CONNTRACK_NAT" = 1; next;");
> -    }
> -}
> -
> -static void
> -build_pre_stateful(struct ovn_datapath *od, struct hmap *lflows)
> -{
> -    /* Ingress and Egress pre-stateful Table (Priority 0): Packets are
> -     * allowed by default. */
> -    ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_STATEFUL, 0, "1", "next;");
> -    ovn_lflow_add(lflows, od, S_SWITCH_OUT_PRE_STATEFUL, 0, "1", "next;");
> -
> -    const char *lb_protocols[] = {"tcp", "udp", "sctp"};
> -    struct ds actions = DS_EMPTY_INITIALIZER;
> -    struct ds match = DS_EMPTY_INITIALIZER;
> -
> -    for (size_t i = 0; i < ARRAY_SIZE(lb_protocols); i++) {
> -        ds_clear(&match);
> -        ds_clear(&actions);
> -        ds_put_format(&match, REGBIT_CONNTRACK_NAT" == 1 && ip4 && %s",
> -                      lb_protocols[i]);
> -        ds_put_format(&actions, REG_ORIG_DIP_IPV4 " = ip4.dst; "
> -                                REG_ORIG_TP_DPORT " = %s.dst; ct_lb;",
> -                      lb_protocols[i]);
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_STATEFUL, 120,
> -                      ds_cstr(&match), ds_cstr(&actions));
> -
> -        ds_clear(&match);
> -        ds_clear(&actions);
> -        ds_put_format(&match, REGBIT_CONNTRACK_NAT" == 1 && ip6 && %s",
> -                      lb_protocols[i]);
> -        ds_put_format(&actions, REG_ORIG_DIP_IPV6 " = ip6.dst; "
> -                                REG_ORIG_TP_DPORT " = %s.dst; ct_lb;",
> -                      lb_protocols[i]);
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_STATEFUL, 120,
> -                      ds_cstr(&match), ds_cstr(&actions));
> -    }
> -
> -    ds_destroy(&actions);
> -    ds_destroy(&match);
> -
> -    ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_STATEFUL, 110,
> -                  REGBIT_CONNTRACK_NAT" == 1", "ct_lb;");
> -
> -    ovn_lflow_add(lflows, od, S_SWITCH_OUT_PRE_STATEFUL, 110,
> -                  REGBIT_CONNTRACK_NAT" == 1", "ct_lb;");
> -
> -    /* If REGBIT_CONNTRACK_DEFRAG is set as 1, then the packets should be
> -     * sent to conntrack for tracking and defragmentation. */
> -    ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_STATEFUL, 100,
> -                  REGBIT_CONNTRACK_DEFRAG" == 1", "ct_next;");
> -
> -    ovn_lflow_add(lflows, od, S_SWITCH_OUT_PRE_STATEFUL, 100,
> -                  REGBIT_CONNTRACK_DEFRAG" == 1", "ct_next;");
> -}
> -
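
To tie this back to the register #defines at the top of the file, the
TCP/IPv4 case above comes out as

    table=7 (ls_in_pre_stateful), priority=120,
      match=(reg0[2] == 1 && ip4 && tcp),
      action=(reg1 = ip4.dst; reg2[0..15] = tcp.dst; ct_lb;)

i.e. REGBIT_CONNTRACK_NAT, REG_ORIG_DIP_IPV4 and REG_ORIG_TP_DPORT spelled
out as raw registers.
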
> -static void
> -build_acl_hints(struct ovn_datapath *od, struct hmap *lflows)
> -{
> -    /* This stage builds hints for the IN/OUT_ACL stage. Based on various
> -     * combinations of ct flags packets may hit only a subset of the logical
> -     * flows in the IN/OUT_ACL stage.
> -     *
> -     * Populating ACL hints first and storing them in registers simplifies
> -     * the logical flow match expressions in the IN/OUT_ACL stage and
> -     * generates less openflows.
> -     *
> -     * Certain combinations of ct flags might be valid matches for multiple
> -     * types of ACL logical flows (e.g., allow/drop). In such cases hints
> -     * corresponding to all potential matches are set.
> -     */
> -
> -    enum ovn_stage stages[] = {
> -        S_SWITCH_IN_ACL_HINT,
> -        S_SWITCH_OUT_ACL_HINT,
> -    };
> -
> -    for (size_t i = 0; i < ARRAY_SIZE(stages); i++) {
> -        enum ovn_stage stage = stages[i];
> -
> -        /* In any case, advance to the next stage. */
> -        if (!od->has_acls && !od->has_lb_vip) {
> -            ovn_lflow_add(lflows, od, stage, UINT16_MAX, "1", "next;");
> -        } else {
> -            ovn_lflow_add(lflows, od, stage, 0, "1", "next;");
> -        }
> -
> -        if (!od->has_stateful_acl && !od->has_lb_vip) {
> -            continue;
> -        }
> -
> -        /* New, not already established connections, may hit either allow
> -         * or drop ACLs. For allow ACLs, the connection must also be committed
> -         * to conntrack so we set REGBIT_ACL_HINT_ALLOW_NEW.
> -         */
> -        ovn_lflow_add(lflows, od, stage, 7, "ct.new && !ct.est",
> -                      REGBIT_ACL_HINT_ALLOW_NEW " = 1; "
> -                      REGBIT_ACL_HINT_DROP " = 1; "
> -                      "next;");
> -
> -        /* Already established connections in the "request" direction that
> -         * are already marked as "blocked" may hit either:
> -         * - allow ACLs for connections that were previously allowed by a
> -         *   policy that was deleted and is being readded now. In this case
> -         *   the connection should be recommitted so we set
> -         *   REGBIT_ACL_HINT_ALLOW_NEW.
> -         * - drop ACLs.
> -         */
> -        ovn_lflow_add(lflows, od, stage, 6,
> -                      "!ct.new && ct.est && !ct.rpl && ct_label.blocked == 1",
> -                      REGBIT_ACL_HINT_ALLOW_NEW " = 1; "
> -                      REGBIT_ACL_HINT_DROP " = 1; "
> -                      "next;");
> -
> -        /* Not tracked traffic can either be allowed or dropped. */
> -        ovn_lflow_add(lflows, od, stage, 5, "!ct.trk",
> -                      REGBIT_ACL_HINT_ALLOW " = 1; "
> -                      REGBIT_ACL_HINT_DROP " = 1; "
> -                      "next;");
> -
> -        /* Already established connections in the "request" direction may hit
> -         * either:
> -         * - allow ACLs in which case the traffic should be allowed so we set
> -         *   REGBIT_ACL_HINT_ALLOW.
> -         * - drop ACLs in which case the traffic should be blocked and the
> -         *   connection must be committed with ct_label.blocked set so we set
> -         *   REGBIT_ACL_HINT_BLOCK.
> -         */
> -        ovn_lflow_add(lflows, od, stage, 4,
> -                      "!ct.new && ct.est && !ct.rpl && ct_label.blocked == 0",
> -                      REGBIT_ACL_HINT_ALLOW " = 1; "
> -                      REGBIT_ACL_HINT_BLOCK " = 1; "
> -                      "next;");
> -
> -        /* Not established or established and already blocked connections may
> -         * hit drop ACLs.
> -         */
> -        ovn_lflow_add(lflows, od, stage, 3, "!ct.est",
> -                      REGBIT_ACL_HINT_DROP " = 1; "
> -                      "next;");
> -        ovn_lflow_add(lflows, od, stage, 2, "ct.est && ct_label.blocked == 1",
> -                      REGBIT_ACL_HINT_DROP " = 1; "
> -                      "next;");
> -
> -        /* Established connections that were previously allowed might hit
> -         * drop ACLs in which case the connection must be committed with
> -         * ct_label.blocked set.
> -         */
> -        ovn_lflow_add(lflows, od, stage, 1, "ct.est && ct_label.blocked == 0",
> -                      REGBIT_ACL_HINT_BLOCK " = 1; "
> -                      "next;");
>       }
>   }
>   
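
A compact summary of the hint flows above, highest priority first, since the
relative ordering matters:

    prio 7: ct.new && !ct.est                                    -> ALLOW_NEW, DROP
    prio 6: !ct.new && ct.est && !ct.rpl && ct_label.blocked==1  -> ALLOW_NEW, DROP
    prio 5: !ct.trk                                              -> ALLOW, DROP
    prio 4: !ct.new && ct.est && !ct.rpl && ct_label.blocked==0  -> ALLOW, BLOCK
    prio 3: !ct.est                                              -> DROP
    prio 2: ct.est && ct_label.blocked==1                        -> DROP
    prio 1: ct.est && ct_label.blocked==0                        -> BLOCK

all with "next;" as the action, plus the catch-all "next;" at priority 65535
or 0 depending on whether the datapath has any ACLs or LB VIPs.
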
> @@ -5770,115 +4826,7 @@ static void
>   build_acls(struct ovn_datapath *od, struct hmap *lflows,
>              struct hmap *port_groups, const struct shash *meter_groups)
>   {
> -    bool has_stateful = od->has_stateful_acl || od->has_lb_vip;
> -
> -    /* Ingress and Egress ACL Table (Priority 0): Packets are allowed by
> -     * default.  If the logical switch has no ACLs and no load balancers,
> -     * then add 65535-priority flow to advance the packet to next
> -     * stage.
> -     *
> -     * A related rule at priority 1 is added below if there
> -     * are any stateful ACLs in this datapath. */
> -    if (!od->has_acls && !od->has_lb_vip) {
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_ACL, UINT16_MAX, "1", "next;");
> -        ovn_lflow_add(lflows, od, S_SWITCH_OUT_ACL, UINT16_MAX, "1", "next;");
> -    } else {
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_ACL, 0, "1", "next;");
> -        ovn_lflow_add(lflows, od, S_SWITCH_OUT_ACL, 0, "1", "next;");
> -    }
> -
> -    if (has_stateful) {
> -        /* Ingress and Egress ACL Table (Priority 1).
> -         *
> -         * By default, traffic is allowed.  This is partially handled by
> -         * the Priority 0 ACL flows added earlier, but we also need to
> -     * commit IP flows.  This is because, while the initiator's
> -         * direction may not have any stateful rules, the server's may
> -         * and then its return traffic would not have an associated
> -         * conntrack entry and would return "+invalid".
> -         *
> -         * We use "ct_commit" for a connection that is not already known
> -         * by the connection tracker.  Once a connection is committed,
> -         * subsequent packets will hit the flow at priority 0 that just
> -         * uses "next;"
> -         *
> -         * We also check for established connections that have ct_label.blocked
> -         * set on them.  That's a connection that was disallowed, but is
> -         * now allowed by policy again since it hit this default-allow flow.
> -         * We need to set ct_label.blocked=0 to let the connection continue,
> -         * which will be done by ct_commit() in the "stateful" stage.
> -         * Subsequent packets will hit the flow at priority 0 that just
> -         * uses "next;". */
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_ACL, 1,
> -                      "ip && (!ct.est || (ct.est && ct_label.blocked == 1))",
> -                       REGBIT_CONNTRACK_COMMIT" = 1; next;");
> -        ovn_lflow_add(lflows, od, S_SWITCH_OUT_ACL, 1,
> -                      "ip && (!ct.est || (ct.est && ct_label.blocked == 1))",
> -                       REGBIT_CONNTRACK_COMMIT" = 1; next;");
> -
> -        /* Ingress and Egress ACL Table (Priority 65532).
> -         *
> -         * Always drop traffic that's in an invalid state.  Also drop
> -         * reply direction packets for connections that have been marked
> -         * for deletion (bit 0 of ct_label is set).
> -         *
> -         * This is enforced at a higher priority than ACLs can be defined. */
> -        char *match =
> -            xasprintf("%s(ct.est && ct.rpl && ct_label.blocked == 1)",
> -                      use_ct_inv_match ? "ct.inv || " : "");
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_ACL, UINT16_MAX - 3,
> -                      match, "drop;");
> -        ovn_lflow_add(lflows, od, S_SWITCH_OUT_ACL, UINT16_MAX - 3,
> -                      match, "drop;");
> -        free(match);
> -
> -        /* Ingress and Egress ACL Table (Priority 65535 - 3).
> -         *
> -         * Allow reply traffic that is part of an established
> -         * conntrack entry that has not been marked for deletion
> -         * (bit 0 of ct_label).  We only match traffic in the
> -         * reply direction because we want traffic in the request
> -         * direction to hit the currently defined policy from ACLs.
> -         *
> -         * This is enforced at a higher priority than ACLs can be defined. */
> -        match = xasprintf("ct.est && !ct.rel && !ct.new%s && "
> -                          "ct.rpl && ct_label.blocked == 0",
> -                          use_ct_inv_match ? " && !ct.inv" : "");
> -
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_ACL, UINT16_MAX - 3,
> -                      match, "next;");
> -        ovn_lflow_add(lflows, od, S_SWITCH_OUT_ACL, UINT16_MAX - 3,
> -                      match, "next;");
> -        free(match);
> -
> -        /* Ingress and Egress ACL Table (Priority 65532).
> -         *
> -         * Allow traffic that is related to an existing conntrack entry that
> -         * has not been marked for deletion (bit 0 of ct_label).
> -         *
> -         * This is enforced at a higher priority than ACLs can be defined.
> -         *
> -         * NOTE: This does not support related data sessions (eg,
> -         * a dynamically negotiated FTP data channel), but will allow
> -         * related traffic such as an ICMP Port Unreachable through
> -         * that's generated from a non-listening UDP port.  */
> -        match = xasprintf("!ct.est && ct.rel && !ct.new%s && "
> -                          "ct_label.blocked == 0",
> -                          use_ct_inv_match ? " && !ct.inv" : "");
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_ACL, UINT16_MAX - 3,
> -                      match, "next;");
> -        ovn_lflow_add(lflows, od, S_SWITCH_OUT_ACL, UINT16_MAX - 3,
> -                      match, "next;");
> -        free(match);
> -
> -        /* Ingress and Egress ACL Table (Priority 65532).
> -         *
> -         * Don't apply conntrack to ND packets. */
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_ACL, UINT16_MAX - 3,
> -                      "nd || nd_ra || nd_rs || mldv1 || mldv2", "next;");
> -        ovn_lflow_add(lflows, od, S_SWITCH_OUT_ACL, UINT16_MAX - 3,
> -                      "nd || nd_ra || nd_rs || mldv1 || mldv2", "next;");
> -    }
> +    bool has_stateful = (od->has_stateful_acl || od->has_lb_vip);
>   
>       /* Ingress or Egress ACL Table (Various priorities). */
>       for (size_t i = 0; i < od->nbs->n_acls; i++) {
> @@ -5953,37 +4901,22 @@ build_acls(struct ovn_datapath *od, struct hmap *lflows,
>           }
>       }
>   
> -    /* Add a 34000 priority flow to advance the DNS reply from ovn-controller,
> -     * if the CMS has configured DNS records for the datapath.
> -     */
> -    if (ls_has_dns_records(od->nbs)) {
> -        const char *actions = has_stateful ? "ct_commit; next;" : "next;";
> -        ovn_lflow_add(
> -            lflows, od, S_SWITCH_OUT_ACL, 34000, "udp.src == 53",
> -            actions);
> -    }
> -
> -
>       if (od->has_acls || od->has_lb_vip) {
>           /* Add a 34000 priority flow to advance the service monitor reply
>           * packets to skip applying ingress ACLs. */
>           ovn_lflow_add(lflows, od, S_SWITCH_IN_ACL, 34000,
> -                    "eth.dst == $svc_monitor_mac", "next;");
> +                      "eth.dst == $svc_monitor_mac", "next;");
>   
>           /* Add a 34000 priority flow to advance the service monitor packets
>           * generated by ovn-controller to skip applying egress ACLs. */
>           ovn_lflow_add(lflows, od, S_SWITCH_OUT_ACL, 34000,
> -                    "eth.src == $svc_monitor_mac", "next;");
> +                      "eth.src == $svc_monitor_mac", "next;");
>       }
>   }
>   
>   static void
> -build_qos(struct ovn_datapath *od, struct hmap *lflows) {
> -    ovn_lflow_add(lflows, od, S_SWITCH_IN_QOS_MARK, 0, "1", "next;");
> -    ovn_lflow_add(lflows, od, S_SWITCH_OUT_QOS_MARK, 0, "1", "next;");
> -    ovn_lflow_add(lflows, od, S_SWITCH_IN_QOS_METER, 0, "1", "next;");
> -    ovn_lflow_add(lflows, od, S_SWITCH_OUT_QOS_METER, 0, "1", "next;");
> -
> +build_qos(struct ovn_datapath *od, struct hmap *lflows)
> +{
>       for (size_t i = 0; i < od->nbs->n_qos_rules; i++) {
>           struct nbrec_qos *qos = od->nbs->qos_rules[i];
>           bool ingress = !strcmp(qos->direction, "from-lport") ? true :false;
> @@ -6046,8 +4979,15 @@ build_lb_rules(struct ovn_datapath *od, struct hmap *lflows,
>       struct ds match = DS_EMPTY_INITIALIZER;
>   
>       for (size_t i = 0; i < lb->n_vips; i++) {
> -        struct ovn_lb_vip *lb_vip = &lb->vips[i];
>           struct ovn_northd_lb_vip *lb_vip_nb = &lb->vips_nb[i];
> +        if (!lb_vip_nb->lb_health_check) {
> +            /* Only add lflows if a health check is configured on the vip.
> +             * ovn-controller will add the lflows if no health check is
> +             * configured. */
> +            continue;
> +        }
> +
> +        struct ovn_lb_vip *lb_vip = &lb->vips[i];
>           const char *ip_match = NULL;
>   
>           ds_clear(&action);
> @@ -6108,22 +5048,6 @@ build_lb_rules(struct ovn_datapath *od, struct hmap *lflows,
>   static void
>   build_stateful(struct ovn_datapath *od, struct hmap *lflows, struct hmap *lbs)
>   {
> -    /* Ingress and Egress stateful Table (Priority 0): Packets are
> -     * allowed by default. */
> -    ovn_lflow_add(lflows, od, S_SWITCH_IN_STATEFUL, 0, "1", "next;");
> -    ovn_lflow_add(lflows, od, S_SWITCH_OUT_STATEFUL, 0, "1", "next;");
> -
> -    /* If REGBIT_CONNTRACK_COMMIT is set as 1, then the packets should be
> -     * committed to conntrack. We always set ct_label.blocked to 0 here as
> -     * any packet that makes it this far is part of a connection we
> -     * want to allow to continue. */
> -    ovn_lflow_add(lflows, od, S_SWITCH_IN_STATEFUL, 100,
> -                  REGBIT_CONNTRACK_COMMIT" == 1",
> -                  "ct_commit { ct_label.blocked = 0; }; next;");
> -    ovn_lflow_add(lflows, od, S_SWITCH_OUT_STATEFUL, 100,
> -                  REGBIT_CONNTRACK_COMMIT" == 1",
> -                  "ct_commit { ct_label.blocked = 0; }; next;");
> -
>       /* Load balancing rules for new connections get committed to conntrack
>        * table.  So even if REGBIT_CONNTRACK_COMMIT is set in a previous table
>        * a higher priority rule for load balancing below also commits the
> @@ -6138,63 +5062,6 @@ build_stateful(struct ovn_datapath *od, struct hmap *lflows, struct hmap *lbs)
>       }
>   }
>   
> -static void
> -build_lb_hairpin(struct ovn_datapath *od, struct hmap *lflows)
> -{
> -    /* Ingress Pre-Hairpin/Nat-Hairpin/Hairpin tables (Priority 0).
> -     * Packets that don't need hairpinning should continue processing.
> -     */
> -    ovn_lflow_add(lflows, od, S_SWITCH_IN_PRE_HAIRPIN, 0, "1", "next;");
> -    ovn_lflow_add(lflows, od, S_SWITCH_IN_NAT_HAIRPIN, 0, "1", "next;");
> -    ovn_lflow_add(lflows, od, S_SWITCH_IN_HAIRPIN, 0, "1", "next;");
> -
> -    if (od->has_lb_vip) {
> -        /* Check if the packet needs to be hairpinned.
> -         * Set REGBIT_HAIRPIN in the original direction and
> -         * REGBIT_HAIRPIN_REPLY in the reply direction.
> -         */
> -        ovn_lflow_add_with_hint(
> -            lflows, od, S_SWITCH_IN_PRE_HAIRPIN, 100, "ip && ct.trk",
> -            REGBIT_HAIRPIN " = chk_lb_hairpin(); "
> -            REGBIT_HAIRPIN_REPLY " = chk_lb_hairpin_reply(); "
> -            "next;",
> -            &od->nbs->header_);
> -
> -        /* If packet needs to be hairpinned, snat the src ip with the VIP
> -         * for new sessions. */
> -        ovn_lflow_add_with_hint(lflows, od, S_SWITCH_IN_NAT_HAIRPIN, 100,
> -                                "ip && ct.new && ct.trk"
> -                                " && "REGBIT_HAIRPIN " == 1",
> -                                "ct_snat_to_vip; next;",
> -                                &od->nbs->header_);
> -
> -        /* If packet needs to be hairpinned, for established sessions there
> -         * should already be an SNAT conntrack entry.
> -         */
> -        ovn_lflow_add_with_hint(lflows, od, S_SWITCH_IN_NAT_HAIRPIN, 100,
> -                                "ip && ct.est && ct.trk"
> -                                " && "REGBIT_HAIRPIN " == 1",
> -                                "ct_snat;",
> -                                &od->nbs->header_);
> -
> -        /* For the reply of hairpinned traffic, snat the src ip to the VIP. */
> -        ovn_lflow_add_with_hint(lflows, od, S_SWITCH_IN_NAT_HAIRPIN, 90,
> -                                "ip && "REGBIT_HAIRPIN_REPLY " == 1",
> -                                "ct_snat;",
> -                                &od->nbs->header_);
> -
> -        /* Ingress Hairpin table.
> -        * - Priority 1: Packets that were SNAT-ed for hairpinning should be
> -        *   looped back (i.e., swap ETH addresses and send back on inport).
> -        */
> -        ovn_lflow_add(
> -            lflows, od, S_SWITCH_IN_HAIRPIN, 1,
> -            "("REGBIT_HAIRPIN " == 1 || " REGBIT_HAIRPIN_REPLY " == 1)",
> -            "eth.dst <-> eth.src; outport = inport; flags.loopback = 1; "
> -            "output;");
> -    }
> -}
> -
>   /* Build logical flows for the forwarding groups */
>   static void
>   build_fwd_group_lflows(struct ovn_datapath *od, struct hmap *lflows)
> @@ -6848,47 +5715,6 @@ build_drop_arp_nd_flows_for_unbound_router_ports(struct ovn_port *op,
>       ds_destroy(&match);
>   }
>   
> -static bool
> -is_vlan_transparent(const struct ovn_datapath *od)
> -{
> -    return smap_get_bool(&od->nbs->other_config, "vlan-passthru", false);
> -}
> -
> -static void
> -build_lswitch_flows(struct hmap *datapaths, struct hmap *lflows)
> -{
> -    /* This flow table structure is documented in ovn-northd(8), so please
> -     * update ovn-northd.8.xml if you change anything. */
> -
> -    struct ds match = DS_EMPTY_INITIALIZER;
> -    struct ds actions = DS_EMPTY_INITIALIZER;
> -    struct ovn_datapath *od;
> -
> -    /* Ingress table 23: Destination lookup for unknown MACs (priority 0). */
> -    HMAP_FOR_EACH (od, key_node, datapaths) {
> -        if (!od->nbs) {
> -            continue;
> -        }
> -
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_L2_LKUP, 0, "1",
> -                      "outport = get_fdb(eth.dst); next;");
> -
> -        if (od->has_unknown) {
> -            ovn_lflow_add_unique(lflows, od, S_SWITCH_IN_L2_UNKNOWN, 50,
> -                                 "outport == \"none\"",
> -                                 "outport = \""MC_UNKNOWN "\"; output;");
> -        } else {
> -            ovn_lflow_add(lflows, od, S_SWITCH_IN_L2_UNKNOWN, 50,
> -                          "outport == \"none\"", "drop;");
> -        }
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_L2_UNKNOWN, 0, "1",
> -                      "output;");
> -    }
> -
> -    ds_destroy(&match);
> -    ds_destroy(&actions);
> -}
> -
>   /* Build pre-ACL and ACL tables for both ingress and egress.
>    * Ingress tables 3 through 10.  Egress tables 0 through 7. */
>   static void
> @@ -6904,238 +5730,9 @@ build_lswitch_lflows_pre_acl_and_acl(struct ovn_datapath *od,
>   
>           build_pre_acls(od, port_groups, lflows);
>           build_pre_lb(od, lflows, meter_groups, lbs);
> -        build_pre_stateful(od, lflows);
> -        build_acl_hints(od, lflows);
>           build_acls(od, lflows, port_groups, meter_groups);
>           build_qos(od, lflows);
>           build_stateful(od, lflows, lbs);
> -        build_lb_hairpin(od, lflows);
> -    }
> -}
> -
> -/* Logical switch ingress table 0: Admission control framework (priority
> - * 100). */
> -static void
> -build_lswitch_lflows_admission_control(struct ovn_datapath *od,
> -                                       struct hmap *lflows)
> -{
> -    if (od->nbs) {
> -        /* Logical VLANs not supported. */
> -        if (!is_vlan_transparent(od)) {
> -            /* Block logical VLANs. */
> -            ovn_lflow_add(lflows, od, S_SWITCH_IN_PORT_SEC_L2, 100,
> -                          "vlan.present", "drop;");
> -        }
> -
> -        /* Broadcast/multicast source address is invalid. */
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_PORT_SEC_L2, 100, "eth.src[40]",
> -                      "drop;");
> -
> -        /* Port security flows have priority 50
> -         * (see build_lswitch_input_port_sec()) and will continue
> -         * to the next table if packet source is acceptable. */
> -    }
> -}
> -
> -/* Ingress table 13: ARP/ND responder, skip requests coming from localnet
> - * and vtep ports. (priority 100); see ovn-northd.8.xml for the
> - * rationale. */
> -
> -static void
> -build_lswitch_arp_nd_responder_skip_local(struct ovn_port *op,
> -                                          struct hmap *lflows,
> -                                          struct ds *match)
> -{
> -    if (op->nbsp) {
> -        if ((!strcmp(op->nbsp->type, "localnet")) ||
> -            (!strcmp(op->nbsp->type, "vtep"))) {
> -            ds_clear(match);
> -            ds_put_format(match, "inport == %s", op->json_key);
> -            ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_ARP_ND_RSP,
> -                                    100, ds_cstr(match), "next;",
> -                                    &op->nbsp->header_);
> -        }
> -    }
> -}
> -
> -/* Ingress table 13: ARP/ND responder, reply for known IPs.
> - * (priority 50). */
> -static void
> -build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,
> -                                         struct hmap *lflows,
> -                                         struct hmap *ports,
> -                                         struct ds *actions,
> -                                         struct ds *match)
> -{
> -    if (op->nbsp) {
> -        if (!strcmp(op->nbsp->type, "virtual")) {
> -            /* Handle
> -             *  - GARPs for virtual ip which belongs to a logical port
> -             *    of type 'virtual' and bind that port.
> -             *
> -             *  - ARP reply from the virtual ip which belongs to a logical
> -             *    port of type 'virtual' and bind that port.
> -             * */
> -            ovs_be32 ip;
> -            const char *virtual_ip = smap_get(&op->nbsp->options,
> -                                              "virtual-ip");
> -            const char *virtual_parents = smap_get(&op->nbsp->options,
> -                                                   "virtual-parents");
> -            if (!virtual_ip || !virtual_parents ||
> -                !ip_parse(virtual_ip, &ip)) {
> -                return;
> -            }
> -
> -            char *tokstr = xstrdup(virtual_parents);
> -            char *save_ptr = NULL;
> -            char *vparent;
> -            for (vparent = strtok_r(tokstr, ",", &save_ptr); vparent != NULL;
> -                 vparent = strtok_r(NULL, ",", &save_ptr)) {
> -                struct ovn_port *vp = ovn_port_find(ports, vparent);
> -                if (!vp || vp->od != op->od) {
> -                    /* vparent name should be valid and it should belong
> -                     * to the same logical switch. */
> -                    continue;
> -                }
> -
> -                ds_clear(match);
> -                ds_put_format(match, "inport == \"%s\" && "
> -                              "((arp.op == 1 && arp.spa == %s && "
> -                              "arp.tpa == %s) || (arp.op == 2 && "
> -                              "arp.spa == %s))",
> -                              vparent, virtual_ip, virtual_ip,
> -                              virtual_ip);
> -                ds_clear(actions);
> -                ds_put_format(actions,
> -                    "bind_vport(%s, inport); "
> -                    "next;",
> -                    op->json_key);
> -                ovn_lflow_add_with_hint(lflows, op->od,
> -                                        S_SWITCH_IN_ARP_ND_RSP, 100,
> -                                        ds_cstr(match), ds_cstr(actions),
> -                                        &vp->nbsp->header_);
> -            }
> -
> -            free(tokstr);
> -        } else {
> -            /*
> -             * Add ARP/ND reply flows if either the
> -             *  - port is up and it doesn't have 'unknown' address defined or
> -             *  - port type is router or
> -             *  - port type is localport
> -             */
> -            if (check_lsp_is_up &&
> -                !lsp_is_up(op->nbsp) && !lsp_is_router(op->nbsp) &&
> -                strcmp(op->nbsp->type, "localport")) {
> -                return;
> -            }
> -
> -            if (lsp_is_external(op->nbsp) || op->has_unknown) {
> -                return;
> -            }
> -
> -            if (is_vlan_transparent(op->od)) {
> -                return;
> -            }
> -
> -            for (size_t i = 0; i < op->n_lsp_addrs; i++) {
> -                for (size_t j = 0; j < op->lsp_addrs[i].n_ipv4_addrs; j++) {
> -                    ds_clear(match);
> -                    ds_put_format(match, "arp.tpa == %s && arp.op == 1",
> -                                op->lsp_addrs[i].ipv4_addrs[j].addr_s);
> -                    ds_clear(actions);
> -                    ds_put_format(actions,
> -                        "eth.dst = eth.src; "
> -                        "eth.src = %s; "
> -                        "arp.op = 2; /* ARP reply */ "
> -                        "arp.tha = arp.sha; "
> -                        "arp.sha = %s; "
> -                        "arp.tpa = arp.spa; "
> -                        "arp.spa = %s; "
> -                        "outport = inport; "
> -                        "flags.loopback = 1; "
> -                        "output;",
> -                        op->lsp_addrs[i].ea_s, op->lsp_addrs[i].ea_s,
> -                        op->lsp_addrs[i].ipv4_addrs[j].addr_s);
> -                    ovn_lflow_add_with_hint(lflows, op->od,
> -                                            S_SWITCH_IN_ARP_ND_RSP, 50,
> -                                            ds_cstr(match),
> -                                            ds_cstr(actions),
> -                                            &op->nbsp->header_);
> -
> -                    /* Do not reply to an ARP request from the port that owns
> -                     * the address (otherwise a DHCP client that ARPs to check
> -                     * for a duplicate address will fail).  Instead, forward
> -                     * it the usual way.
> -                     *
> -                     * (Another alternative would be to simply drop the packet.
> -                     * If everything is working as it is configured, then this
> -                     * would produce equivalent results, since no one should
> -                     * reply to the request.  But ARPing for one's own IP
> -                     * address is intended to detect situations where the
> -                     * network is not working as configured, so dropping the
> -                     * request would frustrate that intent.) */
> -                    ds_put_format(match, " && inport == %s", op->json_key);
> -                    ovn_lflow_add_with_hint(lflows, op->od,
> -                                            S_SWITCH_IN_ARP_ND_RSP, 100,
> -                                            ds_cstr(match), "next;",
> -                                            &op->nbsp->header_);
> -                }
> -
> -                /* For ND solicitations, we need to listen for both the
> -                 * unicast IPv6 address and its all-nodes multicast address,
> -                 * but always respond with the unicast IPv6 address. */
> -                for (size_t j = 0; j < op->lsp_addrs[i].n_ipv6_addrs; j++) {
> -                    ds_clear(match);
> -                    ds_put_format(match,
> -                            "nd_ns && ip6.dst == {%s, %s} && nd.target == %s",
> -                            op->lsp_addrs[i].ipv6_addrs[j].addr_s,
> -                            op->lsp_addrs[i].ipv6_addrs[j].sn_addr_s,
> -                            op->lsp_addrs[i].ipv6_addrs[j].addr_s);
> -
> -                    ds_clear(actions);
> -                    ds_put_format(actions,
> -                            "%s { "
> -                            "eth.src = %s; "
> -                            "ip6.src = %s; "
> -                            "nd.target = %s; "
> -                            "nd.tll = %s; "
> -                            "outport = inport; "
> -                            "flags.loopback = 1; "
> -                            "output; "
> -                            "};",
> -                            lsp_is_router(op->nbsp) ? "nd_na_router" : "nd_na",
> -                            op->lsp_addrs[i].ea_s,
> -                            op->lsp_addrs[i].ipv6_addrs[j].addr_s,
> -                            op->lsp_addrs[i].ipv6_addrs[j].addr_s,
> -                            op->lsp_addrs[i].ea_s);
> -                    ovn_lflow_add_with_hint(lflows, op->od,
> -                                            S_SWITCH_IN_ARP_ND_RSP, 50,
> -                                            ds_cstr(match),
> -                                            ds_cstr(actions),
> -                                            &op->nbsp->header_);
> -
> -                    /* Do not reply to a solicitation from the port that owns
> -                     * the address (otherwise DAD detection will fail). */
> -                    ds_put_format(match, " && inport == %s", op->json_key);
> -                    ovn_lflow_add_with_hint(lflows, op->od,
> -                                            S_SWITCH_IN_ARP_ND_RSP, 100,
> -                                            ds_cstr(match), "next;",
> -                                            &op->nbsp->header_);
> -                }
> -            }
> -        }
> -    }
> -}
> -
> -/* Ingress table 13: ARP/ND responder, by default goto next.
> - * (priority 0)*/
> -static void
> -build_lswitch_arp_nd_responder_default(struct ovn_datapath *od,
> -                                       struct hmap *lflows)
> -{
> -    if (od->nbs) {
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_ARP_ND_RSP, 0, "1", "next;");
>       }
>   }
>   
> @@ -7236,51 +5833,6 @@ build_lswitch_dhcp_options_and_response(struct ovn_port *op,
>       }
>   }
>   
> -/* Ingress table 14 and 15: DHCP options and response, by default goto
> - * next. (priority 0).
> - * Ingress table 16 and 17: DNS lookup and response, by default goto next.
> - * (priority 0).
> - * Ingress table 18 - External port handling, by default goto next.
> - * (priority 0). */
> -static void
> -build_lswitch_dhcp_and_dns_defaults(struct ovn_datapath *od,
> -                                        struct hmap *lflows)
> -{
> -    if (od->nbs) {
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_DHCP_OPTIONS, 0, "1", "next;");
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_DHCP_RESPONSE, 0, "1", "next;");
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_DNS_LOOKUP, 0, "1", "next;");
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_DNS_RESPONSE, 0, "1", "next;");
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_EXTERNAL_PORT, 0, "1", "next;");
> -    }
> -}
> -
> -/* Logical switch ingress table 17 and 18: DNS lookup and response
> -* priority 100 flows.
> -*/
> -static void
> -build_lswitch_dns_lookup_and_response(struct ovn_datapath *od,
> -                                      struct hmap *lflows)
> -{
> -    if (od->nbs && ls_has_dns_records(od->nbs)) {
> -
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_DNS_LOOKUP, 100,
> -                      "udp.dst == 53",
> -                      REGBIT_DNS_LOOKUP_RESULT" = dns_lookup(); next;");
> -        const char *dns_action = "eth.dst <-> eth.src; ip4.src <-> ip4.dst; "
> -                      "udp.dst = udp.src; udp.src = 53; outport = inport; "
> -                      "flags.loopback = 1; output;";
> -        const char *dns_match = "udp.dst == 53 && "REGBIT_DNS_LOOKUP_RESULT;
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_DNS_RESPONSE, 100,
> -                      dns_match, dns_action);
> -        dns_action = "eth.dst <-> eth.src; ip6.src <-> ip6.dst; "
> -                      "udp.dst = udp.src; udp.src = 53; outport = inport; "
> -                      "flags.loopback = 1; output;";
> -        ovn_lflow_add(lflows, od, S_SWITCH_IN_DNS_RESPONSE, 100,
> -                      dns_match, dns_action);
> -    }
> -}
> -
>   /* Table 18: External port. Drop ARP request for router ips from
>    * external ports  on chassis not binding those ports.
>    * This makes the router pipeline to be run only on the chassis
> @@ -7491,16 +6043,7 @@ build_lswitch_ip_unicast_lookup(struct ovn_port *op,
>               struct eth_addr mac;
>               if (ovs_scan(op->nbsp->addresses[i],
>                           ETH_ADDR_SCAN_FMT, ETH_ADDR_SCAN_ARGS(mac))) {
> -                ds_clear(match);
> -                ds_put_format(match, "eth.dst == "ETH_ADDR_FMT,
> -                              ETH_ADDR_ARGS(mac));
> -
> -                ds_clear(actions);
> -                ds_put_format(actions, "outport = %s; output;", op->json_key);
> -                ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_L2_LKUP,
> -                                        50, ds_cstr(match),
> -                                        ds_cstr(actions),
> -                                        &op->nbsp->header_);
> +                /* Do nothing. */
>               } else if (!strcmp(op->nbsp->addresses[i], "unknown")) {
>                   if (lsp_is_enabled(op->nbsp)) {
>                       ovs_mutex_lock(&mcgroup_mutex);
> @@ -7509,21 +6052,7 @@ build_lswitch_ip_unicast_lookup(struct ovn_port *op,
>                       op->od->has_unknown = true;
>                   }
>               } else if (is_dynamic_lsp_address(op->nbsp->addresses[i])) {
> -                if (!op->nbsp->dynamic_addresses
> -                    || !ovs_scan(op->nbsp->dynamic_addresses,
> -                            ETH_ADDR_SCAN_FMT, ETH_ADDR_SCAN_ARGS(mac))) {
> -                    continue;
> -                }
> -                ds_clear(match);
> -                ds_put_format(match, "eth.dst == "ETH_ADDR_FMT,
> -                              ETH_ADDR_ARGS(mac));
> -
> -                ds_clear(actions);
> -                ds_put_format(actions, "outport = %s; output;", op->json_key);
> -                ovn_lflow_add_with_hint(lflows, op->od, S_SWITCH_IN_L2_LKUP,
> -                                        50, ds_cstr(match),
> -                                        ds_cstr(actions),
> -                                        &op->nbsp->header_);
> +                /* Do nothing. */
>               } else if (!strcmp(op->nbsp->addresses[i], "router")) {
>                   if (!op->peer || !op->peer->nbrp
>                       || !ovs_scan(op->peer->nbrp->mac,
> @@ -8653,43 +7182,6 @@ build_static_route_flow(struct hmap *lflows, struct ovn_datapath *od,
>       free(prefix_s);
>   }
>   
> -static void
> -op_put_v4_networks(struct ds *ds, const struct ovn_port *op, bool add_bcast)
> -{
> -    if (!add_bcast && op->lrp_networks.n_ipv4_addrs == 1) {
> -        ds_put_format(ds, "%s", op->lrp_networks.ipv4_addrs[0].addr_s);
> -        return;
> -    }
> -
> -    ds_put_cstr(ds, "{");
> -    for (int i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
> -        ds_put_format(ds, "%s, ", op->lrp_networks.ipv4_addrs[i].addr_s);
> -        if (add_bcast) {
> -            ds_put_format(ds, "%s, ", op->lrp_networks.ipv4_addrs[i].bcast_s);
> -        }
> -    }
> -    ds_chomp(ds, ' ');
> -    ds_chomp(ds, ',');
> -    ds_put_cstr(ds, "}");
> -}
> -
> -static void
> -op_put_v6_networks(struct ds *ds, const struct ovn_port *op)
> -{
> -    if (op->lrp_networks.n_ipv6_addrs == 1) {
> -        ds_put_format(ds, "%s", op->lrp_networks.ipv6_addrs[0].addr_s);
> -        return;
> -    }
> -
> -    ds_put_cstr(ds, "{");
> -    for (int i = 0; i < op->lrp_networks.n_ipv6_addrs; i++) {
> -        ds_put_format(ds, "%s, ", op->lrp_networks.ipv6_addrs[i].addr_s);
> -    }
> -    ds_chomp(ds, ' ');
> -    ds_chomp(ds, ',');
> -    ds_put_cstr(ds, "}");
> -}
> -
>   static bool
>   get_force_snat_ip(struct ovn_datapath *od, const char *key_type,
>                     struct lport_addresses *laddrs)
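
For readers who don't have OVS's dynamic-string helpers in their head: the two
removed helpers above build a brace-enclosed, comma-separated address set (used
in matches such as "ip4.src == {...}" further down) by appending "addr, " per
address and then chomping the trailing separator with ds_chomp().  Below is a
minimal standalone sketch of the same idiom using only the C standard library
instead of struct ds; put_networks() and the fixed-size buffer are illustrative
only, not OVN code.

#include <stdio.h>
#include <string.h>

/* Join 'n' address strings into "{a, b, c}", mirroring the append-then-chomp
 * pattern of op_put_v4_networks()/op_put_v6_networks().  'buf' is assumed
 * to be large enough; the real helpers grow a struct ds instead. */
static void
put_networks(char *buf, const char **addrs, size_t n)
{
    strcpy(buf, "{");
    for (size_t i = 0; i < n; i++) {
        strcat(buf, addrs[i]);
        strcat(buf, ", ");
    }
    size_t len = strlen(buf);
    if (len >= 2 && !strcmp(buf + len - 2, ", ")) {
        buf[len - 2] = '\0';     /* Chomp trailing ", ", like ds_chomp(). */
    }
    strcat(buf, "}");
}

int
main(void)
{
    const char *addrs[] = { "10.0.0.1", "10.0.0.254" };
    char buf[128];

    put_networks(buf, addrs, 2);
    printf("%s\n", buf);         /* Prints {10.0.0.1, 10.0.0.254}. */
    return 0;
}
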
> @@ -8845,7 +7337,6 @@ build_lrouter_lb_flows(struct hmap *lflows, struct ovn_datapath *od,
>                          struct ds *actions)
>   {
>       /* A set to hold all ips that need defragmentation and tracking. */
> -    struct sset all_ips = SSET_INITIALIZER(&all_ips);
>       bool lb_force_snat_ip =
>           !lport_addresses_is_empty(&od->lb_force_snat_addrs);
>   
> @@ -8856,10 +7347,6 @@ build_lrouter_lb_flows(struct hmap *lflows, struct ovn_datapath *od,
>           ovs_assert(lb);
>   
>           bool lb_skip_snat = smap_get_bool(&nb_lb->options, "skip_snat", false);
> -        if (lb_skip_snat) {
> -            ovn_lflow_add(lflows, od, S_ROUTER_OUT_SNAT, 120,
> -                          "flags.skip_snat_for_lb == 1 && ip", "next;");
> -        }
>   
>           for (size_t j = 0; j < lb->n_vips; j++) {
>               struct ovn_lb_vip *lb_vip = &lb->vips[j];
> @@ -8868,29 +7355,6 @@ build_lrouter_lb_flows(struct hmap *lflows, struct ovn_datapath *od,
>               build_lb_vip_actions(lb_vip, lb_vip_nb, actions,
>                                    lb->selection_fields, false);
>   
> -            if (!sset_contains(&all_ips, lb_vip->vip_str)) {
> -                sset_add(&all_ips, lb_vip->vip_str);
> -                /* If there are any load balancing rules, we should send
> -                 * the packet to conntrack for defragmentation and
> -                 * tracking.  This helps with two things.
> -                 *
> -                 * 1. With tracking, we can send only new connections to
> -                 *    pick a DNAT ip address from a group.
> -                 * 2. If there are L4 ports in load balancing rules, we
> -                 *    need the defragmentation to match on L4 ports. */
> -                ds_clear(match);
> -                if (IN6_IS_ADDR_V4MAPPED(&lb_vip->vip)) {
> -                    ds_put_format(match, "ip && ip4.dst == %s",
> -                                  lb_vip->vip_str);
> -                } else {
> -                    ds_put_format(match, "ip && ip6.dst == %s",
> -                                  lb_vip->vip_str);
> -                }
> -                ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_DEFRAG,
> -                                        100, ds_cstr(match), "ct_next;",
> -                                        &nb_lb->header_);
> -            }
> -
>               /* Higher priority rules are added for load-balancing in DNAT
>                * table.  For every match (on a VIP[:port]), we add two flows
>                * via add_router_lb_flow().  One flow is for specific matching
> @@ -8934,7 +7398,6 @@ build_lrouter_lb_flows(struct hmap *lflows, struct ovn_datapath *od,
>                                  meter_groups, nat_entries);
>           }
>       }
> -    sset_destroy(&all_ips);
>   }
>   
>   #define ND_RA_MAX_INTERVAL_MAX 1800
> @@ -8944,13 +7407,12 @@ build_lrouter_lb_flows(struct hmap *lflows, struct ovn_datapath *od,
>   #define ND_RA_MIN_INTERVAL_MIN 3
>   
>   static void
> -copy_ra_to_sb(struct ovn_port *op, const char *address_mode)
> +copy_ra_to_sb(const struct ovn_port *op)
>   {
>       struct smap options;
>       smap_clone(&options, &op->sb->options);
>   
>       smap_add(&options, "ipv6_ra_send_periodic", "true");
> -    smap_add(&options, "ipv6_ra_address_mode", address_mode);
>   
>       int max_interval = smap_get_int(&op->nbrp->ipv6_ra_configs,
>               "max_interval", ND_RA_MAX_INTERVAL_DEFAULT);
> @@ -9389,359 +7851,7 @@ build_lrouter_force_snat_flows(struct hmap *lflows, struct ovn_datapath *od,
>   }
>   
>   static void
> -build_lrouter_force_snat_flows_op(struct ovn_port *op,
> -                                  struct hmap *lflows,
> -                                  struct ds *match, struct ds *actions)
> -{
> -    if (!op->nbrp || !op->peer || !op->od->lb_force_snat_router_ip) {
> -        return;
> -    }
> -
> -    if (op->lrp_networks.n_ipv4_addrs) {
> -        ds_clear(match);
> -        ds_clear(actions);
> -
> -        ds_put_format(match, "inport == %s && ip4.dst == %s",
> -                      op->json_key, op->lrp_networks.ipv4_addrs[0].addr_s);
> -        ovn_lflow_add(lflows, op->od, S_ROUTER_IN_UNSNAT, 110,
> -                      ds_cstr(match), "ct_snat;");
> -
> -        ds_clear(match);
> -
> -        /* Higher priority rules to force SNAT with the router port ip.
> -         * This only takes effect when the packet has already been
> -         * load balanced once. */
> -        ds_put_format(match, "flags.force_snat_for_lb == 1 && ip4 && "
> -                      "outport == %s", op->json_key);
> -        ds_put_format(actions, "ct_snat(%s);",
> -                      op->lrp_networks.ipv4_addrs[0].addr_s);
> -        ovn_lflow_add(lflows, op->od, S_ROUTER_OUT_SNAT, 110,
> -                      ds_cstr(match), ds_cstr(actions));
> -        if (op->lrp_networks.n_ipv4_addrs > 1) {
> -            static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
> -            VLOG_WARN_RL(&rl, "Logical router port %s is configured with "
> -                              "multiple IPv4 addresses.  Only the first "
> -                              "IP [%s] is considered as SNAT for load "
> -                              "balancer", op->json_key,
> -                              op->lrp_networks.ipv4_addrs[0].addr_s);
> -        }
> -    }
> -
> -    /* op->lrp_networks.ipv6_addrs will always have LLA and that will be
> -     * last in the list. So add the flows only if n_ipv6_addrs > 1. */
> -    if (op->lrp_networks.n_ipv6_addrs > 1) {
> -        ds_clear(match);
> -        ds_clear(actions);
> -
> -        ds_put_format(match, "inport == %s && ip6.dst == %s",
> -                      op->json_key, op->lrp_networks.ipv6_addrs[0].addr_s);
> -        ovn_lflow_add(lflows, op->od, S_ROUTER_IN_UNSNAT, 110,
> -                      ds_cstr(match), "ct_snat;");
> -
> -        ds_clear(match);
> -
> -        /* Higher priority rules to force SNAT with the router port ip.
> -         * This only takes effect when the packet has already been
> -         * load balanced once. */
> -        ds_put_format(match, "flags.force_snat_for_lb == 1 && ip6 && "
> -                      "outport == %s", op->json_key);
> -        ds_put_format(actions, "ct_snat(%s);",
> -                      op->lrp_networks.ipv6_addrs[0].addr_s);
> -        ovn_lflow_add(lflows, op->od, S_ROUTER_OUT_SNAT, 110,
> -                      ds_cstr(match), ds_cstr(actions));
> -        if (op->lrp_networks.n_ipv6_addrs > 2) {
> -            static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
> -            VLOG_WARN_RL(&rl, "Logical router port %s is configured with "
> -                              "multiple IPv6 addresses.  Only the first "
> -                              "IP [%s] is considered as SNAT for load "
> -                              "balancer", op->json_key,
> -                              op->lrp_networks.ipv6_addrs[0].addr_s);
> -        }
> -    }
> -}
> -
> -static void
> -build_lrouter_bfd_flows(struct hmap *lflows, struct ovn_port *op)
> -{
> -    if (!op->has_bfd) {
> -        return;
> -    }
> -
> -    struct ds ip_list = DS_EMPTY_INITIALIZER;
> -    struct ds match = DS_EMPTY_INITIALIZER;
> -
> -    if (op->lrp_networks.n_ipv4_addrs) {
> -        op_put_v4_networks(&ip_list, op, false);
> -        ds_put_format(&match, "ip4.src == %s && udp.dst == 3784",
> -                      ds_cstr(&ip_list));
> -        ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_IP_INPUT, 110,
> -                                ds_cstr(&match), "next; ",
> -                                &op->nbrp->header_);
> -        ds_clear(&match);
> -        ds_put_format(&match, "ip4.dst == %s && udp.dst == 3784",
> -                      ds_cstr(&ip_list));
> -        ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_IP_INPUT, 110,
> -                                ds_cstr(&match), "handle_bfd_msg(); ",
> -                                &op->nbrp->header_);
> -    }
> -    if (op->lrp_networks.n_ipv6_addrs) {
> -        ds_clear(&ip_list);
> -        ds_clear(&match);
> -
> -        op_put_v6_networks(&ip_list, op);
> -        ds_put_format(&match, "ip6.src == %s && udp.dst == 3784",
> -                      ds_cstr(&ip_list));
> -        ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_IP_INPUT, 110,
> -                                ds_cstr(&match), "next; ",
> -                                &op->nbrp->header_);
> -        ds_clear(&match);
> -        ds_put_format(&match, "ip6.dst == %s && udp.dst == 3784",
> -                      ds_cstr(&ip_list));
> -        ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_IP_INPUT, 110,
> -                                ds_cstr(&match), "handle_bfd_msg(); ",
> -                                &op->nbrp->header_);
> -    }
> -
> -    ds_destroy(&ip_list);
> -    ds_destroy(&match);
> -}
> -
> -/* Logical router ingress Table 0: L2 Admission Control
> - * Generic admission control flows (without inport check).
> - */
> -static void
> -build_adm_ctrl_flows_for_lrouter(
> -        struct ovn_datapath *od, struct hmap *lflows)
> -{
> -    if (od->nbr) {
> -        /* Logical VLANs not supported.
> -         * Broadcast/multicast source address is invalid. */
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_ADMISSION, 100,
> -                      "vlan.present || eth.src[40]", "drop;");
> -    }
> -}
> -
> -/* Logical router ingress Table 0: L2 Admission Control
> - * This table drops packets that the router shouldn’t see at all based
> - * on their Ethernet headers.
> - */
> -static void
> -build_adm_ctrl_flows_for_lrouter_port(
> -        struct ovn_port *op, struct hmap *lflows,
> -        struct ds *match, struct ds *actions)
> -{
> -    if (op->nbrp) {
> -        if (!lrport_is_enabled(op->nbrp)) {
> -            /* Drop packets from disabled logical ports (since logical flow
> -             * tables are default-drop). */
> -            return;
> -        }
> -
> -        if (op->derived) {
> -            /* No ingress packets should be received on a chassisredirect
> -             * port. */
> -            return;
> -        }
> -
> -        /* Store the ethernet address of the port receiving the packet.
> -         * This will save us from having to match on inport further down in
> -         * the pipeline.
> -         */
> -        ds_clear(actions);
> -        ds_put_format(actions, REG_INPORT_ETH_ADDR " = %s; next;",
> -                      op->lrp_networks.ea_s);
> -
> -        ds_clear(match);
> -        ds_put_format(match, "eth.mcast && inport == %s", op->json_key);
> -        ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_ADMISSION, 50,
> -                                ds_cstr(match), ds_cstr(actions),
> -                                &op->nbrp->header_);
> -
> -        ds_clear(match);
> -        ds_put_format(match, "eth.dst == %s && inport == %s",
> -                      op->lrp_networks.ea_s, op->json_key);
> -        if (op->od->l3dgw_port && op == op->od->l3dgw_port
> -            && op->od->l3redirect_port) {
> -            /* Traffic with eth.dst = l3dgw_port->lrp_networks.ea_s
> -             * should only be received on the gateway chassis. */
> -            ds_put_format(match, " && is_chassis_resident(%s)",
> -                          op->od->l3redirect_port->json_key);
> -        }
> -        ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_ADMISSION, 50,
> -                                ds_cstr(match),  ds_cstr(actions),
> -                                &op->nbrp->header_);
> -    }
> -}
> -
> -
> -/* Logical router ingress Table 1 and 2: Neighbor lookup and learning
> - * lflows for logical routers. */
> -static void
> -build_neigh_learning_flows_for_lrouter(
> -        struct ovn_datapath *od, struct hmap *lflows,
> -        struct ds *match, struct ds *actions)
> -{
> -    if (od->nbr) {
> -
> -        /* Learn MAC bindings from ARP/IPv6 ND.
> -         *
> -         * For ARP packets, table LOOKUP_NEIGHBOR does a lookup for the
> -         * (arp.spa, arp.sha) in the mac binding table using the 'lookup_arp'
> -         * action and stores the result in REGBIT_LOOKUP_NEIGHBOR_RESULT bit.
> -         * If "always_learn_from_arp_request" is set to false, it will also
> -         * lookup for the (arp.spa) in the mac binding table using the
> -         * "lookup_arp_ip" action for ARP request packets, and stores the
> -         * result in REGBIT_LOOKUP_NEIGHBOR_IP_RESULT bit; or set that bit
> -         * to "1" directly for ARP response packets.
> -         *
> -         * For IPv6 ND NA packets, table LOOKUP_NEIGHBOR does a lookup
> -         * for the (nd.target, nd.tll) in the mac binding table using the
> -         * 'lookup_nd' action and stores the result in
> -         * REGBIT_LOOKUP_NEIGHBOR_RESULT bit. If
> -         * "always_learn_from_arp_request" is set to false,
> -         * REGBIT_LOOKUP_NEIGHBOR_IP_RESULT bit is set.
> -         *
> -         * For IPv6 ND NS packets, table LOOKUP_NEIGHBOR does a lookup
> -         * for the (ip6.src, nd.sll) in the mac binding table using the
> -         * 'lookup_nd' action and stores the result in
> -         * REGBIT_LOOKUP_NEIGHBOR_RESULT bit. If
> -         * "always_learn_from_arp_request" is set to false, it will also lookup
> -         * for the (ip6.src) in the mac binding table using the "lookup_nd_ip"
> -         * action and stores the result in REGBIT_LOOKUP_NEIGHBOR_IP_RESULT
> -         * bit.
> -         *
> -         * Table LEARN_NEIGHBOR learns the mac-binding using the action
> -         * - 'put_arp/put_nd'. Learning mac-binding is skipped if
> -         *   REGBIT_LOOKUP_NEIGHBOR_RESULT bit is set or
> -         *   REGBIT_LOOKUP_NEIGHBOR_IP_RESULT is not set.
> -         *
> -         * */
> -
> -        /* Flows for LOOKUP_NEIGHBOR. */
> -        bool learn_from_arp_request = smap_get_bool(&od->nbr->options,
> -            "always_learn_from_arp_request", true);
> -        ds_clear(actions);
> -        ds_put_format(actions, REGBIT_LOOKUP_NEIGHBOR_RESULT
> -                      " = lookup_arp(inport, arp.spa, arp.sha); %snext;",
> -                      learn_from_arp_request ? "" :
> -                      REGBIT_LOOKUP_NEIGHBOR_IP_RESULT" = 1; ");
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_LOOKUP_NEIGHBOR, 100,
> -                      "arp.op == 2", ds_cstr(actions));
> -
> -        ds_clear(actions);
> -        ds_put_format(actions, REGBIT_LOOKUP_NEIGHBOR_RESULT
> -                      " = lookup_nd(inport, nd.target, nd.tll); %snext;",
> -                      learn_from_arp_request ? "" :
> -                      REGBIT_LOOKUP_NEIGHBOR_IP_RESULT" = 1; ");
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_LOOKUP_NEIGHBOR, 100, "nd_na",
> -                      ds_cstr(actions));
> -
> -        ds_clear(actions);
> -        ds_put_format(actions, REGBIT_LOOKUP_NEIGHBOR_RESULT
> -                      " = lookup_nd(inport, ip6.src, nd.sll); %snext;",
> -                      learn_from_arp_request ? "" :
> -                      REGBIT_LOOKUP_NEIGHBOR_IP_RESULT
> -                      " = lookup_nd_ip(inport, ip6.src); ");
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_LOOKUP_NEIGHBOR, 100, "nd_ns",
> -                      ds_cstr(actions));
> -
> -        /* For other packet types, we can skip neighbor learning.
> -         * So set REGBIT_LOOKUP_NEIGHBOR_RESULT to 1. */
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_LOOKUP_NEIGHBOR, 0, "1",
> -                      REGBIT_LOOKUP_NEIGHBOR_RESULT" = 1; next;");
> -
> -        /* Flows for LEARN_NEIGHBOR. */
> -        /* Skip Neighbor learning if not required. */
> -        ds_clear(match);
> -        ds_put_format(match, REGBIT_LOOKUP_NEIGHBOR_RESULT" == 1%s",
> -                      learn_from_arp_request ? "" :
> -                      " || "REGBIT_LOOKUP_NEIGHBOR_IP_RESULT" == 0");
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_LEARN_NEIGHBOR, 100,
> -                      ds_cstr(match), "next;");
> -
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_LEARN_NEIGHBOR, 90,
> -                      "arp", "put_arp(inport, arp.spa, arp.sha); next;");
> -
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_LEARN_NEIGHBOR, 90,
> -                      "nd_na", "put_nd(inport, nd.target, nd.tll); next;");
> -
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_LEARN_NEIGHBOR, 90,
> -                      "nd_ns", "put_nd(inport, ip6.src, nd.sll); next;");
> -    }
> -
> -}
> -
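
To make the lookup/learn split described in the comments of the removed
function above a bit more concrete, here is a tiny standalone sketch of the
decision that the priority-100 "skip" flow and the priority-90
put_arp()/put_nd() flows encode, written as plain C.  should_learn() and
main() are illustrative only; the real logic lives in the lflow priorities.

#include <stdbool.h>
#include <stdio.h>

/* Returns true if table LEARN_NEIGHBOR should learn the mac-binding, given
 * the two result bits set by table LOOKUP_NEIGHBOR and the
 * "always_learn_from_arp_request" option. */
static bool
should_learn(bool lookup_result, bool lookup_ip_result,
             bool always_learn_from_arp_request)
{
    if (lookup_result) {
        /* Binding is already known; skip learning. */
        return false;
    }
    if (!always_learn_from_arp_request && !lookup_ip_result) {
        /* Option disabled and the IP lookup did not ask us to learn. */
        return false;
    }
    return true;
}

int
main(void)
{
    /* Already-known binding is never re-learned. */
    printf("%d\n", should_learn(true, true, true));     /* 0 */
    /* Unknown binding with the default option value: learn. */
    printf("%d\n", should_learn(false, false, true));   /* 1 */
    /* Unknown binding, option off, IP lookup says skip. */
    printf("%d\n", should_learn(false, false, false));  /* 0 */
    return 0;
}
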
> -/* Logical router ingress Table 1: Neighbor lookup lflows
> - * for logical router ports. */
> -static void
> -build_neigh_learning_flows_for_lrouter_port(
> -        struct ovn_port *op, struct hmap *lflows,
> -        struct ds *match, struct ds *actions)
> -{
> -    if (op->nbrp) {
> -
> -        bool learn_from_arp_request = smap_get_bool(&op->od->nbr->options,
> -            "always_learn_from_arp_request", true);
> -
> -        /* Check if we need to learn mac-binding from ARP requests. */
> -        for (int i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
> -            if (!learn_from_arp_request) {
> -                /* ARP request to this address should always get learned,
> -                 * so add a priority-110 flow to set
> -                 * REGBIT_LOOKUP_NEIGHBOR_IP_RESULT to 1. */
> -                ds_clear(match);
> -                ds_put_format(match,
> -                              "inport == %s && arp.spa == %s/%u && "
> -                              "arp.tpa == %s && arp.op == 1",
> -                              op->json_key,
> -                              op->lrp_networks.ipv4_addrs[i].network_s,
> -                              op->lrp_networks.ipv4_addrs[i].plen,
> -                              op->lrp_networks.ipv4_addrs[i].addr_s);
> -                if (op->od->l3dgw_port && op == op->od->l3dgw_port
> -                    && op->od->l3redirect_port) {
> -                    ds_put_format(match, " && is_chassis_resident(%s)",
> -                                  op->od->l3redirect_port->json_key);
> -                }
> -                const char *actions_s = REGBIT_LOOKUP_NEIGHBOR_RESULT
> -                                  " = lookup_arp(inport, arp.spa, arp.sha); "
> -                                  REGBIT_LOOKUP_NEIGHBOR_IP_RESULT" = 1;"
> -                                  " next;";
> -                ovn_lflow_add_with_hint(lflows, op->od,
> -                                        S_ROUTER_IN_LOOKUP_NEIGHBOR, 110,
> -                                        ds_cstr(match), actions_s,
> -                                        &op->nbrp->header_);
> -            }
> -            ds_clear(match);
> -            ds_put_format(match,
> -                          "inport == %s && arp.spa == %s/%u && arp.op == 1",
> -                          op->json_key,
> -                          op->lrp_networks.ipv4_addrs[i].network_s,
> -                          op->lrp_networks.ipv4_addrs[i].plen);
> -            if (op->od->l3dgw_port && op == op->od->l3dgw_port
> -                && op->od->l3redirect_port) {
> -                ds_put_format(match, " && is_chassis_resident(%s)",
> -                              op->od->l3redirect_port->json_key);
> -            }
> -            ds_clear(actions);
> -            ds_put_format(actions, REGBIT_LOOKUP_NEIGHBOR_RESULT
> -                          " = lookup_arp(inport, arp.spa, arp.sha); %snext;",
> -                          learn_from_arp_request ? "" :
> -                          REGBIT_LOOKUP_NEIGHBOR_IP_RESULT
> -                          " = lookup_arp_ip(inport, arp.spa); ");
> -            ovn_lflow_add_with_hint(lflows, op->od,
> -                                    S_ROUTER_IN_LOOKUP_NEIGHBOR, 100,
> -                                    ds_cstr(match), ds_cstr(actions),
> -                                    &op->nbrp->header_);
> -        }
> -    }
> -}
> -
> -/* Logical router ingress table ND_RA_OPTIONS & ND_RA_RESPONSE: IPv6 Router
> - * Adv (RA) options and response. */
> -static void
> -build_ND_RA_flows_for_lrouter_port(
> -        struct ovn_port *op, struct hmap *lflows,
> -        struct ds *match, struct ds *actions)
> +sync_ND_RA_options_for_lrouter_port(const struct ovn_port *op)
>   {
>       if (!op->nbrp || op->nbrp->peer || !op->peer) {
>           return;
> @@ -9770,141 +7880,33 @@ build_ND_RA_flows_for_lrouter_port(
>       }
>       smap_add(&options, "ipv6_prefix",
>                ipv6_prefix ? "true" : "false");
> -    sbrec_port_binding_set_options(op->sb, &options);
> -
> -    smap_destroy(&options);
>   
>       const char *address_mode = smap_get(
>           &op->nbrp->ipv6_ra_configs, "address_mode");
>   
> -    if (!address_mode) {
> -        return;
> -    }
> -    if (strcmp(address_mode, "slaac") &&
> -        strcmp(address_mode, "dhcpv6_stateful") &&
> -        strcmp(address_mode, "dhcpv6_stateless")) {
> -        static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
> -        VLOG_WARN_RL(&rl, "Invalid address mode [%s] defined",
> -                     address_mode);
> -        return;
> -    }
> -
> -    if (smap_get_bool(&op->nbrp->ipv6_ra_configs, "send_periodic",
> -                      false)) {
> -        copy_ra_to_sb(op, address_mode);
> -    }
> -
> -    ds_clear(match);
> -    ds_put_format(match, "inport == %s && ip6.dst == ff02::2 && nd_rs",
> -                          op->json_key);
> -    ds_clear(actions);
> -
> -    const char *mtu_s = smap_get(
> -        &op->nbrp->ipv6_ra_configs, "mtu");
> -
> -    /* As per RFC 2460, 1280 is the minimum IPv6 MTU. */
> -    uint32_t mtu = (mtu_s && atoi(mtu_s) >= 1280) ? atoi(mtu_s) : 0;
> -
> -    ds_put_format(actions, REGBIT_ND_RA_OPTS_RESULT" = put_nd_ra_opts("
> -                  "addr_mode = \"%s\", slla = %s",
> -                  address_mode, op->lrp_networks.ea_s);
> -    if (mtu > 0) {
> -        ds_put_format(actions, ", mtu = %u", mtu);
> -    }
> -
> -    const char *prf = smap_get_def(
> -        &op->nbrp->ipv6_ra_configs, "router_preference", "MEDIUM");
> -    if (strcmp(prf, "MEDIUM")) {
> -        ds_put_format(actions, ", router_preference = \"%s\"", prf);
> -    }
> -
> -    bool add_rs_response_flow = false;
> -
> -    for (size_t i = 0; i < op->lrp_networks.n_ipv6_addrs; i++) {
> -        if (in6_is_lla(&op->lrp_networks.ipv6_addrs[i].network)) {
> -            continue;
> +    if (address_mode) {
> +        if (strcmp(address_mode, "slaac") &&
> +            strcmp(address_mode, "dhcpv6_stateful") &&
> +            strcmp(address_mode, "dhcpv6_stateless")) {
> +            static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
> +            VLOG_WARN_RL(&rl, "Invalid address mode [%s] defined",
> +                        address_mode);
> +            address_mode = NULL;
> +        } else {
> +            smap_add(&options, "ipv6_ra_address_mode", address_mode);
>           }
> -
> -        ds_put_format(actions, ", prefix = %s/%u",
> -                      op->lrp_networks.ipv6_addrs[i].network_s,
> -                      op->lrp_networks.ipv6_addrs[i].plen);
> -
> -        add_rs_response_flow = true;
>       }
>   
> -    if (add_rs_response_flow) {
> -        ds_put_cstr(actions, "); next;");
> -        ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_ND_RA_OPTIONS,
> -                                50, ds_cstr(match), ds_cstr(actions),
> -                                &op->nbrp->header_);
> -        ds_clear(actions);
> -        ds_clear(match);
> -        ds_put_format(match, "inport == %s && ip6.dst == ff02::2 && "
> -                      "nd_ra && "REGBIT_ND_RA_OPTS_RESULT, op->json_key);
> -
> -        char ip6_str[INET6_ADDRSTRLEN + 1];
> -        struct in6_addr lla;
> -        in6_generate_lla(op->lrp_networks.ea, &lla);
> -        memset(ip6_str, 0, sizeof(ip6_str));
> -        ipv6_string_mapped(ip6_str, &lla);
> -        ds_put_format(actions, "eth.dst = eth.src; eth.src = %s; "
> -                      "ip6.dst = ip6.src; ip6.src = %s; "
> -                      "outport = inport; flags.loopback = 1; "
> -                      "output;",
> -                      op->lrp_networks.ea_s, ip6_str);
> -        ovn_lflow_add_with_hint(lflows, op->od,
> -                                S_ROUTER_IN_ND_RA_RESPONSE, 50,
> -                                ds_cstr(match), ds_cstr(actions),
> -                                &op->nbrp->header_);
> -    }
> -}
> +    sbrec_port_binding_set_options(op->sb, &options);
> +    smap_destroy(&options);
>   
> -/* Logical router ingress table ND_RA_OPTIONS & ND_RA_RESPONSE: RS
> - * responder, by default goto next. (priority 0). */
> -static void
> -build_ND_RA_flows_for_lrouter(struct ovn_datapath *od, struct hmap *lflows)
> -{
> -    if (od->nbr) {
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_ND_RA_OPTIONS, 0, "1", "next;");
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_ND_RA_RESPONSE, 0, "1", "next;");
> +    if (!address_mode) {
> +        return;
>       }
> -}
>   
> -/* Logical router ingress table IP_ROUTING : IP Routing.
> - *
> - * A packet that arrives at this table is an IP packet that should be
> - * routed to the address in 'ip[46].dst'.
> - *
> - * For regular routes without ECMP, table IP_ROUTING sets outport to the
> - * correct output port, eth.src to the output port's MAC address, and
> - * REG_NEXT_HOP_IPV4/REG_NEXT_HOP_IPV6 to the next-hop IP address
> - * (leaving 'ip[46].dst', the packet’s final destination, unchanged), and
> - * advances to the next table.
> - *
> - * For ECMP routes, i.e. multiple routes with same policy and prefix, table
> - * IP_ROUTING remembers ECMP group id and selects a member id, and advances
> - * to table IP_ROUTING_ECMP, which sets outport, eth.src and
> - * REG_NEXT_HOP_IPV4/REG_NEXT_HOP_IPV6 for the selected ECMP member.
> - */
> -static void
> -build_ip_routing_flows_for_lrouter_port(
> -        struct ovn_port *op, struct hmap *lflows)
> -{
> -    if (op->nbrp) {
> -
> -        for (int i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
> -            add_route(lflows, op->od, op, op->lrp_networks.ipv4_addrs[i].addr_s,
> -                      op->lrp_networks.ipv4_addrs[i].network_s,
> -                      op->lrp_networks.ipv4_addrs[i].plen, NULL, false,
> -                      &op->nbrp->header_, false);
> -        }
> -
> -        for (int i = 0; i < op->lrp_networks.n_ipv6_addrs; i++) {
> -            add_route(lflows, op->od, op, op->lrp_networks.ipv6_addrs[i].addr_s,
> -                      op->lrp_networks.ipv6_addrs[i].network_s,
> -                      op->lrp_networks.ipv6_addrs[i].plen, NULL, false,
> -                      &op->nbrp->header_, false);
> -        }
> +    if (smap_get_bool(&op->nbrp->ipv6_ra_configs, "send_periodic",
> +                      false)) {
> +        copy_ra_to_sb(op);
>       }
>   }
>   
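
To make the new data flow concrete: the RA parameters now travel through
Port_Binding:options instead of pre-generated logical flows, so whatever
consumes them (ovn-controller, presumably) reads them back from the
IDL-generated smap. A rough sketch of that read-back, with a helper name that
is mine and not from this patch, and assuming the usual "lib/smap.h" and SB
IDL includes:

    /* Hypothetical consumer-side helper (not part of this patch): decide
     * whether a router port wants an IPv6 RA responder, based on the
     * "ipv6_ra_address_mode" key synced above.  The values checked here are,
     * as far as I recall, the ones ovn-northd accepts before copying the
     * option. */
    static bool
    lrp_wants_ra_responder(const struct sbrec_port_binding *pb)
    {
        const char *mode = smap_get(&pb->options, "ipv6_ra_address_mode");

        return mode && (!strcmp(mode, "slaac")
                        || !strcmp(mode, "dhcpv6_stateful")
                        || !strcmp(mode, "dhcpv6_stateless"));
    }
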
> @@ -9914,9 +7916,6 @@ build_static_route_flows_for_lrouter(
>           struct hmap *ports, struct hmap *bfd_connections)
>   {
>       if (od->nbr) {
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_IP_ROUTING_ECMP, 150,
> -                      REG_ECMP_GROUP_ID" == 0", "next;");
> -
>           struct hmap ecmp_groups = HMAP_INITIALIZER(&ecmp_groups);
>           struct hmap unique_routes = HMAP_INITIALIZER(&unique_routes);
>           struct ovs_list parsed_routes = OVS_LIST_INITIALIZER(&parsed_routes);
> @@ -9968,12 +7967,6 @@ build_mcast_lookup_flows_for_lrouter(
>           struct ds *match, struct ds *actions)
>   {
>       if (od->nbr) {
> -
> -        /* Drop IPv6 multicast traffic that shouldn't be forwarded,
> -         * i.e., router solicitation and router advertisement.
> -         */
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_IP_ROUTING, 550,
> -                      "nd_rs || nd_ra", "drop;");
>           if (!od->mcast_info.rtr.relay) {
>               return;
>           }
> @@ -10017,429 +8010,73 @@ build_mcast_lookup_flows_for_lrouter(
>                             "};");
>           } else {
>               ovn_lflow_add(lflows, od, S_ROUTER_IN_IP_ROUTING, 450,
> -                          "ip4.mcast || ip6.mcast", "drop;");
> -        }
> -    }
> -}
> -
> -/* Logical router ingress table POLICY: Policy.
> - *
> - * A packet that arrives at this table is an IP packet that should be
> - * permitted/denied/rerouted to the address in the rule's nexthop.
> - * This table sets outport to the correct out_port,
> - * eth.src to the output port's MAC address,
> - * and REG_NEXT_HOP_IPV4/REG_NEXT_HOP_IPV6 to the next-hop IP address
> - * (leaving 'ip[46].dst', the packet’s final destination, unchanged), and
> - * advances to the next table for ARP/ND resolution. */
> -static void
> -build_ingress_policy_flows_for_lrouter(
> -        struct ovn_datapath *od, struct hmap *lflows,
> -        struct hmap *ports)
> -{
> -    if (od->nbr) {
> -        /* This is a catch-all rule.  It has the lowest priority (0),
> -         * matches everything ("1") and passes through (next). */
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_POLICY, 0, "1",
> -                      REG_ECMP_GROUP_ID" = 0; next;");
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_POLICY_ECMP, 150,
> -                      REG_ECMP_GROUP_ID" == 0", "next;");
> -
> -        /* Convert routing policies to flows. */
> -        uint16_t ecmp_group_id = 1;
> -        for (int i = 0; i < od->nbr->n_policies; i++) {
> -            const struct nbrec_logical_router_policy *rule
> -                = od->nbr->policies[i];
> -            bool is_ecmp_reroute =
> -                (!strcmp(rule->action, "reroute") && rule->n_nexthops > 1);
> -
> -            if (is_ecmp_reroute) {
> -                build_ecmp_routing_policy_flows(lflows, od, ports, rule,
> -                                                ecmp_group_id);
> -                ecmp_group_id++;
> -            } else {
> -                build_routing_policy_flow(lflows, od, ports, rule,
> -                                          &rule->header_);
> -            }
> -        }
> -    }
> -}
> -
> -/* Local router ingress table ARP_RESOLVE: ARP Resolution. */
> -static void
> -build_arp_resolve_flows_for_lrouter(
> -        struct ovn_datapath *od, struct hmap *lflows)
> -{
> -    if (od->nbr) {
> -        /* Multicast packets already have the outport set so just advance to
> -         * next table (priority 500). */
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_ARP_RESOLVE, 500,
> -                      "ip4.mcast || ip6.mcast", "next;");
> -
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_ARP_RESOLVE, 0, "ip4",
> -                      "get_arp(outport, " REG_NEXT_HOP_IPV4 "); next;");
> -
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_ARP_RESOLVE, 0, "ip6",
> -                      "get_nd(outport, " REG_NEXT_HOP_IPV6 "); next;");
> -    }
> -}
> -
> -/* Local router ingress table ARP_RESOLVE: ARP Resolution.
> - *
> - * Any unicast packet that reaches this table is an IP packet whose
> - * next-hop IP address is in REG_NEXT_HOP_IPV4/REG_NEXT_HOP_IPV6
> - * (ip4.dst/ipv6.dst is the final destination).
> - * This table resolves the IP address in
> - * REG_NEXT_HOP_IPV4/REG_NEXT_HOP_IPV6 into an output port in outport and
> - * an Ethernet address in eth.dst.
> - */
> -static void
> -build_arp_resolve_flows_for_lrouter_port(
> -        struct ovn_port *op, struct hmap *lflows,
> -        struct hmap *ports,
> -        struct ds *match, struct ds *actions)
> -{
> -    if (op->nbsp && !lsp_is_enabled(op->nbsp)) {
> -        return;
> -    }
> -
> -    if (op->nbrp) {
> -        /* This is a logical router port. If next-hop IP address in
> -         * REG_NEXT_HOP_IPV4/REG_NEXT_HOP_IPV6 matches IP address of this
> -         * router port, then the packet is intended to eventually be sent
> -         * to this logical port. Set the destination mac address using
> -         * this port's mac address.
> -         *
> -         * The packet is still in peer's logical pipeline. So the match
> -         * should be on peer's outport. */
> -        if (op->peer && op->nbrp->peer) {
> -            if (op->lrp_networks.n_ipv4_addrs) {
> -                ds_clear(match);
> -                ds_put_format(match, "outport == %s && "
> -                              REG_NEXT_HOP_IPV4 "== ",
> -                              op->peer->json_key);
> -                op_put_v4_networks(match, op, false);
> -
> -                ds_clear(actions);
> -                ds_put_format(actions, "eth.dst = %s; next;",
> -                              op->lrp_networks.ea_s);
> -                ovn_lflow_add_with_hint(lflows, op->peer->od,
> -                                        S_ROUTER_IN_ARP_RESOLVE, 100,
> -                                        ds_cstr(match), ds_cstr(actions),
> -                                        &op->nbrp->header_);
> -            }
> -
> -            if (op->lrp_networks.n_ipv6_addrs) {
> -                ds_clear(match);
> -                ds_put_format(match, "outport == %s && "
> -                              REG_NEXT_HOP_IPV6 " == ",
> -                              op->peer->json_key);
> -                op_put_v6_networks(match, op);
> -
> -                ds_clear(actions);
> -                ds_put_format(actions, "eth.dst = %s; next;",
> -                              op->lrp_networks.ea_s);
> -                ovn_lflow_add_with_hint(lflows, op->peer->od,
> -                                        S_ROUTER_IN_ARP_RESOLVE, 100,
> -                                        ds_cstr(match), ds_cstr(actions),
> -                                        &op->nbrp->header_);
> -            }
> -        }
> -
> -        if (!op->derived && op->od->l3redirect_port) {
> -            const char *redirect_type = smap_get(&op->nbrp->options,
> -                                                 "redirect-type");
> -            if (redirect_type && !strcasecmp(redirect_type, "bridged")) {
> -                /* The packet is on a non-gateway chassis and has an
> -                 * unresolved ARP for a network behind a router port
> -                 * attached to the gateway chassis.  Since the redirect
> -                 * type is "bridged", instead of calling "get_arp" on
> -                 * this node, we redirect the packet to the gateway
> -                 * chassis by setting eth.dst to the router port mac. */
> -                ds_clear(match);
> -                ds_put_format(match, "outport == %s && "
> -                              "!is_chassis_resident(%s)", op->json_key,
> -                              op->od->l3redirect_port->json_key);
> -                ds_clear(actions);
> -                ds_put_format(actions, "eth.dst = %s; next;",
> -                              op->lrp_networks.ea_s);
> -
> -                ovn_lflow_add_with_hint(lflows, op->od,
> -                                        S_ROUTER_IN_ARP_RESOLVE, 50,
> -                                        ds_cstr(match), ds_cstr(actions),
> -                                        &op->nbrp->header_);
> -            }
> -        }
> -
> -        /* Drop IP traffic destined to router owned IPs. Part of it is dropped
> -         * in stage "lr_in_ip_input" but traffic that could have been unSNATed
> -         * but didn't match any existing session might still end up here.
> -         *
> -         * Priority 1.
> -         */
> -        build_lrouter_drop_own_dest(op, S_ROUTER_IN_ARP_RESOLVE, 1, true,
> -                                    lflows);
> -    } else if (op->od->n_router_ports && !lsp_is_router(op->nbsp)
> -               && strcmp(op->nbsp->type, "virtual")) {
> -        /* This is a logical switch port that backs a VM or a container.
> -         * Extract its addresses. For each of the address, go through all
> -         * the router ports attached to the switch (to which this port
> -         * connects) and if the address in question is reachable from the
> -         * router port, add an ARP/ND entry in that router's pipeline. */
> -
> -        for (size_t i = 0; i < op->n_lsp_addrs; i++) {
> -            const char *ea_s = op->lsp_addrs[i].ea_s;
> -            for (size_t j = 0; j < op->lsp_addrs[i].n_ipv4_addrs; j++) {
> -                const char *ip_s = op->lsp_addrs[i].ipv4_addrs[j].addr_s;
> -                for (size_t k = 0; k < op->od->n_router_ports; k++) {
> -                    /* Get the Logical_Router_Port that the
> -                     * Logical_Switch_Port is connected to, as
> -                     * 'peer'. */
> -                    const char *peer_name = smap_get(
> -                        &op->od->router_ports[k]->nbsp->options,
> -                        "router-port");
> -                    if (!peer_name) {
> -                        continue;
> -                    }
> -
> -                    struct ovn_port *peer = ovn_port_find(ports, peer_name);
> -                    if (!peer || !peer->nbrp) {
> -                        continue;
> -                    }
> -
> -                    if (!find_lrp_member_ip(peer, ip_s)) {
> -                        continue;
> -                    }
> -
> -                    ds_clear(match);
> -                    ds_put_format(match, "outport == %s && "
> -                                  REG_NEXT_HOP_IPV4 " == %s",
> -                                  peer->json_key, ip_s);
> -
> -                    ds_clear(actions);
> -                    ds_put_format(actions, "eth.dst = %s; next;", ea_s);
> -                    ovn_lflow_add_with_hint(lflows, peer->od,
> -                                            S_ROUTER_IN_ARP_RESOLVE, 100,
> -                                            ds_cstr(match),
> -                                            ds_cstr(actions),
> -                                            &op->nbsp->header_);
> -                }
> -            }
> -
> -            for (size_t j = 0; j < op->lsp_addrs[i].n_ipv6_addrs; j++) {
> -                const char *ip_s = op->lsp_addrs[i].ipv6_addrs[j].addr_s;
> -                for (size_t k = 0; k < op->od->n_router_ports; k++) {
> -                    /* Get the Logical_Router_Port that the
> -                     * Logical_Switch_Port is connected to, as
> -                     * 'peer'. */
> -                    const char *peer_name = smap_get(
> -                        &op->od->router_ports[k]->nbsp->options,
> -                        "router-port");
> -                    if (!peer_name) {
> -                        continue;
> -                    }
> -
> -                    struct ovn_port *peer = ovn_port_find(ports, peer_name);
> -                    if (!peer || !peer->nbrp) {
> -                        continue;
> -                    }
> -
> -                    if (!find_lrp_member_ip(peer, ip_s)) {
> -                        continue;
> -                    }
> -
> -                    ds_clear(match);
> -                    ds_put_format(match, "outport == %s && "
> -                                  REG_NEXT_HOP_IPV6 " == %s",
> -                                  peer->json_key, ip_s);
> -
> -                    ds_clear(actions);
> -                    ds_put_format(actions, "eth.dst = %s; next;", ea_s);
> -                    ovn_lflow_add_with_hint(lflows, peer->od,
> -                                            S_ROUTER_IN_ARP_RESOLVE, 100,
> -                                            ds_cstr(match),
> -                                            ds_cstr(actions),
> -                                            &op->nbsp->header_);
> -                }
> -            }
> -        }
> -    } else if (op->od->n_router_ports && !lsp_is_router(op->nbsp)
> -               && !strcmp(op->nbsp->type, "virtual")) {
> -        /* This is a virtual port. Add ARP replies for the virtual ip with
> -         * the mac of the present active virtual parent.
> -         * If the logical port doesn't have virtual parent set in
> -         * Port_Binding table, then add the flow to set eth.dst to
> -         * 00:00:00:00:00:00 and advance to next table so that ARP is
> -         * resolved by router pipeline using the arp{} action.
> -         * The MAC_Binding entry for the virtual ip might be invalid. */
> -        ovs_be32 ip;
> -
> -        const char *vip = smap_get(&op->nbsp->options,
> -                                   "virtual-ip");
> -        const char *virtual_parents = smap_get(&op->nbsp->options,
> -                                               "virtual-parents");
> -        if (!vip || !virtual_parents ||
> -            !ip_parse(vip, &ip) || !op->sb) {
> -            return;
> -        }
> -
> -        if (!op->sb->virtual_parent || !op->sb->virtual_parent[0] ||
> -            !op->sb->chassis) {
> -            /* The virtual port is not claimed yet. */
> -            for (size_t i = 0; i < op->od->n_router_ports; i++) {
> -                const char *peer_name = smap_get(
> -                    &op->od->router_ports[i]->nbsp->options,
> -                    "router-port");
> -                if (!peer_name) {
> -                    continue;
> -                }
> -
> -                struct ovn_port *peer = ovn_port_find(ports, peer_name);
> -                if (!peer || !peer->nbrp) {
> -                    continue;
> -                }
> -
> -                if (find_lrp_member_ip(peer, vip)) {
> -                    ds_clear(match);
> -                    ds_put_format(match, "outport == %s && "
> -                                  REG_NEXT_HOP_IPV4 " == %s",
> -                                  peer->json_key, vip);
> -
> -                    const char *arp_actions =
> -                                  "eth.dst = 00:00:00:00:00:00; next;";
> -                    ovn_lflow_add_with_hint(lflows, peer->od,
> -                                            S_ROUTER_IN_ARP_RESOLVE, 100,
> -                                            ds_cstr(match),
> -                                            arp_actions,
> -                                            &op->nbsp->header_);
> -                    break;
> -                }
> -            }
> -        } else {
> -            struct ovn_port *vp =
> -                ovn_port_find(ports, op->sb->virtual_parent);
> -            if (!vp || !vp->nbsp) {
> -                return;
> -            }
> -
> -            for (size_t i = 0; i < vp->n_lsp_addrs; i++) {
> -                bool found_vip_network = false;
> -                const char *ea_s = vp->lsp_addrs[i].ea_s;
> -                for (size_t j = 0; j < vp->od->n_router_ports; j++) {
> -                    /* Get the Logical_Router_Port that the
> -                    * Logical_Switch_Port is connected to, as
> -                    * 'peer'. */
> -                    const char *peer_name = smap_get(
> -                        &vp->od->router_ports[j]->nbsp->options,
> -                        "router-port");
> -                    if (!peer_name) {
> -                        continue;
> -                    }
> -
> -                    struct ovn_port *peer =
> -                        ovn_port_find(ports, peer_name);
> -                    if (!peer || !peer->nbrp) {
> -                        continue;
> -                    }
> -
> -                    if (!find_lrp_member_ip(peer, vip)) {
> -                        continue;
> -                    }
> -
> -                    ds_clear(match);
> -                    ds_put_format(match, "outport == %s && "
> -                                  REG_NEXT_HOP_IPV4 " == %s",
> -                                  peer->json_key, vip);
> -
> -                    ds_clear(actions);
> -                    ds_put_format(actions, "eth.dst = %s; next;", ea_s);
> -                    ovn_lflow_add_with_hint(lflows, peer->od,
> -                                            S_ROUTER_IN_ARP_RESOLVE, 100,
> -                                            ds_cstr(match),
> -                                            ds_cstr(actions),
> -                                            &op->nbsp->header_);
> -                    found_vip_network = true;
> -                    break;
> -                }
> -
> -                if (found_vip_network) {
> -                    break;
> -                }
> -            }
> -        }
> -    } else if (lsp_is_router(op->nbsp)) {
> -        /* This is a logical switch port that connects to a router. */
> -
> -        /* The peer of this switch port is the router port for which
> -         * we need to add logical flows such that it can resolve
> -         * ARP entries for all the other router ports connected to
> -         * the switch in question. */
> -
> -        const char *peer_name = smap_get(&op->nbsp->options,
> -                                         "router-port");
> -        if (!peer_name) {
> -            return;
> -        }
> -
> -        struct ovn_port *peer = ovn_port_find(ports, peer_name);
> -        if (!peer || !peer->nbrp) {
> -            return;
> -        }
> -
> -        if (peer->od->nbr &&
> -            smap_get_bool(&peer->od->nbr->options,
> -                          "dynamic_neigh_routers", false)) {
> -            return;
> -        }
> -
> -        for (size_t i = 0; i < op->od->n_router_ports; i++) {
> -            const char *router_port_name = smap_get(
> -                                &op->od->router_ports[i]->nbsp->options,
> -                                "router-port");
> -            struct ovn_port *router_port = ovn_port_find(ports,
> -                                                         router_port_name);
> -            if (!router_port || !router_port->nbrp) {
> -                continue;
> -            }
> -
> -            /* Skip the router port under consideration. */
> -            if (router_port == peer) {
> -               continue;
> -            }
> -
> -            if (router_port->lrp_networks.n_ipv4_addrs) {
> -                ds_clear(match);
> -                ds_put_format(match, "outport == %s && "
> -                              REG_NEXT_HOP_IPV4 " == ",
> -                              peer->json_key);
> -                op_put_v4_networks(match, router_port, false);
> -
> -                ds_clear(actions);
> -                ds_put_format(actions, "eth.dst = %s; next;",
> -                                          router_port->lrp_networks.ea_s);
> -                ovn_lflow_add_with_hint(lflows, peer->od,
> -                                        S_ROUTER_IN_ARP_RESOLVE, 100,
> -                                        ds_cstr(match), ds_cstr(actions),
> -                                        &op->nbsp->header_);
> -            }
> +                          "ip4.mcast || ip6.mcast", "drop;");
> +        }
> +    }
> +}
>   
> -            if (router_port->lrp_networks.n_ipv6_addrs) {
> -                ds_clear(match);
> -                ds_put_format(match, "outport == %s && "
> -                              REG_NEXT_HOP_IPV6 " == ",
> -                              peer->json_key);
> -                op_put_v6_networks(match, router_port);
> +/* Logical router ingress table POLICY: Policy.
> + *
> + * A packet that arrives at this table is an IP packet that should be
> + * permitted/denied/rerouted to the address in the rule's nexthop.
> + * This table sets outport to the correct out_port,
> + * eth.src to the output port's MAC address,
> + * and REG_NEXT_HOP_IPV4/REG_NEXT_HOP_IPV6 to the next-hop IP address
> + * (leaving 'ip[46].dst', the packet’s final destination, unchanged), and
> + * advances to the next table for ARP/ND resolution. */
> +static void
> +build_ingress_policy_flows_for_lrouter(
> +        struct ovn_datapath *od, struct hmap *lflows,
> +        struct hmap *ports)
> +{
> +    if (od->nbr) {
> +        /* Convert routing policies to flows. */
> +        uint16_t ecmp_group_id = 1;
> +        for (int i = 0; i < od->nbr->n_policies; i++) {
> +            const struct nbrec_logical_router_policy *rule
> +                = od->nbr->policies[i];
> +            bool is_ecmp_reroute =
> +                (!strcmp(rule->action, "reroute") && rule->n_nexthops > 1);
>   
> -                ds_clear(actions);
> -                ds_put_format(actions, "eth.dst = %s; next;",
> -                              router_port->lrp_networks.ea_s);
> -                ovn_lflow_add_with_hint(lflows, peer->od,
> -                                        S_ROUTER_IN_ARP_RESOLVE, 100,
> -                                        ds_cstr(match), ds_cstr(actions),
> -                                        &op->nbsp->header_);
> +            if (is_ecmp_reroute) {
> +                build_ecmp_routing_policy_flows(lflows, od, ports, rule,
> +                                                ecmp_group_id);
> +                ecmp_group_id++;
> +            } else {
> +                build_routing_policy_flow(lflows, od, ports, rule,
> +                                          &rule->header_);
>               }
>           }
>       }
> +}
> +
> +/* Local router ingress table ARP_RESOLVE: ARP Resolution.
> + *
> + * Any unicast packet that reaches this table is an IP packet whose
> + * next-hop IP address is in REG_NEXT_HOP_IPV4/REG_NEXT_HOP_IPV6
> + * (ip4.dst/ipv6.dst is the final destination).
> + * This table resolves the IP address in
> + * REG_NEXT_HOP_IPV4/REG_NEXT_HOP_IPV6 into an output port in outport and
> + * an Ethernet address in eth.dst.
> + */
> +static void
> +build_arp_resolve_flows_for_lrouter_port(
> +        struct ovn_port *op, struct hmap *lflows)
> +{
> +    if (op->nbsp && !lsp_is_enabled(op->nbsp)) {
> +        return;
> +    }
>   
> +    if (op->nbrp) {
> +        /* Drop IP traffic destined to router owned IPs. Part of it is dropped
> +         * in stage "lr_in_ip_input" but traffic that could have been unSNATed
> +         * but didn't match any existing session might still end up here.
> +         *
> +         * Priority 1.
> +         */
> +        build_lrouter_drop_own_dest(op, S_ROUTER_IN_ARP_RESOLVE, 1, true,
> +                                    lflows);
> +    }
>   }
>   
>   /* Local router ingress table CHK_PKT_LEN: Check packet length.
> @@ -10462,13 +8099,6 @@ build_check_pkt_len_flows_for_lrouter(
>           struct ds *match, struct ds *actions)
>   {
>       if (od->nbr) {
> -
> -        /* Packets are allowed by default. */
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_CHK_PKT_LEN, 0, "1",
> -                      "next;");
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_LARGER_PKTS, 0, "1",
> -                      "next;");
> -
>           if (od->l3dgw_port && od->l3redirect_port) {
>               int gw_mtu = 0;
>               if (od->l3dgw_port->nbrp) {
> @@ -10594,9 +8224,6 @@ build_gateway_redirect_flows_for_lrouter(
>                                       ds_cstr(match), ds_cstr(actions),
>                                       stage_hint);
>           }
> -
> -        /* Packets are allowed by default. */
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_GW_REDIRECT, 0, "1", "next;");
>       }
>   }
>   
> @@ -10649,291 +8276,7 @@ build_arp_request_flows_for_lrouter(
>                                       ds_cstr(match), ds_cstr(actions),
>                                       &route->header_);
>           }
> -
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_ARP_REQUEST, 100,
> -                      "eth.dst == 00:00:00:00:00:00 && ip4",
> -                      "arp { "
> -                      "eth.dst = ff:ff:ff:ff:ff:ff; "
> -                      "arp.spa = " REG_SRC_IPV4 "; "
> -                      "arp.tpa = " REG_NEXT_HOP_IPV4 "; "
> -                      "arp.op = 1; " /* ARP request */
> -                      "output; "
> -                      "};");
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_ARP_REQUEST, 100,
> -                      "eth.dst == 00:00:00:00:00:00 && ip6",
> -                      "nd_ns { "
> -                      "nd.target = " REG_NEXT_HOP_IPV6 "; "
> -                      "output; "
> -                      "};");
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_ARP_REQUEST, 0, "1", "output;");
> -    }
> -}
> -
> -/* Logical router egress table DELIVERY: Delivery (priority 100-110).
> - *
> - * Priority 100 rules deliver packets to enabled logical ports.
> - * Priority 110 rules match multicast packets and update the source
> - * mac before delivering to enabled logical ports. IP multicast traffic
> - * bypasses S_ROUTER_IN_IP_ROUTING route lookups.
> - */
> -static void
> -build_egress_delivery_flows_for_lrouter_port(
> -        struct ovn_port *op, struct hmap *lflows,
> -        struct ds *match, struct ds *actions)
> -{
> -    if (op->nbrp) {
> -        if (!lrport_is_enabled(op->nbrp)) {
> -            /* Drop packets to disabled logical ports (since logical flow
> -             * tables are default-drop). */
> -            return;
> -        }
> -
> -        if (op->derived) {
> -            /* No egress packets should be processed in the context of
> -             * a chassisredirect port.  The chassisredirect port should
> -             * be replaced by the l3dgw port in the local output
> -             * pipeline stage before egress processing. */
> -            return;
> -        }
> -
> -        /* If multicast relay is enabled then also adjust source mac for IP
> -         * multicast traffic.
> -         */
> -        if (op->od->mcast_info.rtr.relay) {
> -            ds_clear(match);
> -            ds_clear(actions);
> -            ds_put_format(match, "(ip4.mcast || ip6.mcast) && outport == %s",
> -                          op->json_key);
> -            ds_put_format(actions, "eth.src = %s; output;",
> -                          op->lrp_networks.ea_s);
> -            ovn_lflow_add(lflows, op->od, S_ROUTER_OUT_DELIVERY, 110,
> -                          ds_cstr(match), ds_cstr(actions));
> -        }
> -
> -        ds_clear(match);
> -        ds_put_format(match, "outport == %s", op->json_key);
> -        ovn_lflow_add(lflows, op->od, S_ROUTER_OUT_DELIVERY, 100,
> -                      ds_cstr(match), "output;");
> -    }
> -
> -}
> -
> -static void
> -build_misc_local_traffic_drop_flows_for_lrouter(
> -        struct ovn_datapath *od, struct hmap *lflows)
> -{
> -    if (od->nbr) {
> -        /* L3 admission control: drop multicast and broadcast source, localhost
> -         * source or destination, and zero network source or destination
> -         * (priority 100). */
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_IP_INPUT, 100,
> -                      "ip4.src_mcast ||"
> -                      "ip4.src == 255.255.255.255 || "
> -                      "ip4.src == 127.0.0.0/8 || "
> -                      "ip4.dst == 127.0.0.0/8 || "
> -                      "ip4.src == 0.0.0.0/8 || "
> -                      "ip4.dst == 0.0.0.0/8",
> -                      "drop;");
> -
> -        /* Drop ARP packets (priority 85). ARP request packets for router's own
> -         * IPs are handled with priority-90 flows.
> -         * Drop IPv6 ND packets (priority 85). ND NA packets for router's own
> -         * IPs are handled with priority-90 flows.
> -         */
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_IP_INPUT, 85,
> -                      "arp || nd", "drop;");
> -
> -        /* Allow IPv6 multicast traffic that's supposed to reach the
> -         * router pipeline (e.g., router solicitations).
> -         */
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_IP_INPUT, 84, "nd_rs || nd_ra",
> -                      "next;");
> -
> -        /* Drop other reserved multicast. */
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_IP_INPUT, 83,
> -                      "ip6.mcast_rsvd", "drop;");
> -
> -        /* Allow other multicast if relay enabled (priority 82). */
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_IP_INPUT, 82,
> -                      "ip4.mcast || ip6.mcast",
> -                      od->mcast_info.rtr.relay ? "next;" : "drop;");
> -
> -        /* Drop Ethernet local broadcast.  By definition this traffic should
> -         * not be forwarded.*/
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_IP_INPUT, 50,
> -                      "eth.bcast", "drop;");
> -
> -        /* TTL discard */
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_IP_INPUT, 30,
> -                      "ip4 && ip.ttl == {0, 1}", "drop;");
> -
> -        /* Pass other traffic not already handled to the next table for
> -         * routing. */
> -        ovn_lflow_add(lflows, od, S_ROUTER_IN_IP_INPUT, 0, "1", "next;");
> -    }
> -}
> -
> -static void
> -build_dhcpv6_reply_flows_for_lrouter_port(
> -        struct ovn_port *op, struct hmap *lflows,
> -        struct ds *match)
> -{
> -    if (op->nbrp && (!op->derived)) {
> -        for (size_t i = 0; i < op->lrp_networks.n_ipv6_addrs; i++) {
> -            ds_clear(match);
> -            ds_put_format(match, "ip6.dst == %s && udp.src == 547 &&"
> -                          " udp.dst == 546",
> -                          op->lrp_networks.ipv6_addrs[i].addr_s);
> -            ovn_lflow_add(lflows, op->od, S_ROUTER_IN_IP_INPUT, 100,
> -                          ds_cstr(match),
> -                          "reg0 = 0; handle_dhcpv6_reply;");
> -        }
> -    }
> -
> -}
> -
> -static void
> -build_ipv6_input_flows_for_lrouter_port(
> -        struct ovn_port *op, struct hmap *lflows,
> -        struct ds *match, struct ds *actions)
> -{
> -    if (op->nbrp && (!op->derived)) {
> -        /* No ingress packets are accepted on a chassisredirect
> -         * port, so no need to program flows for that port. */
> -        if (op->lrp_networks.n_ipv6_addrs) {
> -            /* ICMPv6 echo reply.  These flows reply to echo requests
> -             * received for the router's IP address. */
> -            ds_clear(match);
> -            ds_put_cstr(match, "ip6.dst == ");
> -            op_put_v6_networks(match, op);
> -            ds_put_cstr(match, " && icmp6.type == 128 && icmp6.code == 0");
> -
> -            const char *lrp_actions =
> -                        "ip6.dst <-> ip6.src; "
> -                        "ip.ttl = 255; "
> -                        "icmp6.type = 129; "
> -                        "flags.loopback = 1; "
> -                        "next; ";
> -            ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_IP_INPUT, 90,
> -                                    ds_cstr(match), lrp_actions,
> -                                    &op->nbrp->header_);
> -        }
> -
> -        /* ND reply.  These flows reply to ND solicitations for the
> -         * router's own IP address. */
> -        for (int i = 0; i < op->lrp_networks.n_ipv6_addrs; i++) {
> -            ds_clear(match);
> -            if (op->od->l3dgw_port && op == op->od->l3dgw_port
> -                && op->od->l3redirect_port) {
> -                /* Traffic with eth.src = l3dgw_port->lrp_networks.ea_s
> -                 * should only be sent from the gateway chassis, so that
> -                 * upstream MAC learning points to the gateway chassis.
> -                 * Also need to avoid generation of multiple ND replies
> -                 * from different chassis. */
> -                ds_put_format(match, "is_chassis_resident(%s)",
> -                              op->od->l3redirect_port->json_key);
> -            }
> -
> -            build_lrouter_nd_flow(op->od, op, "nd_na_router",
> -                                  op->lrp_networks.ipv6_addrs[i].addr_s,
> -                                  op->lrp_networks.ipv6_addrs[i].sn_addr_s,
> -                                  REG_INPORT_ETH_ADDR, match, false, 90,
> -                                  &op->nbrp->header_, lflows);
> -        }
> -
> -        /* UDP/TCP/SCTP port unreachable */
> -        if (!smap_get(&op->od->nbr->options, "chassis")
> -            && !op->od->l3dgw_port) {
> -            for (int i = 0; i < op->lrp_networks.n_ipv6_addrs; i++) {
> -                ds_clear(match);
> -                ds_put_format(match,
> -                              "ip6 && ip6.dst == %s && !ip.later_frag && tcp",
> -                              op->lrp_networks.ipv6_addrs[i].addr_s);
> -                const char *action = "tcp_reset {"
> -                                     "eth.dst <-> eth.src; "
> -                                     "ip6.dst <-> ip6.src; "
> -                                     "next; };";
> -                ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_IP_INPUT,
> -                                        80, ds_cstr(match), action,
> -                                        &op->nbrp->header_);
> -
> -                ds_clear(match);
> -                ds_put_format(match,
> -                              "ip6 && ip6.dst == %s && !ip.later_frag && sctp",
> -                              op->lrp_networks.ipv6_addrs[i].addr_s);
> -                action = "sctp_abort {"
> -                         "eth.dst <-> eth.src; "
> -                         "ip6.dst <-> ip6.src; "
> -                         "next; };";
> -                ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_IP_INPUT,
> -                                        80, ds_cstr(match), action,
> -                                        &op->nbrp->header_);
> -
> -                ds_clear(match);
> -                ds_put_format(match,
> -                              "ip6 && ip6.dst == %s && !ip.later_frag && udp",
> -                              op->lrp_networks.ipv6_addrs[i].addr_s);
> -                action = "icmp6 {"
> -                         "eth.dst <-> eth.src; "
> -                         "ip6.dst <-> ip6.src; "
> -                         "ip.ttl = 255; "
> -                         "icmp6.type = 1; "
> -                         "icmp6.code = 4; "
> -                         "next; };";
> -                ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_IP_INPUT,
> -                                        80, ds_cstr(match), action,
> -                                        &op->nbrp->header_);
> -
> -                ds_clear(match);
> -                ds_put_format(match,
> -                              "ip6 && ip6.dst == %s && !ip.later_frag",
> -                              op->lrp_networks.ipv6_addrs[i].addr_s);
> -                action = "icmp6 {"
> -                         "eth.dst <-> eth.src; "
> -                         "ip6.dst <-> ip6.src; "
> -                         "ip.ttl = 255; "
> -                         "icmp6.type = 1; "
> -                         "icmp6.code = 3; "
> -                         "next; };";
> -                ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_IP_INPUT,
> -                                        70, ds_cstr(match), action,
> -                                        &op->nbrp->header_);
> -            }
> -        }
> -
> -        /* ICMPv6 time exceeded */
> -        for (int i = 0; i < op->lrp_networks.n_ipv6_addrs; i++) {
> -            /* skip link-local address */
> -            if (in6_is_lla(&op->lrp_networks.ipv6_addrs[i].network)) {
> -                continue;
> -            }
> -
> -            ds_clear(match);
> -            ds_clear(actions);
> -
> -            ds_put_format(match,
> -                          "inport == %s && ip6 && "
> -                          "ip6.src == %s/%d && "
> -                          "ip.ttl == {0, 1} && !ip.later_frag",
> -                          op->json_key,
> -                          op->lrp_networks.ipv6_addrs[i].network_s,
> -                          op->lrp_networks.ipv6_addrs[i].plen);
> -            ds_put_format(actions,
> -                          "icmp6 {"
> -                          "eth.dst <-> eth.src; "
> -                          "ip6.dst = ip6.src; "
> -                          "ip6.src = %s; "
> -                          "ip.ttl = 255; "
> -                          "icmp6.type = 3; /* Time exceeded */ "
> -                          "icmp6.code = 0; /* TTL exceeded in transit */ "
> -                          "next; };",
> -                          op->lrp_networks.ipv6_addrs[i].addr_s);
> -            ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_IP_INPUT, 40,
> -                                    ds_cstr(match), ds_cstr(actions),
> -                                    &op->nbrp->header_);
> -        }
>       }
> -
>   }
>   
>   static void
> @@ -10988,113 +8331,11 @@ build_lrouter_arp_nd_for_datapath(struct ovn_datapath *od,
>   static void
>   build_lrouter_ipv4_ip_input(struct ovn_port *op,
>                               struct hmap *lflows,
> -                            struct ds *match, struct ds *actions)
> +                            struct ds *match)
>   {
>       /* No ingress packets are accepted on a chassisredirect
>        * port, so no need to program flows for that port. */
>       if (op->nbrp && (!op->derived)) {
> -        if (op->lrp_networks.n_ipv4_addrs) {
> -            /* L3 admission control: drop packets that originate from an
> -             * IPv4 address owned by the router or a broadcast address
> -             * known to the router (priority 100). */
> -            ds_clear(match);
> -            ds_put_cstr(match, "ip4.src == ");
> -            op_put_v4_networks(match, op, true);
> -            ds_put_cstr(match, " && "REGBIT_EGRESS_LOOPBACK" == 0");
> -            ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_IP_INPUT, 100,
> -                                    ds_cstr(match), "drop;",
> -                                    &op->nbrp->header_);
> -
> -            /* ICMP echo reply.  These flows reply to ICMP echo requests
> -             * received for the router's IP address. Since packets only
> -             * get here as part of the logical router datapath, the inport
> -             * (i.e. the incoming locally attached net) does not matter.
> -             * The ip.ttl also does not matter (RFC1812 section 4.2.2.9) */
> -            ds_clear(match);
> -            ds_put_cstr(match, "ip4.dst == ");
> -            op_put_v4_networks(match, op, false);
> -            ds_put_cstr(match, " && icmp4.type == 8 && icmp4.code == 0");
> -
> -            const char * icmp_actions = "ip4.dst <-> ip4.src; "
> -                          "ip.ttl = 255; "
> -                          "icmp4.type = 0; "
> -                          "flags.loopback = 1; "
> -                          "next; ";
> -            ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_IP_INPUT, 90,
> -                                    ds_cstr(match), icmp_actions,
> -                                    &op->nbrp->header_);
> -        }
> -
> -        /* BFD msg handling */
> -        build_lrouter_bfd_flows(lflows, op);
> -
> -        /* ICMP time exceeded */
> -        for (int i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
> -            ds_clear(match);
> -            ds_clear(actions);
> -
> -            ds_put_format(match,
> -                          "inport == %s && ip4 && "
> -                          "ip.ttl == {0, 1} && !ip.later_frag", op->json_key);
> -            ds_put_format(actions,
> -                          "icmp4 {"
> -                          "eth.dst <-> eth.src; "
> -                          "icmp4.type = 11; /* Time exceeded */ "
> -                          "icmp4.code = 0; /* TTL exceeded in transit */ "
> -                          "ip4.dst = ip4.src; "
> -                          "ip4.src = %s; "
> -                          "ip.ttl = 255; "
> -                          "next; };",
> -                          op->lrp_networks.ipv4_addrs[i].addr_s);
> -            ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_IP_INPUT, 40,
> -                                    ds_cstr(match), ds_cstr(actions),
> -                                    &op->nbrp->header_);
> -        }
> -
> -        /* ARP reply.  These flows reply to ARP requests for the router's own
> -         * IP address. */
> -        for (int i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
> -            ds_clear(match);
> -            ds_put_format(match, "arp.spa == %s/%u",
> -                          op->lrp_networks.ipv4_addrs[i].network_s,
> -                          op->lrp_networks.ipv4_addrs[i].plen);
> -
> -            if (op->od->l3dgw_port && op->od->l3redirect_port && op->peer
> -                && op->peer->od->n_localnet_ports) {
> -                bool add_chassis_resident_check = false;
> -                if (op == op->od->l3dgw_port) {
> -                    /* Traffic with eth.src = l3dgw_port->lrp_networks.ea_s
> -                     * should only be sent from the gateway chassis, so that
> -                     * upstream MAC learning points to the gateway chassis.
> -                     * Also need to avoid generation of multiple ARP responses
> -                     * from different chassis. */
> -                    add_chassis_resident_check = true;
> -                } else {
> -                    /* Check if the option 'reside-on-redirect-chassis'
> -                     * is set to true on the router port. If set to true
> -                     * and if peer's logical switch has a localnet port, it
> -                     * means the router pipeline for the packets from
> -                     * peer's logical switch is run on the chassis
> -                     * hosting the gateway port and it should reply to the
> -                     * ARP requests for the router port IPs.
> -                     */
> -                    add_chassis_resident_check = smap_get_bool(
> -                        &op->nbrp->options,
> -                        "reside-on-redirect-chassis", false);
> -                }
> -
> -                if (add_chassis_resident_check) {
> -                    ds_put_format(match, " && is_chassis_resident(%s)",
> -                                  op->od->l3redirect_port->json_key);
> -                }
> -            }
> -
> -            build_lrouter_arp_flow(op->od, op,
> -                                   op->lrp_networks.ipv4_addrs[i].addr_s,
> -                                   REG_INPORT_ETH_ADDR, match, false, 90,
> -                                   &op->nbrp->header_, lflows);
> -        }
> -
>           const char *ip_address;
>           if (sset_count(&op->od->lb_ips_v4)) {
>               ds_clear(match);
> @@ -11128,66 +8369,6 @@ build_lrouter_ipv4_ip_input(struct ovn_port *op,
>                                     match, false, 90, NULL, lflows);
>           }
>   
> -        if (!smap_get(&op->od->nbr->options, "chassis")
> -            && !op->od->l3dgw_port) {
> -            /* UDP/TCP/SCTP port unreachable. */
> -            for (int i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
> -                ds_clear(match);
> -                ds_put_format(match,
> -                              "ip4 && ip4.dst == %s && !ip.later_frag && udp",
> -                              op->lrp_networks.ipv4_addrs[i].addr_s);
> -                const char *action = "icmp4 {"
> -                                     "eth.dst <-> eth.src; "
> -                                     "ip4.dst <-> ip4.src; "
> -                                     "ip.ttl = 255; "
> -                                     "icmp4.type = 3; "
> -                                     "icmp4.code = 3; "
> -                                     "next; };";
> -                ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_IP_INPUT,
> -                                        80, ds_cstr(match), action,
> -                                        &op->nbrp->header_);
> -
> -                ds_clear(match);
> -                ds_put_format(match,
> -                              "ip4 && ip4.dst == %s && !ip.later_frag && tcp",
> -                              op->lrp_networks.ipv4_addrs[i].addr_s);
> -                action = "tcp_reset {"
> -                         "eth.dst <-> eth.src; "
> -                         "ip4.dst <-> ip4.src; "
> -                         "next; };";
> -                ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_IP_INPUT,
> -                                        80, ds_cstr(match), action,
> -                                        &op->nbrp->header_);
> -
> -                ds_clear(match);
> -                ds_put_format(match,
> -                              "ip4 && ip4.dst == %s && !ip.later_frag && sctp",
> -                              op->lrp_networks.ipv4_addrs[i].addr_s);
> -                action = "sctp_abort {"
> -                         "eth.dst <-> eth.src; "
> -                         "ip4.dst <-> ip4.src; "
> -                         "next; };";
> -                ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_IP_INPUT,
> -                                        80, ds_cstr(match), action,
> -                                        &op->nbrp->header_);
> -
> -                ds_clear(match);
> -                ds_put_format(match,
> -                              "ip4 && ip4.dst == %s && !ip.later_frag",
> -                              op->lrp_networks.ipv4_addrs[i].addr_s);
> -                action = "icmp4 {"
> -                         "eth.dst <-> eth.src; "
> -                         "ip4.dst <-> ip4.src; "
> -                         "ip.ttl = 255; "
> -                         "icmp4.type = 3; "
> -                         "icmp4.code = 2; "
> -                         "next; };";
> -                ovn_lflow_add_with_hint(lflows, op->od, S_ROUTER_IN_IP_INPUT,
> -                                        70, ds_cstr(match), action,
> -                                        &op->nbrp->header_);
> -            }
> -        }
> -
>           /* Drop IP traffic destined to router owned IPs except if the IP is
>            * also a SNAT IP. Those are dropped later, in stage
>            * "lr_in_arp_resolve", if unSNAT was unsuccessful.
> @@ -11687,20 +8868,6 @@ build_lrouter_nat_defrag_and_lb(struct ovn_datapath *od,
>           return;
>       }
>   
> -    /* Packets are allowed by default. */
> -    ovn_lflow_add(lflows, od, S_ROUTER_IN_DEFRAG, 0, "1", "next;");
> -    ovn_lflow_add(lflows, od, S_ROUTER_IN_UNSNAT, 0, "1", "next;");
> -    ovn_lflow_add(lflows, od, S_ROUTER_OUT_SNAT, 0, "1", "next;");
> -    ovn_lflow_add(lflows, od, S_ROUTER_IN_DNAT, 0, "1", "next;");
> -    ovn_lflow_add(lflows, od, S_ROUTER_OUT_UNDNAT, 0, "1", "next;");
> -    ovn_lflow_add(lflows, od, S_ROUTER_OUT_EGR_LOOP, 0, "1", "next;");
> -    ovn_lflow_add(lflows, od, S_ROUTER_IN_ECMP_STATEFUL, 0, "1", "next;");
> -
> -    /* Send the IPv6 NS packets to next table. When ovn-controller
> -     * generates IPv6 NS (for the action - nd_ns{}), the injected
> -     * packet would go through conntrack - which is not required. */
> -    ovn_lflow_add(lflows, od, S_ROUTER_OUT_SNAT, 120, "nd_ns", "next;");
> -
>       /* NAT rules are only valid on Gateway routers and routers with
>        * l3dgw_port (router has a port with gateway chassis
>        * specified). */
> @@ -11904,6 +9071,33 @@ struct lswitch_flow_build_info {
>       struct ds actions;
>   };
>   
> +static void
> +build_lswitch_flows(struct hmap *datapaths, struct hmap *lflows)
> +{
> +    struct ds match = DS_EMPTY_INITIALIZER;
> +    struct ds actions = DS_EMPTY_INITIALIZER;
> +    struct ovn_datapath *od;
> +
> +    /* Ingress table 23: Destination lookup for unknown MACs (priority 0). */
> +    HMAP_FOR_EACH (od, key_node, datapaths) {
> +        if (!od->nbs) {
> +            continue;
> +        }
> +
> +        if (od->has_unknown) {
> +            ovn_lflow_add_unique(lflows, od, S_SWITCH_IN_L2_UNKNOWN, 50,
> +                                 "outport == \"none\"",
> +                                 "outport = \""MC_UNKNOWN "\"; output;");
> +        } else {
> +            ovn_lflow_add(lflows, od, S_SWITCH_IN_L2_UNKNOWN, 50,
> +                          "outport == \"none\"", "drop;");
> +        }
> +    }
> +
> +    ds_destroy(&match);
> +    ds_destroy(&actions);
> +}
> +
>   /* Helper function to combine all lflow generation which is iterated by
>    * datapath.
>    *
> @@ -11920,33 +9114,20 @@ build_lswitch_and_lrouter_iterate_by_od(struct ovn_datapath *od,
>                                            lsi->meter_groups, lsi->lbs);
>   
>       build_fwd_group_lflows(od, lsi->lflows);
> -    build_lswitch_lflows_admission_control(od, lsi->lflows);
> -    build_lswitch_input_port_sec_od(od, lsi->lflows);
> -    build_lswitch_learn_fdb_od(od, lsi->lflows);
> -    build_lswitch_arp_nd_responder_default(od, lsi->lflows);
> -    build_lswitch_dns_lookup_and_response(od, lsi->lflows);
> -    build_lswitch_dhcp_and_dns_defaults(od, lsi->lflows);
>       build_lswitch_destination_lookup_bmcast(od, lsi->lflows, &lsi->actions);
> -    build_lswitch_output_port_sec_od(od, lsi->lflows);
>   
>       /* Build Logical Router Flows. */
> -    build_adm_ctrl_flows_for_lrouter(od, lsi->lflows);
> -    build_neigh_learning_flows_for_lrouter(od, lsi->lflows, &lsi->match,
> -                                           &lsi->actions);
> -    build_ND_RA_flows_for_lrouter(od, lsi->lflows);
>       build_static_route_flows_for_lrouter(od, lsi->lflows, lsi->ports,
>                                            lsi->bfd_connections);
>       build_mcast_lookup_flows_for_lrouter(od, lsi->lflows, &lsi->match,
>                                            &lsi->actions);
>       build_ingress_policy_flows_for_lrouter(od, lsi->lflows, lsi->ports);
> -    build_arp_resolve_flows_for_lrouter(od, lsi->lflows);
>       build_check_pkt_len_flows_for_lrouter(od, lsi->lflows, lsi->ports,
>                                             &lsi->match, &lsi->actions);
>       build_gateway_redirect_flows_for_lrouter(od, lsi->lflows, &lsi->match,
>                                                &lsi->actions);
>       build_arp_request_flows_for_lrouter(od, lsi->lflows, &lsi->match,
>                                           &lsi->actions);
> -    build_misc_local_traffic_drop_flows_for_lrouter(od, lsi->lflows);
>       build_lrouter_arp_nd_for_datapath(od, lsi->lflows);
>       build_lrouter_nat_defrag_and_lb(od, lsi->lflows, lsi->meter_groups,
>                                       lsi->lbs, &lsi->match, &lsi->actions);
> @@ -11958,43 +9139,14 @@ static void
>   build_lswitch_and_lrouter_iterate_by_op(struct ovn_port *op,
>                                           struct lswitch_flow_build_info *lsi)
>   {
> -    /* Build Logical Switch Flows. */
> -    build_lswitch_input_port_sec_op(op, lsi->lflows, &lsi->actions,
> -                                    &lsi->match);
> -    build_lswitch_learn_fdb_op(op, lsi->lflows, &lsi->actions,
> -                               &lsi->match);
> -    build_lswitch_arp_nd_responder_skip_local(op, lsi->lflows,
> -                                              &lsi->match);
> -    build_lswitch_arp_nd_responder_known_ips(op, lsi->lflows,
> -                                             lsi->ports,
> -                                             &lsi->actions,
> -                                             &lsi->match);
>       build_lswitch_dhcp_options_and_response(op, lsi->lflows);
>       build_lswitch_external_port(op, lsi->lflows);
>       build_lswitch_ip_unicast_lookup(op, lsi->lflows, lsi->mcgroups,
>                                       &lsi->actions, &lsi->match);
> -    build_lswitch_output_port_sec_op(op, lsi->lflows,
> -                                     &lsi->actions, &lsi->match);
>   
>       /* Build Logical Router Flows. */
> -    build_adm_ctrl_flows_for_lrouter_port(op, lsi->lflows, &lsi->match,
> -                                          &lsi->actions);
> -    build_neigh_learning_flows_for_lrouter_port(op, lsi->lflows, &lsi->match,
> -                                                &lsi->actions);
> -    build_ip_routing_flows_for_lrouter_port(op, lsi->lflows);
> -    build_ND_RA_flows_for_lrouter_port(op, lsi->lflows, &lsi->match,
> -                                       &lsi->actions);
> -    build_arp_resolve_flows_for_lrouter_port(op, lsi->lflows, lsi->ports,
> -                                             &lsi->match, &lsi->actions);
> -    build_egress_delivery_flows_for_lrouter_port(op, lsi->lflows, &lsi->match,
> -                                                 &lsi->actions);
> -    build_dhcpv6_reply_flows_for_lrouter_port(op, lsi->lflows, &lsi->match);
> -    build_ipv6_input_flows_for_lrouter_port(op, lsi->lflows,
> -                                            &lsi->match, &lsi->actions);
> -    build_lrouter_ipv4_ip_input(op, lsi->lflows,
> -                                &lsi->match, &lsi->actions);
> -    build_lrouter_force_snat_flows_op(op, lsi->lflows, &lsi->match,
> -                                      &lsi->actions);
> +    build_arp_resolve_flows_for_lrouter_port(op, lsi->lflows);
> +    build_lrouter_ipv4_ip_input(op, lsi->lflows, &lsi->match);
>   }
>   
>   struct lflows_thread_pool {
> @@ -12536,6 +9688,55 @@ build_lflows(struct northd_context *ctx, struct hmap *datapaths,
>       }
>   }
>   
> +static void
> +sync_datapath_options(struct hmap *datapaths)
> +{
> +    struct ovn_datapath *od;
> +    HMAP_FOR_EACH (od, key_node, datapaths) {
> +        if (!od->sb) {
> +            continue;
> +        }
> +
> +        struct smap options = SMAP_INITIALIZER(&options);
> +        if (od->nbs) {
> +            if (od->has_stateful_acl) {
> +                smap_add(&options, "has-stateful-acls", "true");
> +            }
> +            if (od->has_acls) {
> +                smap_add(&options, "has-acls", "true");
> +            }
> +            if (od->has_lb_vip) {
> +                smap_add(&options, "has-lb-vips", "true");
> +            }
> +            if (od->has_unknown) {
> +                smap_add(&options, "has-unknown", "true");
> +            }
> +            if (od->vlan_passthru) {
> +                smap_add(&options, "vlan-passthru", "true");
> +            }
> +            if (od->has_dns_records) {
> +                smap_add(&options, "has-dns-records", "true");
> +            }
> +        } else {
> +            if (od->always_learn_from_arp_request) {
> +                smap_add(&options, "always-learn-from-arp-request", "true");
> +            }
> +            if (od->mcast_info.rtr.relay) {
> +                smap_add(&options, "mcast-relay", "true");
> +            }
> +            if (od->l3dgw_port) {
> +                smap_add(&options, "has-l3dgw-port", "true");
> +            }
> +            if (od->lb_force_snat_router_ip) {
> +                smap_add(&options, "lb-force-snat-router-ip", "true");
> +            }
> +        }
> +
> +        sbrec_datapath_binding_set_options(od->sb, &options);
> +        smap_destroy(&options);
> +    }
> +}
> +
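
Same idea one level up: instead of inferring a datapath's properties from
which logical flows happen to exist, the consumer can test these booleans
directly on the Datapath_Binding row (assuming the new "options" column is
exposed through the IDL as usual). The kind of check I'd expect on the other
side looks roughly like this; the helper name and the specific decision are
made up for illustration, only the option keys come from
sync_datapath_options() above:

    /* Hypothetical ovn-controller-side check (not in this patch): only
     * generate conntrack-related switch flows when the synced hints say
     * they are needed. */
    static bool
    ls_needs_conntrack_flows(const struct sbrec_datapath_binding *dp)
    {
        return smap_get_bool(&dp->options, "has-stateful-acls", false)
               || smap_get_bool(&dp->options, "has-lb-vips", false);
    }
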
>   static void
>   sync_address_set(struct northd_context *ctx, const char *name,
>                    const char **addrs, size_t n_addrs,
> @@ -13347,7 +10548,7 @@ ovnnb_db_run(struct northd_context *ctx,
>       build_lflows(ctx, datapaths, ports, &port_groups, &mcast_groups,
>                    &igmp_groups, &meter_groups, &lbs, &bfd_connections);
>       ovn_update_ipv6_prefix(ports);
> -
> +    sync_datapath_options(datapaths);
>       sync_address_sets(ctx);
>       sync_port_groups(ctx, &port_groups);
>       sync_meters(ctx, &meter_groups);
> @@ -14228,6 +11429,8 @@ main(int argc, char *argv[])
>                          &sbrec_datapath_binding_col_tunnel_key);
>       add_column_noalert(ovnsb_idl_loop.idl,
>                          &sbrec_datapath_binding_col_load_balancers);
> +    add_column_noalert(ovnsb_idl_loop.idl,
> +                       &sbrec_datapath_binding_col_options);
>       add_column_noalert(ovnsb_idl_loop.idl,
>                          &sbrec_datapath_binding_col_external_ids);
>   
> @@ -14243,6 +11446,8 @@ main(int argc, char *argv[])
>       add_column_noalert(ovnsb_idl_loop.idl, &sbrec_port_binding_col_type);
>       add_column_noalert(ovnsb_idl_loop.idl, &sbrec_port_binding_col_options);
>       add_column_noalert(ovnsb_idl_loop.idl, &sbrec_port_binding_col_mac);
> +    add_column_noalert(ovnsb_idl_loop.idl,
> +                       &sbrec_port_binding_col_port_security);
>       add_column_noalert(ovnsb_idl_loop.idl,
>                          &sbrec_port_binding_col_nat_addresses);
>       ovsdb_idl_add_column(ovnsb_idl_loop.idl, &sbrec_port_binding_col_chassis);
> @@ -14403,6 +11608,8 @@ main(int argc, char *argv[])
>       add_column_noalert(ovnsb_idl_loop.idl, &sbrec_load_balancer_col_vips);
>       add_column_noalert(ovnsb_idl_loop.idl, &sbrec_load_balancer_col_protocol);
>       add_column_noalert(ovnsb_idl_loop.idl, &sbrec_load_balancer_col_options);
> +    add_column_noalert(ovnsb_idl_loop.idl,
> +                       &sbrec_load_balancer_col_selection_fields);
>       add_column_noalert(ovnsb_idl_loop.idl,
>                          &sbrec_load_balancer_col_external_ids);
>   
> diff --git a/ovn-sb.ovsschema b/ovn-sb.ovsschema
> index bbf60781dd..15c5224b7f 100644
> --- a/ovn-sb.ovsschema
> +++ b/ovn-sb.ovsschema
> @@ -1,7 +1,7 @@
>   {
>       "name": "OVN_Southbound",
> -    "version": "20.18.0",
> -    "cksum": "1816525029 26536",
> +    "version": "21.1.0",
> +    "cksum": "3048733342 27198",
>       "tables": {
>           "SB_Global": {
>               "columns": {
> @@ -171,6 +171,9 @@
>                                                      "refType": "weak"},
>                                               "min": 0,
>                                               "max": "unlimited"}},
> +                "options": {
> +                    "type": {"key": "string", "value": "string",
> +                             "min": 0, "max": "unlimited"}},
>                   "external_ids": {
>                       "type": {"key": "string", "value": "string",
>                                "min": 0, "max": "unlimited"}}},
> @@ -222,6 +225,9 @@
>                   "mac": {"type": {"key": "string",
>                                    "min": 0,
>                                    "max": "unlimited"}},
> +                "port_security": {"type": {"key": "string",
> +                                           "min": 0,
> +                                           "max": "unlimited"}},
>                   "nat_addresses": {"type": {"key": "string",
>                                              "min": 0,
>                                              "max": "unlimited"}},
> @@ -487,6 +493,12 @@
>                                 "value": "string",
>                                 "min": 0,
>                                 "max": "unlimited"}},
> +                "selection_fields": {
> +                    "type": {"key": {"type": "string",
> +                             "enum": ["set",
> +                                ["eth_src", "eth_dst", "ip_src", "ip_dst",
> +                                 "tp_src", "tp_dst"]]},
> +                             "min": 0, "max": "unlimited"}},
>                   "external_ids": {
>                       "type": {"key": "string", "value": "string",
>                                "min": 0, "max": "unlimited"}}},
> diff --git a/ovn-sb.xml b/ovn-sb.xml
> index b29866e88d..70ad3f8f5e 100644
> --- a/ovn-sb.xml
> +++ b/ovn-sb.xml
> @@ -2630,6 +2630,12 @@ tcp.flags = RST;
>         </p>
>       </column>
>   
> +    <column name="options">
> +      <p>
> +        Options.
> +      </p>
> +    </column>
> +
>       <group title="OVN_Northbound Relationship">
>         <p>
>           Each row in <ref table="Datapath_Binding"/> is associated with some
> @@ -2873,6 +2879,12 @@ tcp.flags = RST;
>           </p>
>         </column>
>   
> +      <column name="port_security">
> +        <p>
> +          Port security addresses.
> +        </p>
> +      </column>
> +
>         <column name="type">
>           <p>
>             A type for this logical port.  Logical ports can be used to model other
> @@ -4300,6 +4312,10 @@ tcp.flags = RST;
>         Datapaths to which this load balancer applies to.
>       </column>
>   
> +    <column name="selection_fields">
> +      Selection fields.
> +    </column>
> +
>       <group title="Load_Balancer options">
>       <column name="options" key="hairpin_snat_ip">
>         IP to be used as source IP for packets that have been hair-pinned after
> diff --git a/utilities/ovn-dbctl.c b/utilities/ovn-dbctl.c
> index 9c3e219159..9e9d3b772e 100644
> --- a/utilities/ovn-dbctl.c
> +++ b/utilities/ovn-dbctl.c
> @@ -183,9 +183,14 @@ ovn_dbctl_main(int argc, char *argv[],
>           }
>           daemon_mode = true;
>       }
> +
>       /* Initialize IDL. */
>       idl = the_idl = ovsdb_idl_create_unconnected(dbctl_options->idl_class,
> -                                                 daemon_mode);
> +                                                 true);
> +    if (dbctl_options->pre_idl_run) {
> +        dbctl_options->pre_idl_run(idl);
> +    }
> +
>       ovsdb_idl_set_shuffle_remotes(idl, shuffle_remotes);
>       /* "retry" is true iff in daemon mode. */
>       ovsdb_idl_set_remote(idl, db, daemon_mode);
> diff --git a/utilities/ovn-dbctl.h b/utilities/ovn-dbctl.h
> index a1fbede6b5..6c97590409 100644
> --- a/utilities/ovn-dbctl.h
> +++ b/utilities/ovn-dbctl.h
> @@ -43,7 +43,8 @@ struct ovn_dbctl_options {
>       const struct ctl_command_syntax *commands;
>   
>       void (*usage)(void);
> -
> +    void (*pre_idl_run)(struct ovsdb_idl *);
> +    void (*pre_idl_destroy)(struct ovsdb_idl *);
>       void (*add_base_prerequisites)(struct ovsdb_idl *, enum nbctl_wait_type);
>       void (*pre_execute)(struct ovsdb_idl *, struct ovsdb_idl_txn *,
>                           enum nbctl_wait_type);
> diff --git a/utilities/ovn-sbctl.c b/utilities/ovn-sbctl.c
> index 4146384e74..de28a1129b 100644
> --- a/utilities/ovn-sbctl.c
> +++ b/utilities/ovn-sbctl.c
> @@ -43,8 +43,11 @@
>   #include "openvswitch/shash.h"
>   #include "openvswitch/vconn.h"
>   #include "openvswitch/vlog.h"
> +#include "lib/lb.h"
> +#include "lib/ldata.h"
>   #include "lib/ovn-sb-idl.h"
>   #include "lib/ovn-util.h"
> +#include "lib/lflow.h"
>   #include "memory.h"
>   #include "ovn-dbctl.h"
>   #include "ovsdb-data.h"
> @@ -143,6 +146,25 @@ Other options:\n\
>       stream_usage("database", true, true, true);
>       exit(EXIT_SUCCESS);
>   }
> +
> +static struct ovsdb_idl_index *sbrec_datapath_binding_by_key;
> +static struct ovsdb_idl_index *sbrec_port_binding_by_datapath;
> +static struct ovsdb_idl_index *sbrec_port_binding_by_name;
> +
> +static void
> +sbctl_pre_idl_run(struct ovsdb_idl *sb_idl)
> +{
> +    sbrec_datapath_binding_by_key
> +        = ovsdb_idl_index_create1(sb_idl,
> +                                  &sbrec_datapath_binding_col_tunnel_key);
> +    sbrec_port_binding_by_datapath
> +        = ovsdb_idl_index_create1(sb_idl,
> +                                  &sbrec_port_binding_col_datapath);
> +    sbrec_port_binding_by_name
> +        = ovsdb_idl_index_create1(sb_idl,
> +                                  &sbrec_port_binding_col_logical_port);
> +}
> +
>   

>   /* One should not use ctl_fatal() within commands because it will kill the
>    * daemon if we're in daemon mode.  Use ctl_error() instead and return
> @@ -303,9 +325,13 @@ pre_get_info(struct ctl_context *ctx)
>   
>       ovsdb_idl_add_column(ctx->idl, &sbrec_port_binding_col_logical_port);
>       ovsdb_idl_add_column(ctx->idl, &sbrec_port_binding_col_tunnel_key);
> +    ovsdb_idl_add_column(ctx->idl, &sbrec_port_binding_col_mac);
> +    ovsdb_idl_add_column(ctx->idl, &sbrec_port_binding_col_port_security);
> +    ovsdb_idl_add_column(ctx->idl, &sbrec_port_binding_col_options);
>       ovsdb_idl_add_column(ctx->idl, &sbrec_port_binding_col_chassis);
>       ovsdb_idl_add_column(ctx->idl, &sbrec_port_binding_col_datapath);
>       ovsdb_idl_add_column(ctx->idl, &sbrec_port_binding_col_up);
> +    ovsdb_idl_add_column(ctx->idl, &sbrec_port_binding_col_type);
>   
>       ovsdb_idl_add_column(ctx->idl, &sbrec_logical_flow_col_logical_datapath);
>       ovsdb_idl_add_column(ctx->idl, &sbrec_logical_flow_col_logical_dp_group);
> @@ -319,6 +345,9 @@ pre_get_info(struct ctl_context *ctx)
>       ovsdb_idl_add_column(ctx->idl, &sbrec_logical_dp_group_col_datapaths);
>   
>       ovsdb_idl_add_column(ctx->idl, &sbrec_datapath_binding_col_external_ids);
> +    ovsdb_idl_add_column(ctx->idl, &sbrec_datapath_binding_col_options);
> +    ovsdb_idl_add_column(ctx->idl, &sbrec_datapath_binding_col_tunnel_key);
> +    ovsdb_idl_add_column(ctx->idl, &sbrec_datapath_binding_col_load_balancers);
>   
>       ovsdb_idl_add_column(ctx->idl, &sbrec_ip_multicast_col_datapath);
>       ovsdb_idl_add_column(ctx->idl, &sbrec_ip_multicast_col_seq_no);
> @@ -337,6 +366,8 @@ pre_get_info(struct ctl_context *ctx)
>       ovsdb_idl_add_column(ctx->idl, &sbrec_load_balancer_col_vips);
>       ovsdb_idl_add_column(ctx->idl, &sbrec_load_balancer_col_name);
>       ovsdb_idl_add_column(ctx->idl, &sbrec_load_balancer_col_protocol);
> +    ovsdb_idl_add_column(ctx->idl, &sbrec_load_balancer_col_options);
> +    ovsdb_idl_add_column(ctx->idl, &sbrec_load_balancer_col_selection_fields);
>   }
>   
>   static struct cmd_show_table cmd_show_tables[] = {
> @@ -1039,6 +1070,227 @@ cmd_lflow_list(struct ctl_context *ctx)
>       free(lflows);
>   }
>   
> +static int
> +sbctl_gen_lflow_cmp(const void *a_, const void *b_)
> +{
> +    const struct ovn_ctrl_lflow *const *ap = a_;
> +    const struct ovn_ctrl_lflow *const *bp = b_;
> +
> +    const struct ovn_ctrl_lflow *a = *ap;
> +    const struct ovn_ctrl_lflow *b = *bp;
> +
> +    int a_pipeline = ovn_stage_get_pipeline(a->stage);
> +    int b_pipeline = ovn_stage_get_pipeline(b->stage);
> +    int a_table_id = ovn_stage_get_table(a->stage);
> +    int b_table_id = ovn_stage_get_table(b->stage);
> +    int cmp = (a_pipeline > b_pipeline ? 1
> +               : a_pipeline < b_pipeline ? -1
> +               : a_table_id > b_table_id ? 1
> +               : a_table_id < b_table_id ? -1
> +               : a->priority > b->priority ? -1
> +               : a->priority < b->priority ? 1
> +               : strcmp(a->match, b->match));
> +    return cmp ? cmp : strcmp(a->actions, b->actions);
> +}
> +
> +static void
> +ctrl_lflow_list(struct hmap *lflows, struct hmap *gen_flows, struct hmap *lbs,
> +                const struct sbrec_datapath_binding *dp, bool is_switch,
> +                bool print_uuid, struct ctl_context *ctx)
> +{
> +    size_t n_total_flows = hmap_count(lflows) + hmap_count(gen_flows);
> +
> +    for (size_t j = 0; j < dp->n_load_balancers; j++) {
> +        struct local_load_balancer *local_lb =
> +            local_load_balancer_get(lbs, &dp->load_balancers[j]->header_.uuid);
> +        ovs_assert(local_lb);
> +
> +        n_total_flows += is_switch ? hmap_count(local_lb->active_lswitch_lflows) :
> +                         hmap_count(local_lb->active_lrouter_lflows);
> +    }
> +
> +    struct ovn_ctrl_lflow **ctrl_lflows =
> +        xmalloc(n_total_flows * sizeof *ctrl_lflows);
> +
> +    struct ovn_ctrl_lflow *f;
> +    size_t i = 0;
> +    HMAP_FOR_EACH (f, hmap_node, lflows) {
> +        ctrl_lflows[i++] = f;
> +    }
> +
> +    HMAP_FOR_EACH (f, hmap_node, gen_flows) {
> +        ctrl_lflows[i++] = f;
> +    }
> +
> +    for (size_t j = 0; j < dp->n_load_balancers; j++) {
> +        struct local_load_balancer *local_lb =
> +            local_load_balancer_get(lbs, &dp->load_balancers[j]->header_.uuid);
> +        ovs_assert(local_lb);
> +
> +        struct hmap *lb_flows = is_switch ? local_lb->active_lswitch_lflows :
> +                                local_lb->active_lrouter_lflows;
> +
> +        HMAP_FOR_EACH (f, hmap_node, lb_flows) {
> +            ctrl_lflows[i++] = f;
> +        }
> +    }
> +
> +    ovs_assert(i == n_total_flows);
> +
> +    qsort(ctrl_lflows, n_total_flows, sizeof *ctrl_lflows,
> +          sbctl_gen_lflow_cmp);
> +
> +    const struct ovn_ctrl_lflow *curr, *prev = NULL;
> +    for (i = 0; i < n_total_flows; i++) {
> +        curr = ctrl_lflows[i];
> +
> +        /* Print a header line for this datapath or pipeline, if we haven't
> +         * already done so. */
> +        if (!prev
> +            || ovn_stage_get_pipeline(curr->stage) !=
> +                ovn_stage_get_pipeline(prev->stage)) {
> +            ds_put_cstr(&ctx->output, "Datapath: ");
> +            print_datapath_name(dp, &ctx->output);
> +            ds_put_format(&ctx->output, " ("UUID_FMT")  Pipeline: %s\n",
> +                   UUID_ARGS(&dp->header_.uuid),
> +                   ovn_stage_get_pipeline(curr->stage) == P_IN ?
> +                   "ingress" : "egress");
> +        }
> +
> +        /* Print the flow. */
> +        ds_put_cstr(&ctx->output, "  ");
> +        print_uuid_part(&curr->uuid_, print_uuid, &ctx->output);
> +        ds_put_format(
> +            &ctx->output, "table=%-2"PRId8"(%-19s), priority=%-5"PRId16
> +            ", match=(%s), action=(%s)\n",
> +            ovn_stage_get_table(curr->stage),
> +            ovn_stage_to_str(curr->stage),
> +            curr->priority, curr->match,
> +            curr->actions);
> +        prev = curr;
> +    }
> +
> +    free(ctrl_lflows);
> +}
> +
> +static void
> +build_sbctl_datapaths(struct hmap *datapaths,
> +                      struct hmap *lbs,
> +                      const struct sbrec_datapath_binding *dp,
> +                      struct ctl_context *ctx)
> +{
> +    if (!dp) {
> +        SBREC_DATAPATH_BINDING_FOR_EACH (dp, ctx->idl) {
> +           local_datapath_add(datapaths, dp, sbrec_datapath_binding_by_key,
> +                              sbrec_port_binding_by_datapath,
> +                              sbrec_port_binding_by_name, NULL, NULL);
> +        }
> +    } else {
> +        local_datapath_add(datapaths, dp, sbrec_datapath_binding_by_key,
> +                           sbrec_port_binding_by_datapath,
> +                           sbrec_port_binding_by_name, NULL, NULL);
> +    }
> +
> +    const struct sbrec_port_binding *pb;
> +    SBREC_PORT_BINDING_FOR_EACH (pb, ctx->idl) {
> +        struct local_datapath *ldp =
> +            get_local_datapath(datapaths, pb->datapath->tunnel_key);
> +        if (!ldp) {
> +            continue;
> +        }
> +
> +        local_datapath_add_lport(ldp, pb->logical_port, pb);
> +    }
> +
> +    const struct sbrec_load_balancer *sb_lb;
> +    SBREC_LOAD_BALANCER_FOR_EACH (sb_lb, ctx->idl) {
> +        local_load_balancer_add(lbs, datapaths, sb_lb);
> +    }
> +}
> +
> +static void
> +cmd_ctrl_lflow_list(struct ctl_context *ctx)
> +{
> +    struct hmap gen_lswitch_flows = HMAP_INITIALIZER(&gen_lswitch_flows);
> +    struct hmap gen_lrouter_flows = HMAP_INITIALIZER(&gen_lrouter_flows);
> +
> +    build_lswitch_generic_lflows(&gen_lswitch_flows);
> +    build_lrouter_generic_lflows(&gen_lrouter_flows);
> +
> +    bool print_uuid = shash_find(&ctx->options, "--uuid") != NULL;
> +
> +    const struct sbrec_datapath_binding *dp = NULL;
> +    if (ctx->argc > 1) {
> +        const struct ovsdb_idl_row *row;
> +        char *error = ctl_get_row(ctx, &sbrec_table_datapath_binding,
> +                                  ctx->argv[1], false, &row);
> +        if (error) {
> +            ctl_error(ctx, "%s", error);
> +            free(error);
> +            return;
> +        }
> +
> +        dp = (const struct sbrec_datapath_binding *)row;
> +        if (dp) {
> +            ctx->argc--;
> +            ctx->argv++;
> +        }
> +    }
> +
> +    struct hmap datapaths = HMAP_INITIALIZER(&datapaths);
> +    struct hmap lbs = HMAP_INITIALIZER(&lbs);
> +    build_sbctl_datapaths(&datapaths, &lbs, dp, ctx);
> +
> +    struct local_datapath *ldp;
> +    HMAP_FOR_EACH (ldp, hmap_node, &datapaths) {
> +        ovn_ctrl_lflows_build_dp_lflows(&ldp->ctrl_lflows[0], ldp);
> +
> +        struct shash_node *shash_node;
> +        SHASH_FOR_EACH (shash_node, &ldp->lports) {
> +            local_lport_update_cache(shash_node->data);
> +            ovn_ctrl_build_lport_lflows(&ldp->ctrl_lflows[0],
> +                                        shash_node->data);
> +        }
> +    }
> +
> +    struct local_load_balancer *lb;
> +    HMAP_FOR_EACH (lb, hmap_node, &lbs) {
> +        ovn_ctrl_build_lb_lflows(lb->active_lswitch_lflows,
> +                                 lb->active_lrouter_lflows,
> +                                 lb->ovn_lb);
> +    }
> +
> +    HMAP_FOR_EACH (ldp, hmap_node, &datapaths) {
> +        struct ovn_ctrl_lflow *lflow, *next;
> +        HMAP_FOR_EACH_SAFE (lflow, next, hmap_node, &ldp->ctrl_lflows[0]) {
> +            if (lflow->dp_key && lflow->dp_key != ldp->datapath->tunnel_key) {
> +                hmap_remove(&ldp->ctrl_lflows[0], &lflow->hmap_node);
> +                struct local_datapath *other_ldp =
> +                    get_local_datapath(&datapaths, lflow->dp_key);
> +                if (other_ldp) {
> +                    hmap_insert(&other_ldp->ctrl_lflows[0], &lflow->hmap_node,
> +                                ovn_ctrl_lflow_hash(lflow));
> +                }
> +            }
> +        }
> +    }
> +
> +    HMAP_FOR_EACH (ldp, hmap_node, &datapaths) {
> +        if (dp && ldp->datapath != dp) {
> +            continue;
> +        }
> +
> +        ctrl_lflow_list(&ldp->ctrl_lflows[0], ldp->is_switch ?
> +                        &gen_lswitch_flows : &gen_lrouter_flows, &lbs,
> +                        ldp->datapath, ldp->is_switch, print_uuid, ctx);
> +    }
> +
> +    local_datapaths_destroy(&datapaths);
> +    local_load_balancers_destroy(&lbs);
> +    ovn_ctrl_lflows_destroy(&gen_lswitch_flows);
> +    ovn_ctrl_lflows_destroy(&gen_lrouter_flows);
> +}
> +
>   static void
>   sbctl_ip_mcast_flush_switch(struct ctl_context *ctx,
>                               const struct sbrec_datapath_binding *dp)
> @@ -1387,6 +1639,9 @@ static const struct ctl_command_syntax sbctl_commands[] = {
>        pre_get_info, cmd_lflow_list, NULL,
>        "--uuid,--ovs?,--stats,--vflows?",
>        RO}, /* Friendly alias for lflow-list */
> +    {"ctrl-lflow-list", 0, INT_MAX, "[DATAPATH] [LFLOW...]",
> +     pre_get_info, cmd_ctrl_lflow_list, NULL,
> +     "--uuid,--ovs?,--stats,--vflows?", RO},
>   
>       /* IP multicast commands. */
>       {"ip-multicast-flush", 0, 1, "SWITCH",
> @@ -1425,6 +1680,7 @@ main(int argc, char *argv[])
>           .commands = sbctl_commands,
>   
>           .usage = sbctl_usage,
> +        .pre_idl_run = sbctl_pre_idl_run,
>           .add_base_prerequisites = sbctl_add_base_prerequisites,
>           .pre_execute = sbctl_pre_execute,
>           .post_execute = NULL,
> 
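
One more question on the sbctl side: since ctrl-lflow-list takes an optional
DATAPATH, I take it the intended usage is `ovn-sbctl ctrl-lflow-list` for all
datapaths and e.g. `ovn-sbctl --uuid ctrl-lflow-list DATAPATH` for a single
one; it would be good to spell that out in the ovn-sbctl manpage.  Also, the
command is registered with the `--ovs`, `--stats`, and `--vflows` options
copied from lflow-list, but cmd_ctrl_lflow_list() only seems to look at
`--uuid`.
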



