[ovs-dev] [PATCH ovn 2/3] northd: support for RouteTables in LRs

Numan Siddique numans at ovn.org
Mon Aug 30 22:54:41 UTC 2021


On Mon, Aug 30, 2021 at 5:48 PM Numan Siddique <numans at ovn.org> wrote:
>
> On Mon, Aug 30, 2021 at 5:25 PM Vladislav Odintsov <odivlad at gmail.com> wrote:
> >
> > Hi Numan,
> >
> > thanks for review.
> > While my answers are inline, I’ve got a counter-question:
> >
> > After submitting this patch series, I added support for route tables to the OVN IC daemon.
> > Is it okay if I submit a new version with the requested changes plus support for interconnection as well?
> > I know the soft-freeze is coming up, and as I’m new to the project I don’t know whether this would be accepted now.
>
> In my opinion you submitted the patches before the soft-freeze, so it
> should be fine and can be considered
> for the upcoming release.
>
>
> >
> > Regards,
> > Vladislav Odintsov
> >
> > > On 30 Aug 2021, at 23:44, Numan Siddique <numans at ovn.org> wrote:
> > >
> > > On Mon, Aug 16, 2021 at 5:15 PM Vladislav Odintsov <odivlad at gmail.com <mailto:odivlad at gmail.com>> wrote:
> > >>
> > >> This patch extends the Logical Router's routing functionality.
> > >> A user may now create multiple routing tables within a Logical Router
> > >> and assign them to Logical Router Ports.
> > >>
> > >> Traffic coming from a Logical Router Port with an assigned route_table
> > >> is checked first against global routes, if any (Logical_Router_Static_Routes
> > >> with an empty route_table field), next against directly connected routes,
> > >> and then against Logical_Router_Static_Routes with the same route_table
> > >> value as in the Logical_Router_Port's options:route_table field.
> > >>
> > >> A new Logical Router ingress table #10 is added - IN_IP_ROUTING_PRE.
> > >> In this table, packets coming from LRPs with a configured
> > >> options:route_table field are matched on inport, and a unique
> > >> non-zero value identifying the route table is written to OVS
> > >> register 7.
> > >>
> > >> Then, in table #11, IN_IP_ROUTING, routes that have a non-empty
> > >> `route_table` field are added with an additional match on the reg7
> > >> value associated with the appropriate route_table.
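> > >>
> > >> For illustration (a sketch based on the tests below; the reg7 value
> > >> is allocated by ovn-northd and table numbers may shift), setting
> > >> options:route_table=rtb-1 on lrp1 yields flows along these lines:
> > >>
> > >>   table=10(lr_in_ip_routing_pre), priority=100,
> > >>       match=(inport == "lrp1"), action=(reg7 = 1; next;)
> > >>   table=11(lr_in_ip_routing   ), priority=49,
> > >>       match=(reg7 == 1 && ip4.dst == 192.168.0.0/24), action=(...)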
> > >>
> > >> Signed-off-by: Vladislav Odintsov <odivlad at gmail.com>
> > >
> > > Hi Vladislav,
> > >
> > > Thanks for the patch.  Sorry for the late reviews.
> > >
> > > I've a few comments.  I didn't review the code completely.
> > >
> > > 1.  I think you can merge patch 2 and patch 3.  Without patch 3, the
> > > test cases fail.
> > >
> >
> > Ack.
> >
> > > 2.  The ddlog implementation is missing.  Let us know if you need some
> > > help here.  It would be great
> > >    if you could add the ddlog implementation.
> > >
> >
> > I’ll try to add the implementation myself, but it could take a lot of time, especially if I end up changing the approach. If I get stuck I’ll need somebody’s help.
> >
> > > 3.  I see a few problems in the present approach.  The ID allocated
> > > for each route table entry may not
> > >    be consistent across OVN DB runs.  This may not be a huge
> > > problem.  But in a scaled environment, adding
> > >    or deleting a route table entry could cause the ids to be shuffled,
> > > disrupting the datapath.
> > >
> > >    Instead of using the route table for each static route, I'd
> > > suggest adding a new column
> > >    "inports" to the Logical_Router_Static_Route table.  This column can
> > > be a set of strings, and the CMS can
> > >    add the inports for each static route.
> > >
> > >    Eg.  ovn-nbctl set logical_router_static_route <R1>
> > > inports="lrp-lr1-ls1, lrp-lr1-ls2"
> > >
> > >   And ovn-northd would add a logical flow like
> > >  table=11(lr_in_ip_routing   ), priority=65   , match=(inport ==
> > > {"lrp-lr1-ls1, lrp-lr1-ls2"} && ip4.dst == 1.1.1.1/32) action=(.....)
> > >
> > > The CMS can also create a port group if desired.   With this, we don't
> > > have to generate the route table id and store it in reg7.
> > > We also don't have to introduce a new stage - lr_in_ip_routing_pre.
> > > What do you think?
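> > >
> > > E.g., hypothetically (neither the "inports" column nor port groups
> > > of router ports exist today; this is just a sketch of the idea):
> > >
> > >   ovn-nbctl pg-add pg1 lrp-lr1-ls1 lrp-lr1-ls2
> > >   ovn-nbctl set logical_router_static_route <R1> inports="pg1"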
> > >
> >
> > My first approach was almost the same as you described, with one difference: each route could have only one inport.
> > I decided to rewrite it to support route table names of the user's choice, because that was more convenient for our CMS (non-OpenStack).
> >
> > The ID allocation logic I took from the ECMP groups code; it's the same.
> >
> > However, your approach has its advantages:
> > - it doesn't have inconsistent ids for route tables
> > - it doesn't require an additional OVS register
> > - it doesn't require an additional stage (though that's not a big problem, I guess)
> > - it doesn't disrupt the datapath when the topology changes.
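> >
> > For reference, with the current approach the CMS does e.g. (taken from
> > the tests in this patch):
> >
> >   ovn-nbctl lrp-set-options lrp-lr1-ls1 route_table=rtb-1
> >   ovn-nbctl --route-table=rtb-1 lr-route-add lr1 1.1.1.1/32 192.168.2.22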
> >
> > I’ve got questions here:
> > 1. Why would ids be inconsistent at large scale across different NB runs? Did you mean that ids can change while adding/removing routes and assigning route tables to LRPs?
> Let's say you have route tables - rt1, rt2, rt3 and rt4 - with
> ids assigned by ovn-northd.
> Let's say you delete rt2; then ovn-northd would reassign the ids
> for rt1, rt3 and rt4.  The id assignment totally
> depends on the order in which the route tables are looped through.  In
> this case, assuming rt1 was assigned '1', the ids of rt3 and rt4
> would definitely change.
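> (Concretely, route_table_add() in this patch assigns
> "rtb->id = hmap_count(route_tables);", i.e. an id is just the
> insertion order, which follows whatever order the ports happen to be
> iterated in.)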
>
>
> > 2. Why would the datapath be disrupted? When we add/remove a route table, only the ids would change, so the relevant OFs would be updated. How would this change datapath flows?
>
> Since we are deleting and re-adding the OF flows with the updated register 7
> value,  this may be disruptive if the datapath flows are evicted.  I don't
> know the internals though.
>
>
> > 3. Wouldn’t the datapath be disrupted when changing the inports list?
>
> Actually you're right.  The OF flows will be deleted and re-added, as
> the logical flow will be deleted and re-added with the updated inports
> list.
> The only way to avoid this is if the CMS uses port groups.
>
> It seems to me that using inports is better than allocating ids to the route tables.

On second thought,  I think it's better to have just "inport" as a new column.
The CMS can set either the name of a router port or a port group name.
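
For example (a sketch; the column is only proposed here and does not
exist yet):

  ovn-nbctl set logical_router_static_route <R1> inport="lrp-lr1-ls1"

ovn-northd would then emit the corresponding
match=(inport == "lrp-lr1-ls1" && ...) logical flow.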

Thanks
Numan

>
> Thanks
> Numan
>
> >
> > Thanks.
> >
> > >
> > > Thanks
> > > Numan
> > >
> > >
> > >
> > >> ---
> > >> northd/ovn-northd.8.xml |  63 ++++--
> > >> northd/ovn-northd.c     | 198 +++++++++++++++---
> > >> ovn-nb.ovsschema        |   5 +-
> > >> ovn-nb.xml              |  30 +++
> > >> tests/ovn-northd.at     |  72 ++++++-
> > >> tests/ovn.at            | 438 +++++++++++++++++++++++++++++++++++++++-
> > >> 6 files changed, 751 insertions(+), 55 deletions(-)
> > >>
> > >> diff --git a/northd/ovn-northd.8.xml b/northd/ovn-northd.8.xml
> > >> index 9b69e4e57..2898d9240 100644
> > >> --- a/northd/ovn-northd.8.xml
> > >> +++ b/northd/ovn-northd.8.xml
> > >> @@ -2868,7 +2868,7 @@ icmp6 {
> > >>
> > >>     <p>
> > >>       If ECMP routes with symmetric reply are configured in the
> > >> -      <code>OVN_Northbound</code> database for a gateway router, a priority-300
> > >> +      <code>OVN_Northbound</code> database for a gateway router, a priority-400
> > >>       flow is added for each router port on which symmetric replies are
> > >>       configured. The matching logic for these ports essentially reverses the
> > >>       configured logic of the ECMP route. So for instance, a route with a
> > >> @@ -3213,7 +3213,35 @@ output;
> > >>       </li>
> > >>     </ul>
> > >>
> > >> -    <h3>Ingress Table 10: IP Routing</h3>
> > >> +    <h3>Ingress Table 10: IP Routing Pre</h3>
> > >> +
> > >> +    <p>
> > >> +      If a packet arrives at this table from a Logical Router Port
> > >> +      <var>P</var> that has <code>options:route_table</code> set, a logical
> > >> +      flow with match <code>inport == "<var>P</var>"</code> and priority 100
> > >> +      sets a uniquely generated per-datapath 32-bit value (non-zero) in OVS
> > >> +      register 7.  This register is checked in the next table.
> > >> +    </p>
> > >> +
> > >> +    <p>
> > >> +      This table contains the following logical flows:
> > >> +    </p>
> > >> +
> > >> +    <ul>
> > >> +      <li>
> > >> +        <p>
> > >> +          A priority-100 flow with match <code>inport == "LRP_NAME"</code>
> > >> +          and an action that sets the route table identifier in reg7.
> > >> +        </p>
> > >> +
> > >> +        <p>
> > >> +          A priority-0 logical flow with match <code>1</code> has actions
> > >> +          <code>next;</code>.
> > >> +        </p>
> > >> +      </li>
> > >> +    </ul>
> > >> +
> > >> +    <h3>Ingress Table 11: IP Routing</h3>
> > >>
> > >>     <p>
> > >>       A packet that arrives at this table is an IP packet that should be
> > >> @@ -3284,10 +3312,10 @@ output;
> > >>         <p>
> > >>           IPv4 routing table.  For each route to IPv4 network <var>N</var> with
> > >>           netmask <var>M</var>, on router port <var>P</var> with IP address
> > >> -          <var>A</var> and Ethernet
> > >> -          address <var>E</var>, a logical flow with match <code>ip4.dst ==
> > >> -          <var>N</var>/<var>M</var></code>, whose priority is the number of
> > >> -          1-bits in <var>M</var>, has the following actions:
> > >> +          <var>A</var> and Ethernet address <var>E</var>, a logical flow with
> > >> +          match <code>ip4.dst == <var>N</var>/<var>M</var></code>, whose
> > >> +          priority is 100 + the number of 1-bits in <var>M</var>, has the
> > >> +          following actions:
> > >>         </p>
> > >>
> > >>         <pre>
> > >> @@ -3350,6 +3378,13 @@ next;
> > >>           If the address <var>A</var> is in the link-local scope, the
> > >>           route will be limited to sending on the ingress port.
> > >>         </p>
> > >> +
> > >> +        <p>
> > >> +          For routes with a <code>route_table</code> value set,
> > >> +          <code>reg7 == id</code> is prefixed to the logical flow match.
> > >> +          The priority for routes with a <code>route_table</code> value set
> > >> +          is the number of 1-bits in <var>M</var>.
> > >> +        </p>
> > >>       </li>
> > >>
> > >>       <li>
> > >> @@ -3376,7 +3411,7 @@ select(reg8[16..31], <var>MID1</var>, <var>MID2</var>, ...);
> > >>       </li>
> > >>     </ul>
> > >>
> > >> -    <h3>Ingress Table 11: IP_ROUTING_ECMP</h3>
> > >> +    <h3>Ingress Table 12: IP_ROUTING_ECMP</h3>
> > >>
> > >>     <p>
> > >>       This table implements the second part of IP routing for ECMP routes
> > >> @@ -3428,7 +3463,7 @@ outport = <var>P</var>;
> > >>       </li>
> > >>     </ul>
> > >>
> > >> -    <h3>Ingress Table 12: Router policies</h3>
> > >> +    <h3>Ingress Table 13: Router policies</h3>
> > >>     <p>
> > >>       This table adds flows for the logical router policies configured
> > >>       on the logical router. Please see the
> > >> @@ -3500,7 +3535,7 @@ next;
> > >>       </li>
> > >>     </ul>
> > >>
> > >> -    <h3>Ingress Table 13: ECMP handling for router policies</h3>
> > >> +    <h3>Ingress Table 14: ECMP handling for router policies</h3>
> > >>     <p>
> > >>       This table handles the ECMP for the router policies configured
> > >>       with multiple nexthops.
> > >> @@ -3544,7 +3579,7 @@ outport = <var>P</var>
> > >>       </li>
> > >>     </ul>
> > >>
> > >> -    <h3>Ingress Table 14: ARP/ND Resolution</h3>
> > >> +    <h3>Ingress Table 15: ARP/ND Resolution</h3>
> > >>
> > >>     <p>
> > >>       Any packet that reaches this table is an IP packet whose next-hop
> > >> @@ -3735,7 +3770,7 @@ outport = <var>P</var>
> > >>
> > >>     </ul>
> > >>
> > >> -    <h3>Ingress Table 15: Check packet length</h3>
> > >> +    <h3>Ingress Table 16: Check packet length</h3>
> > >>
> > >>     <p>
> > >>       For distributed logical routers or gateway routers with gateway
> > >> @@ -3765,7 +3800,7 @@ REGBIT_PKT_LARGER = check_pkt_larger(<var>L</var>); next;
> > >>       and advances to the next table.
> > >>     </p>
> > >>
> > >> -    <h3>Ingress Table 16: Handle larger packets</h3>
> > >> +    <h3>Ingress Table 17: Handle larger packets</h3>
> > >>
> > >>     <p>
> > >>       For distributed logical routers or gateway routers with gateway port
> > >> @@ -3828,7 +3863,7 @@ icmp6 {
> > >>       and advances to the next table.
> > >>     </p>
> > >>
> > >> -    <h3>Ingress Table 17: Gateway Redirect</h3>
> > >> +    <h3>Ingress Table 18: Gateway Redirect</h3>
> > >>
> > >>     <p>
> > >>       For distributed logical routers where one or more of the logical router
> > >> @@ -3875,7 +3910,7 @@ icmp6 {
> > >>       </li>
> > >>     </ul>
> > >>
> > >> -    <h3>Ingress Table 18: ARP Request</h3>
> > >> +    <h3>Ingress Table 19: ARP Request</h3>
> > >>
> > >>     <p>
> > >>       In the common case where the Ethernet destination has been resolved, this
> > >> diff --git a/northd/ovn-northd.c b/northd/ovn-northd.c
> > >> index e80876af1..f35699bbb 100644
> > >> --- a/northd/ovn-northd.c
> > >> +++ b/northd/ovn-northd.c
> > >> @@ -197,15 +197,16 @@ enum ovn_stage {
> > >>     PIPELINE_STAGE(ROUTER, IN,  ECMP_STATEFUL,   7, "lr_in_ecmp_stateful") \
> > >>     PIPELINE_STAGE(ROUTER, IN,  ND_RA_OPTIONS,   8, "lr_in_nd_ra_options") \
> > >>     PIPELINE_STAGE(ROUTER, IN,  ND_RA_RESPONSE,  9, "lr_in_nd_ra_response") \
> > >> -    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING,      10, "lr_in_ip_routing")   \
> > >> -    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_ECMP, 11, "lr_in_ip_routing_ecmp") \
> > >> -    PIPELINE_STAGE(ROUTER, IN,  POLICY,          12, "lr_in_policy")       \
> > >> -    PIPELINE_STAGE(ROUTER, IN,  POLICY_ECMP,     13, "lr_in_policy_ecmp")  \
> > >> -    PIPELINE_STAGE(ROUTER, IN,  ARP_RESOLVE,     14, "lr_in_arp_resolve")  \
> > >> -    PIPELINE_STAGE(ROUTER, IN,  CHK_PKT_LEN   ,  15, "lr_in_chk_pkt_len")  \
> > >> -    PIPELINE_STAGE(ROUTER, IN,  LARGER_PKTS,     16, "lr_in_larger_pkts")  \
> > >> -    PIPELINE_STAGE(ROUTER, IN,  GW_REDIRECT,     17, "lr_in_gw_redirect")  \
> > >> -    PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     18, "lr_in_arp_request")  \
> > >> +    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_PRE,  10, "lr_in_ip_routing_pre")  \
> > >> +    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING,      11, "lr_in_ip_routing")      \
> > >> +    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_ECMP, 12, "lr_in_ip_routing_ecmp") \
> > >> +    PIPELINE_STAGE(ROUTER, IN,  POLICY,          13, "lr_in_policy")          \
> > >> +    PIPELINE_STAGE(ROUTER, IN,  POLICY_ECMP,     14, "lr_in_policy_ecmp")     \
> > >> +    PIPELINE_STAGE(ROUTER, IN,  ARP_RESOLVE,     15, "lr_in_arp_resolve")     \
> > >> +    PIPELINE_STAGE(ROUTER, IN,  CHK_PKT_LEN,     16, "lr_in_chk_pkt_len")     \
> > >> +    PIPELINE_STAGE(ROUTER, IN,  LARGER_PKTS,     17, "lr_in_larger_pkts")     \
> > >> +    PIPELINE_STAGE(ROUTER, IN,  GW_REDIRECT,     18, "lr_in_gw_redirect")     \
> > >> +    PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     19, "lr_in_arp_request")     \
> > >>                                                                       \
> > >>     /* Logical router egress stages. */                               \
> > >>     PIPELINE_STAGE(ROUTER, OUT, UNDNAT,      0, "lr_out_undnat")        \
> > >> @@ -273,6 +274,7 @@ enum ovn_stage {
> > >> #define REG_NEXT_HOP_IPV6 "xxreg0"
> > >> #define REG_SRC_IPV4 "reg1"
> > >> #define REG_SRC_IPV6 "xxreg1"
> > >> +#define REG_ROUTE_TABLE_ID "reg7"
> > >>
> > >> /* Register used for setting a label for ACLs in a Logical Switch. */
> > >> #define REG_LABEL "reg3"
> > >> @@ -333,8 +335,9 @@ enum ovn_stage {
> > >>  * | R6  |        UNUSED            | X |                 | G | IN_IP_ROUTING)|
> > >>  * |     |                          | R |                 | 1 |               |
> > >>  * +-----+--------------------------+ E |     UNUSED      |   |               |
> > >> - * | R7  |        UNUSED            | G |                 |   |               |
> > >> - * |     |                          | 3 |                 |   |               |
> > >> + * | R7  |      ROUTE_TABLE_ID      | G |                 |   |               |
> > >> + * |     | (>= IN_IP_ROUTING_PRE && | 3 |                 |   |               |
> > >> + * |     |  <= IN_IP_ROUTING)       |   |                 |   |               |
> > >>  * +-----+--------------------------+---+-----------------+---+---------------+
> > >>  * | R8  |     ECMP_GROUP_ID        |   |                 |
> > >>  * |     |     ECMP_MEMBER_ID       | X |                 |
> > >> @@ -8410,11 +8413,110 @@ cleanup:
> > >>     ds_destroy(&actions);
> > >> }
> > >>
> > >> +struct route_table_node {
> > >> +    struct hmap_node hmap_node; /* In route_tables */
> > >> +    uint32_t id; /* starts from 1 */
> > >> +    const char *name;
> > >> +};
> > >> +
> > >> +static uint32_t
> > >> +get_route_table_hash(const char *route_table_name)
> > >> +{
> > >> +    return hash_string(route_table_name, 0);
> > >> +}
> > >> +
> > >> +static struct route_table_node *
> > >> +route_table_find(struct hmap *route_tables, const char *route_table_name)
> > >> +{
> > >> +    struct route_table_node *rtb;
> > >> +    uint32_t hash = get_route_table_hash(route_table_name);
> > >> +
> > >> +    HMAP_FOR_EACH_WITH_HASH (rtb, hmap_node, hash, route_tables) {
> > >> +        if (!strcmp(rtb->name, route_table_name)) {
> > >> +            return rtb;
> > >> +        }
> > >> +    }
> > >> +    return NULL;
> > >> +}
> > >> +
> > >> +static struct route_table_node *
> > >> +route_table_add(struct hmap *route_tables, const char *route_table_name)
> > >> +{
> > >> +    if (hmap_count(route_tables) == UINT16_MAX) {
> > >> +        static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1);
> > >> +        VLOG_WARN_RL(&rl, "too many route tables for Logical Router.");
> > >> +        return NULL;
> > >> +    }
> > >> +
> > >> +    struct route_table_node *rtb = xzalloc(sizeof *rtb);
> > >> +    uint32_t hash = get_route_table_hash(route_table_name);
> > >> +    hmap_insert(route_tables, &rtb->hmap_node, hash);
> > >> +
> > >> +    rtb->id = hmap_count(route_tables);
> > >> +    rtb->name = route_table_name;
> > >> +
> > >> +    return rtb;
> > >> +}
> > >> +
> > >> +static uint32_t
> > >> +get_route_table_id(struct hmap *route_tables, const char *route_table_name)
> > >> +{
> > >> +    struct route_table_node *rtb;
> > >> +
> > >> +    if (!route_table_name || !strlen(route_table_name)) {
> > >> +        return 0;
> > >> +    }
> > >> +
> > >> +    rtb = route_table_find(route_tables, route_table_name);
> > >> +    if (!rtb) {
> > >> +        rtb = route_table_add(route_tables, route_table_name);
> > >> +        if (!rtb) {
> > >> +            /* Too many route tables; fall back to the global table. */
> > >> +            return 0;
> > >> +        }
> > >> +    }
> > >> +
> > >> +    return rtb->id;
> > >> +}
> > >> +
> > >> +static void
> > >> +route_tables_destroy(struct hmap *route_tables)
> > >> +{
> > >> +    struct route_table_node *rtb, *next;
> > >> +    HMAP_FOR_EACH_SAFE (rtb, next, hmap_node, route_tables) {
> > >> +        hmap_remove(route_tables, &rtb->hmap_node);
> > >> +        free(rtb);
> > >> +    }
> > >> +    hmap_destroy(route_tables);
> > >> +}
> > >> +
> > >> +static void
> > >> +build_route_table_lflow(struct ovn_datapath *od, struct hmap *lflows,
> > >> +                        struct nbrec_logical_router_port *lrp,
> > >> +                        struct hmap *route_tables)
> > >> +{
> > >> +    struct ds match = DS_EMPTY_INITIALIZER;
> > >> +    struct ds actions = DS_EMPTY_INITIALIZER;
> > >> +
> > >> +    const char *route_table_name = smap_get(&lrp->options, "route_table");
> > >> +    uint32_t rtb_id = get_route_table_id(route_tables, route_table_name);
> > >> +    if (!rtb_id) {
> > >> +        return;
> > >> +    }
> > >> +
> > >> +    ds_put_format(&match, "inport == \"%s\"", lrp->name);
> > >> +    ds_put_format(&actions, "%s = %u; next;",
> > >> +                  REG_ROUTE_TABLE_ID, rtb_id);
> > >> +
> > >> +    ovn_lflow_add(lflows, od, S_ROUTER_IN_IP_ROUTING_PRE, 100,
> > >> +                  ds_cstr(&match), ds_cstr(&actions));
> > >> +
> > >> +    ds_destroy(&match);
> > >> +    ds_destroy(&actions);
> > >> +}
> > >> +
> > >> struct parsed_route {
> > >>     struct ovs_list list_node;
> > >>     struct in6_addr prefix;
> > >>     unsigned int plen;
> > >>     bool is_src_route;
> > >> +    uint32_t route_table_id;
> > >>     uint32_t hash;
> > >>     const struct nbrec_logical_router_static_route *route;
> > >>     bool ecmp_symmetric_reply;
> > >> @@ -8439,7 +8541,7 @@ find_static_route_outport(struct ovn_datapath *od, struct hmap *ports,
> > >>  * Otherwise return NULL. */
> > >> static struct parsed_route *
> > >> parsed_routes_add(struct ovn_datapath *od, struct hmap *ports,
> > >> -                  struct ovs_list *routes,
> > >> +                  struct ovs_list *routes,  struct hmap *route_tables,
> > >>                   const struct nbrec_logical_router_static_route *route,
> > >>                   struct hmap *bfd_connections)
> > >> {
> > >> @@ -8520,6 +8622,7 @@ parsed_routes_add(struct ovn_datapath *od, struct hmap *ports,
> > >>     struct parsed_route *pr = xzalloc(sizeof *pr);
> > >>     pr->prefix = prefix;
> > >>     pr->plen = plen;
> > >> +    pr->route_table_id = get_route_table_id(route_tables, route->route_table);
> > >>     pr->is_src_route = (route->policy && !strcmp(route->policy,
> > >>                                                  "src-ip"));
> > >>     pr->hash = route_hash(pr);
> > >> @@ -8553,6 +8656,7 @@ struct ecmp_groups_node {
> > >>     struct in6_addr prefix;
> > >>     unsigned int plen;
> > >>     bool is_src_route;
> > >> +    uint32_t route_table_id;
> > >>     uint16_t route_count;
> > >>     struct ovs_list route_list; /* Contains ecmp_route_list_node */
> > >> };
> > >> @@ -8561,7 +8665,7 @@ static void
> > >> ecmp_groups_add_route(struct ecmp_groups_node *group,
> > >>                       const struct parsed_route *route)
> > >> {
> > >> -   if (group->route_count == UINT16_MAX) {
> > >> +    if (group->route_count == UINT16_MAX) {
> > >>         static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1);
> > >>         VLOG_WARN_RL(&rl, "too many routes in a single ecmp group.");
> > >>         return;
> > >> @@ -8590,6 +8694,7 @@ ecmp_groups_add(struct hmap *ecmp_groups,
> > >>     eg->prefix = route->prefix;
> > >>     eg->plen = route->plen;
> > >>     eg->is_src_route = route->is_src_route;
> > >> +    eg->route_table_id = route->route_table_id;
> > >>     ovs_list_init(&eg->route_list);
> > >>     ecmp_groups_add_route(eg, route);
> > >>
> > >> @@ -8603,7 +8708,8 @@ ecmp_groups_find(struct hmap *ecmp_groups, struct parsed_route *route)
> > >>     HMAP_FOR_EACH_WITH_HASH (eg, hmap_node, route->hash, ecmp_groups) {
> > >>         if (ipv6_addr_equals(&eg->prefix, &route->prefix) &&
> > >>             eg->plen == route->plen &&
> > >> -            eg->is_src_route == route->is_src_route) {
> > >> +            eg->is_src_route == route->is_src_route &&
> > >> +            eg->route_table_id == route->route_table_id) {
> > >>             return eg;
> > >>         }
> > >>     }
> > >> @@ -8650,7 +8756,8 @@ unique_routes_remove(struct hmap *unique_routes,
> > >>     HMAP_FOR_EACH_WITH_HASH (ur, hmap_node, route->hash, unique_routes) {
> > >>         if (ipv6_addr_equals(&route->prefix, &ur->route->prefix) &&
> > >>             route->plen == ur->route->plen &&
> > >> -            route->is_src_route == ur->route->is_src_route) {
> > >> +            route->is_src_route == ur->route->is_src_route &&
> > >> +            route->route_table_id == ur->route->route_table_id) {
> > >>             hmap_remove(unique_routes, &ur->hmap_node);
> > >>             const struct parsed_route *existed_route = ur->route;
> > >>             free(ur);
> > >> @@ -8688,9 +8795,9 @@ build_route_prefix_s(const struct in6_addr *prefix, unsigned int plen)
> > >> }
> > >>
> > >> static void
> > >> -build_route_match(const struct ovn_port *op_inport, const char *network_s,
> > >> -                  int plen, bool is_src_route, bool is_ipv4, struct ds *match,
> > >> -                  uint16_t *priority)
> > >> +build_route_match(const struct ovn_port *op_inport, const uint32_t rtb_id,
> > >> +                  const char *network_s, int plen, bool is_src_route,
> > >> +                  bool is_ipv4, struct ds *match, uint16_t *priority)
> > >> {
> > >>     const char *dir;
> > >>     /* The priority here is calculated to implement longest-prefix-match
> > >> @@ -8706,6 +8813,16 @@ build_route_match(const struct ovn_port *op_inport, const char *network_s,
> > >>     if (op_inport) {
> > >>         ds_put_format(match, "inport == %s && ", op_inport->json_key);
> > >>     }
> > >> +    if (rtb_id) {
> > >> +        ds_put_format(match, "%s == %u && ", REG_ROUTE_TABLE_ID, rtb_id);
> > >> +    } else {
> > >> +        /* Routes from route-table-assigned LRPs should have lower priority
> > >> +         * in order not to affect directly connected global routes.
> > >> +         * So, increase the priority of non-route-table routes by 100. */
> > >> +        *priority += 100;
> > >> +    }
> > >>     ds_put_format(match, "ip%s.%s == %s/%d", is_ipv4 ? "4" : "6", dir,
> > >>                   network_s, plen);
> > >> }
> > >> @@ -8840,7 +8957,7 @@ add_ecmp_symmetric_reply_flows(struct hmap *lflows,
> > >>                   out_port->lrp_networks.ea_s,
> > >>                   IN6_IS_ADDR_V4MAPPED(&route->prefix) ? "" : "xx",
> > >>                   port_ip, out_port->json_key);
> > >> -    ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_IP_ROUTING, 300,
> > >> +    ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_IP_ROUTING, 400,
> > >>                            ds_cstr(&match), ds_cstr(&actions),
> > >>                            &st_route->header_);
> > >>
> > >> @@ -8870,8 +8987,8 @@ build_ecmp_route_flow(struct hmap *lflows, struct ovn_datapath *od,
> > >>     struct ds route_match = DS_EMPTY_INITIALIZER;
> > >>
> > >>     char *prefix_s = build_route_prefix_s(&eg->prefix, eg->plen);
> > >> -    build_route_match(NULL, prefix_s, eg->plen, eg->is_src_route, is_ipv4,
> > >> -                      &route_match, &priority);
> > >> +    build_route_match(NULL, eg->route_table_id, prefix_s, eg->plen,
> > >> +                      eg->is_src_route, is_ipv4, &route_match, &priority);
> > >>     free(prefix_s);
> > >>
> > >>     struct ds actions = DS_EMPTY_INITIALIZER;
> > >> @@ -8946,8 +9063,8 @@ static void
> > >> add_route(struct hmap *lflows, struct ovn_datapath *od,
> > >>           const struct ovn_port *op, const char *lrp_addr_s,
> > >>           const char *network_s, int plen, const char *gateway,
> > >> -          bool is_src_route, const struct ovsdb_idl_row *stage_hint,
> > >> -          bool is_discard_route)
> > >> +          bool is_src_route, const uint32_t rtb_id,
> > >> +          const struct ovsdb_idl_row *stage_hint, bool is_discard_route)
> > >> {
> > >>     bool is_ipv4 = strchr(network_s, '.') ? true : false;
> > >>     struct ds match = DS_EMPTY_INITIALIZER;
> > >> @@ -8962,8 +9079,8 @@ add_route(struct hmap *lflows, struct ovn_datapath *od,
> > >>             op_inport = op;
> > >>         }
> > >>     }
> > >> -    build_route_match(op_inport, network_s, plen, is_src_route, is_ipv4,
> > >> -                      &match, &priority);
> > >> +    build_route_match(op_inport, rtb_id, network_s, plen, is_src_route,
> > >> +                      is_ipv4, &match, &priority);
> > >>
> > >>     struct ds common_actions = DS_EMPTY_INITIALIZER;
> > >>     struct ds actions = DS_EMPTY_INITIALIZER;
> > >> @@ -9026,7 +9143,8 @@ build_static_route_flow(struct hmap *lflows, struct ovn_datapath *od,
> > >>     char *prefix_s = build_route_prefix_s(&route_->prefix, route_->plen);
> > >>     add_route(lflows, route_->is_discard_route ? od : out_port->od, out_port,
> > >>               lrp_addr_s, prefix_s, route_->plen, route->nexthop,
> > >> -              route_->is_src_route, &route->header_, route_->is_discard_route);
> > >> +              route_->is_src_route, route_->route_table_id, &route->header_,
> > >> +              route_->is_discard_route);
> > >>
> > >>     free(prefix_s);
> > >> }
> > >> @@ -10397,6 +10515,17 @@ build_ND_RA_flows_for_lrouter(struct ovn_datapath *od, struct hmap *lflows)
> > >>     }
> > >> }
> > >>
> > >> +/* Logical router ingress table IP_ROUTING_PRE:
> > >> + * by default goto next. (priority 0). */
> > >> +static void
> > >> +build_ip_routing_pre_flows_for_lrouter(struct ovn_datapath *od,
> > >> +                                       struct hmap *lflows)
> > >> +{
> > >> +    if (od->nbr) {
> > >> +        ovn_lflow_add(lflows, od, S_ROUTER_IN_IP_ROUTING_PRE, 0, "1", "next;");
> > >> +    }
> > >> +}
> > >> +
> > >> /* Logical router ingress table IP_ROUTING : IP Routing.
> > >>  *
> > >>  * A packet that arrives at this table is an IP packet that should be
> > >> @@ -10422,14 +10551,14 @@ build_ip_routing_flows_for_lrouter_port(
> > >>         for (int i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
> > >>             add_route(lflows, op->od, op, op->lrp_networks.ipv4_addrs[i].addr_s,
> > >>                       op->lrp_networks.ipv4_addrs[i].network_s,
> > >> -                      op->lrp_networks.ipv4_addrs[i].plen, NULL, false,
> > >> +                      op->lrp_networks.ipv4_addrs[i].plen, NULL, false, 0,
> > >>                       &op->nbrp->header_, false);
> > >>         }
> > >>
> > >>         for (int i = 0; i < op->lrp_networks.n_ipv6_addrs; i++) {
> > >>             add_route(lflows, op->od, op, op->lrp_networks.ipv6_addrs[i].addr_s,
> > >>                       op->lrp_networks.ipv6_addrs[i].network_s,
> > >> -                      op->lrp_networks.ipv6_addrs[i].plen, NULL, false,
> > >> +                      op->lrp_networks.ipv6_addrs[i].plen, NULL, false, 0,
> > >>                       &op->nbrp->header_, false);
> > >>         }
> > >>     } else if (lsp_is_router(op->nbsp)) {
> > >> @@ -10452,7 +10581,7 @@ build_ip_routing_flows_for_lrouter_port(
> > >>                     add_route(lflows, peer->od, peer,
> > >>                               peer->lrp_networks.ipv4_addrs[0].addr_s,
> > >>                               laddrs->ipv4_addrs[k].network_s,
> > >> -                              laddrs->ipv4_addrs[k].plen, NULL, false,
> > >> +                              laddrs->ipv4_addrs[k].plen, NULL, false, 0,
> > >>                               &peer->nbrp->header_, false);
> > >>                 }
> > >>             }
> > >> @@ -10472,10 +10601,17 @@ build_static_route_flows_for_lrouter(
> > >>         struct hmap ecmp_groups = HMAP_INITIALIZER(&ecmp_groups);
> > >>         struct hmap unique_routes = HMAP_INITIALIZER(&unique_routes);
> > >>         struct ovs_list parsed_routes = OVS_LIST_INITIALIZER(&parsed_routes);
> > >> +        struct hmap route_tables = HMAP_INITIALIZER(&route_tables);
> > >>         struct ecmp_groups_node *group;
> > >> +
> > >> +        for (int i = 0; i < od->nbr->n_ports; i++) {
> > >> +            build_route_table_lflow(od, lflows, od->nbr->ports[i],
> > >> +                                    &route_tables);
> > >> +        }
> > >> +
> > >>         for (int i = 0; i < od->nbr->n_static_routes; i++) {
> > >>             struct parsed_route *route =
> > >> -                parsed_routes_add(od, ports, &parsed_routes,
> > >> +                parsed_routes_add(od, ports, &parsed_routes, &route_tables,
> > >>                                   od->nbr->static_routes[i], bfd_connections);
> > >>             if (!route) {
> > >>                 continue;
> > >> @@ -10506,6 +10642,7 @@ build_static_route_flows_for_lrouter(
> > >>             build_static_route_flow(lflows, od, ports, ur->route);
> > >>         }
> > >>         ecmp_groups_destroy(&ecmp_groups);
> > >> +        route_tables_destroy(&route_tables);
> > >>         unique_routes_destroy(&unique_routes);
> > >>         parsed_routes_destroy(&parsed_routes);
> > >>     }
> > >> @@ -12604,6 +12741,7 @@ build_lswitch_and_lrouter_iterate_by_od(struct ovn_datapath *od,
> > >>     build_neigh_learning_flows_for_lrouter(od, lsi->lflows, &lsi->match,
> > >>                                            &lsi->actions, lsi->meter_groups);
> > >>     build_ND_RA_flows_for_lrouter(od, lsi->lflows);
> > >> +    build_ip_routing_pre_flows_for_lrouter(od, lsi->lflows);
> > >>     build_static_route_flows_for_lrouter(od, lsi->lflows, lsi->ports,
> > >>                                          lsi->bfd_connections);
> > >>     build_mcast_lookup_flows_for_lrouter(od, lsi->lflows, &lsi->match,
> > >> diff --git a/ovn-nb.ovsschema b/ovn-nb.ovsschema
> > >> index 2ac8ef3ea..a0a171e19 100644
> > >> --- a/ovn-nb.ovsschema
> > >> +++ b/ovn-nb.ovsschema
> > >> @@ -1,7 +1,7 @@
> > >> {
> > >>     "name": "OVN_Northbound",
> > >> -    "version": "5.32.1",
> > >> -    "cksum": "2805328215 29734",
> > >> +    "version": "5.33.1",
> > >> +    "cksum": "3874993350 29785",
> > >>     "tables": {
> > >>         "NB_Global": {
> > >>             "columns": {
> > >> @@ -387,6 +387,7 @@
> > >>             "isRoot": false},
> > >>         "Logical_Router_Static_Route": {
> > >>             "columns": {
> > >> +                "route_table": {"type": "string"},
> > >>                 "ip_prefix": {"type": "string"},
> > >>                 "policy": {"type": {"key": {"type": "string",
> > >>                                             "enum": ["set", ["src-ip",
> > >> diff --git a/ovn-nb.xml b/ovn-nb.xml
> > >> index c56ec62f6..bab80c39c 100644
> > >> --- a/ovn-nb.xml
> > >> +++ b/ovn-nb.xml
> > >> @@ -2758,6 +2758,14 @@
> > >>           prefix according to RFC3663
> > >>         </p>
> > >>       </column>
> > >> +
> > >> +      <column name="options" key="route_table">
> > >> +        Designates lookup Logical_Router_Static_Routes with specified
> > >> +        <code>route_table</code> value. Routes to directly connected networks
> > >> +        from same Logical Router and routes without <code>route_table</code>
> > >> +        option set have higher priority than routes with
> > >> +        <code>route_table</code> option set.
> > >> +      </column>
> > >>     </group>
> > >>
> > >>     <group title="Attachment">
> > >> @@ -2877,6 +2885,28 @@
> > >>       </p>
> > >>     </column>
> > >>
> > >> +    <column name="route_table">
> > >> +      <p>
> > >> +        Any string to place route to separate routing table. If Logical Router
> > >> +        Port has configured value in <ref table="Logical_Router_Port"
> > >> +        column="options" key="route_table"/> other than empty string, OVN
> > >> +        performs route lookup for all packets entering Logical Router ingress
> > >> +        pipeline from this port in the following manner:
> > >> +      </p>
> > >> +
> > >> +      <ul>
> > >> +        <li>
> > >> +          1. First lookup among "global" routes: routes without
> > >> +          <code>route_table</code> value set and routes to directly connected
> > >> +          networks.
> > >> +        </li>
> > >> +        <li>
> > >> +          2. Next lookup among routes with same <code>route_table</code> value
> > >> +          as specified in LRP's options:route_table field.
> > >> +        </li>
> > >> +      </ul>
> > >> +    </column>
> > >> +
> > >>     <column name="external_ids" key="ic-learned-route">
> > >>       <code>ovn-ic</code> populates this key if the route is learned from the
> > >>       global <ref db="OVN_IC_Southbound"/> database.  In this case the value
> > >> diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
> > >> index 5f41112bc..4ee2d18ee 100644
> > >> --- a/tests/ovn-northd.at
> > >> +++ b/tests/ovn-northd.at
> > >> @@ -5095,7 +5095,7 @@ check ovn-nbctl --wait=sb --ecmp-symmetric-reply lr-route-add lr0 1.0.0.1 192.16
> > >>
> > >> ovn-sbctl dump-flows lr0 > lr0flows
> > >> AT_CHECK([grep -e "lr_in_ip_routing.*select" lr0flows | sed 's/table=../table=??/' | sort], [0], [dnl
> > >> -  table=??(lr_in_ip_routing   ), priority=65   , match=(ip4.dst == 1.0.0.1/32), action=(ip.ttl--; flags.loopback = 1; reg8[[0..15]] = 1; reg8[[16..31]] = select(1, 2);)
> > >> +  table=??(lr_in_ip_routing   ), priority=165  , match=(ip4.dst == 1.0.0.1/32), action=(ip.ttl--; flags.loopback = 1; reg8[[0..15]] = 1; reg8[[16..31]] = select(1, 2);)
> > >> ])
> > >> AT_CHECK([grep -e "lr_in_ip_routing_ecmp" lr0flows | sed 's/192\.168\.0\..0/192.168.0.??/' | sed 's/table=../table=??/' | sort], [0], [dnl
> > >>   table=??(lr_in_ip_routing_ecmp), priority=100  , match=(reg8[[0..15]] == 1 && reg8[[16..31]] == 1), action=(reg0 = 192.168.0.??; reg1 = 192.168.0.1; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; next;)
> > >> @@ -5108,7 +5108,7 @@ check ovn-nbctl --wait=sb --ecmp-symmetric-reply lr-route-add lr0 1.0.0.1 192.16
> > >>
> > >> ovn-sbctl dump-flows lr0 > lr0flows
> > >> AT_CHECK([grep -e "lr_in_ip_routing.*select" lr0flows | sed 's/table=../table=??/' | sort], [0], [dnl
> > >> -  table=??(lr_in_ip_routing   ), priority=65   , match=(ip4.dst == 1.0.0.1/32), action=(ip.ttl--; flags.loopback = 1; reg8[[0..15]] = 1; reg8[[16..31]] = select(1, 2);)
> > >> +  table=??(lr_in_ip_routing   ), priority=165  , match=(ip4.dst == 1.0.0.1/32), action=(ip.ttl--; flags.loopback = 1; reg8[[0..15]] = 1; reg8[[16..31]] = select(1, 2);)
> > >> ])
> > >> AT_CHECK([grep -e "lr_in_ip_routing_ecmp" lr0flows | sed 's/192\.168\.0\..0/192.168.0.??/' | sed 's/table=../table=??/' | sort], [0], [dnl
> > >>   table=??(lr_in_ip_routing_ecmp), priority=100  , match=(reg8[[0..15]] == 1 && reg8[[16..31]] == 1), action=(reg0 = 192.168.0.??; reg1 = 192.168.0.1; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; next;)
> > >> @@ -5199,3 +5199,71 @@ AT_CHECK([grep lr_in_gw_redirect lrflows | grep cr-DR | sed 's/table=../table=??
> > >>
> > >> AT_CLEANUP
> > >> ])
> > >> +
> > >> +
> > >> +OVN_FOR_EACH_NORTHD([
> > >> +AT_SETUP([route tables -- flows])
> > >> +AT_KEYWORDS([route-tables-flows])
> > >> +ovn_start
> > >> +
> > >> +check ovn-nbctl lr-add lr0
> > >> +check ovn-nbctl lrp-add lr0 lrp0 00:00:00:00:00:01 192.168.0.1/24
> > >> +check ovn-nbctl lrp-add lr0 lrp1 00:00:00:00:01:01 192.168.1.1/24
> > >> +check ovn-nbctl lrp-add lr0 lrp2 00:00:00:00:02:01 192.168.2.1/24
> > >> +check ovn-nbctl lrp-set-options lrp1 route_table=rtb-1
> > >> +check ovn-nbctl lrp-set-options lrp2 route_table=rtb-2
> > >> +
> > >> +check ovn-nbctl lr-route-add lr0 0.0.0.0/0 192.168.0.10
> > >> +check ovn-nbctl --route-table=rtb-1 lr-route-add lr0 192.168.0.0/24 192.168.1.10
> > >> +check ovn-nbctl --route-table=rtb-2 lr-route-add lr0 0.0.0.0/0 192.168.0.10
> > >> +check ovn-nbctl --route-table=rtb-2 lr-route-add lr0 1.1.1.1/32 192.168.0.20
> > >> +check ovn-nbctl --route-table=rtb-2 lr-route-add lr0 2.2.2.2/32 192.168.0.30
> > >> +check ovn-nbctl --route-table=rtb-2 --ecmp lr-route-add lr0 2.2.2.2/32 192.168.0.31
> > >> +check ovn-nbctl --wait=sb sync
> > >> +
> > >> +ovn-sbctl dump-flows lr0 > lr0flows
> > >> +AT_CAPTURE_FILE([lr0flows])
> > >> +
> > >> +AT_CHECK([grep -e "lr_in_ip_routing_pre.*match=(1)" lr0flows | sed 's/table=../table=??/'], [0], [dnl
> > >> +  table=??(lr_in_ip_routing_pre), priority=0    , match=(1), action=(next;)
> > >> +])
> > >> +
> > >> +p1_reg=$(grep -oP "lr_in_ip_routing_pre.*lrp1.*action=\(reg7 = \K." lr0flows)
> > >> +p2_reg=$(grep -oP "lr_in_ip_routing_pre.*lrp2.*action=\(reg7 = \K." lr0flows)
> > >> +echo $p1_reg
> > >> +echo $p2_reg
> > >> +
> > >> +# exact register values are not predictable
> > >> +if [[ $p1_reg -eq 2 ] && [ $p2_reg -eq 1 ]]; then
> > >> +  echo "swap reg values in dump"
> > >> +  sed -i -r s'/^(.*lrp2.*action=\(reg7 = )(1)(.*)/\12\3/g' lr0flows  # "reg7 = 1" -> "reg7 = 2"
> > >> +  sed -i -r s'/^(.*lrp1.*action=\(reg7 = )(2)(.*)/\11\3/g' lr0flows  # "reg7 = 2" -> "reg7 = 1"
> > >> +  sed -i -r s'/^(.*match=\(reg7 == )(2)( &&.*lrp1.*)/\11\3/g' lr0flows  # "reg7 == 2" -> "reg7 == 1"
> > >> +  sed -i -r s'/^(.*match=\(reg7 == )(1)( &&.*lrp0.*)/\12\3/g' lr0flows  # "reg7 == 1" -> "reg7 == 2"
> > >> +fi
> > >> +
> > >> +check test $p1_reg != $p2_reg -a $((p1_reg * p2_reg)) -eq 2
> > >> +
> > >> +AT_CHECK([grep "lr_in_ip_routing_pre" lr0flows | sed 's/table=../table=??/' | sort], [0], [dnl
> > >> +  table=??(lr_in_ip_routing_pre), priority=0    , match=(1), action=(next;)
> > >> +  table=??(lr_in_ip_routing_pre), priority=100  , match=(inport == "lrp1"), action=(reg7 = 1; next;)
> > >> +  table=??(lr_in_ip_routing_pre), priority=100  , match=(inport == "lrp2"), action=(reg7 = 2; next;)
> > >> +])
> > >> +
> > >> +grep -e "(lr_in_ip_routing   ).*outport" lr0flows
> > >> +
> > >> +AT_CHECK([grep -e "(lr_in_ip_routing   ).*outport" lr0flows | sed 's/table=../table=??/' | sort], [0], [dnl
> > >> +  table=??(lr_in_ip_routing   ), priority=1    , match=(reg7 == 2 && ip4.dst == 0.0.0.0/0), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = 192.168.0.10; reg1 = 192.168.0.1; eth.src = 00:00:00:00:00:01; outport = "lrp0"; flags.loopback = 1; next;)
> > >> +  table=??(lr_in_ip_routing   ), priority=101  , match=(ip4.dst == 0.0.0.0/0), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = 192.168.0.10; reg1 = 192.168.0.1; eth.src = 00:00:00:00:00:01; outport = "lrp0"; flags.loopback = 1; next;)
> > >> +  table=??(lr_in_ip_routing   ), priority=149  , match=(ip4.dst == 192.168.0.0/24), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = ip4.dst; reg1 = 192.168.0.1; eth.src = 00:00:00:00:00:01; outport = "lrp0"; flags.loopback = 1; next;)
> > >> +  table=??(lr_in_ip_routing   ), priority=149  , match=(ip4.dst == 192.168.1.0/24), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = ip4.dst; reg1 = 192.168.1.1; eth.src = 00:00:00:00:01:01; outport = "lrp1"; flags.loopback = 1; next;)
> > >> +  table=??(lr_in_ip_routing   ), priority=149  , match=(ip4.dst == 192.168.2.0/24), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = ip4.dst; reg1 = 192.168.2.1; eth.src = 00:00:00:00:02:01; outport = "lrp2"; flags.loopback = 1; next;)
> > >> +  table=??(lr_in_ip_routing   ), priority=229  , match=(inport == "lrp0" && ip6.dst == fe80::/64), action=(ip.ttl--; reg8[[0..15]] = 0; xxreg0 = ip6.dst; xxreg1 = fe80::200:ff:fe00:1; eth.src = 00:00:00:00:00:01; outport = "lrp0"; flags.loopback = 1; next;)
> > >> +  table=??(lr_in_ip_routing   ), priority=229  , match=(inport == "lrp1" && ip6.dst == fe80::/64), action=(ip.ttl--; reg8[[0..15]] = 0; xxreg0 = ip6.dst; xxreg1 = fe80::200:ff:fe00:101; eth.src = 00:00:00:00:01:01; outport = "lrp1"; flags.loopback = 1; next;)
> > >> +  table=??(lr_in_ip_routing   ), priority=229  , match=(inport == "lrp2" && ip6.dst == fe80::/64), action=(ip.ttl--; reg8[[0..15]] = 0; xxreg0 = ip6.dst; xxreg1 = fe80::200:ff:fe00:201; eth.src = 00:00:00:00:02:01; outport = "lrp2"; flags.loopback = 1; next;)
> > >> +  table=??(lr_in_ip_routing   ), priority=49   , match=(reg7 == 1 && ip4.dst == 192.168.0.0/24), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = 192.168.1.10; reg1 = 192.168.1.1; eth.src = 00:00:00:00:01:01; outport = "lrp1"; flags.loopback = 1; next;)
> > >> +  table=??(lr_in_ip_routing   ), priority=65   , match=(reg7 == 2 && ip4.dst == 1.1.1.1/32), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = 192.168.0.20; reg1 = 192.168.0.1; eth.src = 00:00:00:00:00:01; outport = "lrp0"; flags.loopback = 1; next;)
> > >> +])
> > >> +
> > >> +AT_CLEANUP
> > >> +])
> > >> diff --git a/tests/ovn.at b/tests/ovn.at
> > >> index 8cd4edebe..48092c112 100644
> > >> --- a/tests/ovn.at
> > >> +++ b/tests/ovn.at
> > >> @@ -17905,7 +17905,7 @@ eth_dst=00000000ff01
> > >> ip_src=$(ip_to_hex 10 0 0 10)
> > >> ip_dst=$(ip_to_hex 172 168 0 101)
> > >> send_icmp_packet 1 1 $eth_src $eth_dst $ip_src $ip_dst c4c9 0000000000000000000000
> > >> -AT_CHECK([as hv1 ovs-ofctl dump-flows br-int | awk '/table=25, n_packets=1, n_bytes=45/{print $7" "$8}'],[0],[dnl
> > >> +AT_CHECK([as hv1 ovs-ofctl dump-flows br-int | awk '/table=26, n_packets=1, n_bytes=45/{print $7" "$8}'],[0],[dnl
> > >> priority=80,ip,reg15=0x3,metadata=0x3,nw_src=10.0.0.10 actions=drop
> > >> ])
> > >>
> > >> @@ -22089,6 +22089,430 @@ OVN_CLEANUP([hv1])
> > >> AT_CLEANUP
> > >> ])
> > >>
> > >> +
> > >> +OVN_FOR_EACH_NORTHD([
> > >> +AT_SETUP([route tables -- global routes])
> > >> +ovn_start
> > >> +
> > >> +# Logical network:
> > >> +# ls1 (192.168.1.0/24) - lrp-lr1-ls1 - lr1 - lrp-lr1-ls2 - ls2 (192.168.2.0/24)
> > >> +#
> > >> +# ls1 has lsp11 (192.168.1.11) and ls2 has lsp21 (192.168.2.21) and lsp22
> > >> +# (192.168.2.22)
> > >> +#
> > >> +# lrp-lr1-ls1 set options:route_table=rtb-1
> > >> +#
> > >> +# Static routes on lr1:
> > >> +# 0.0.0.0/0 nexthop 192.168.2.21
> > >> +# 1.1.1.1/32 nexthop 192.168.2.22 route_table=rtb-1
> > >> +#
> > >> +# Test 1:
> > >> +# lsp11 send packet to 2.2.2.2
> > >> +#
> > >> +# Expected result:
> > >> +# lsp21 should receive traffic, lsp22 should not receive any traffic
> > >> +#
> > >> +# Test 2:
> > >> +# lsp11 send packet to 1.1.1.1
> > >> +#
> > >> +# Expected result:
> > >> +# lsp21 should receive traffic, lsp22 should not receive any traffic
> > >> +
> > >> +ovn-nbctl lr-add lr1
> > >> +
> > >> +for i in 1 2; do
> > >> +    ovn-nbctl ls-add ls${i}
> > >> +    ovn-nbctl lrp-add lr1 lrp-lr1-ls${i} 00:00:00:01:0${i}:01 192.168.${i}.1/24
> > >> +    ovn-nbctl lsp-add ls${i} lsp-ls${i}-lr1 -- lsp-set-type lsp-ls${i}-lr1 router \
> > >> +        -- lsp-set-options lsp-ls${i}-lr1 router-port=lrp-lr1-ls${i} \
> > >> +        -- lsp-set-addresses lsp-ls${i}-lr1 router
> > >> +done
> > >> +
> > >> +# install static routes
> > >> +ovn-nbctl lr-route-add lr1 0.0.0.0/0 192.168.2.21
> > >> +ovn-nbctl --route-table=rtb-1 lr-route-add lr1 1.1.1.1/32 192.168.2.22
> > >> +
> > >> +# set lrp-lr1-ls1 route table
> > >> +ovn-nbctl lrp-set-options lrp-lr1-ls1 route_table=rtb-1
> > >> +
> > >> +# Create logical ports
> > >> +ovn-nbctl lsp-add ls1 lsp11 -- \
> > >> +    lsp-set-addresses lsp11 "f0:00:00:00:01:11 192.168.1.11"
> > >> +ovn-nbctl lsp-add ls2 lsp21 -- \
> > >> +    lsp-set-addresses lsp21 "f0:00:00:00:02:21 192.168.2.21"
> > >> +ovn-nbctl lsp-add ls2 lsp22 -- \
> > >> +    lsp-set-addresses lsp22 "f0:00:00:00:02:22 192.168.2.22"
> > >> +
> > >> +net_add n1
> > >> +sim_add hv1
> > >> +as hv1
> > >> +ovs-vsctl add-br br-phys
> > >> +ovn_attach n1 br-phys 192.168.0.1
> > >> +ovs-vsctl -- add-port br-int hv1-vif1 -- \
> > >> +    set interface hv1-vif1 external-ids:iface-id=lsp11 \
> > >> +    options:tx_pcap=hv1/vif1-tx.pcap \
> > >> +    options:rxq_pcap=hv1/vif1-rx.pcap \
> > >> +    ofport-request=1
> > >> +
> > >> +ovs-vsctl -- add-port br-int hv1-vif2 -- \
> > >> +    set interface hv1-vif2 external-ids:iface-id=lsp21 \
> > >> +    options:tx_pcap=hv1/vif2-tx.pcap \
> > >> +    options:rxq_pcap=hv1/vif2-rx.pcap \
> > >> +    ofport-request=2
> > >> +
> > >> +ovs-vsctl -- add-port br-int hv1-vif3 -- \
> > >> +    set interface hv1-vif3 external-ids:iface-id=lsp22 \
> > >> +    options:tx_pcap=hv1/vif3-tx.pcap \
> > >> +    options:rxq_pcap=hv1/vif3-rx.pcap \
> > >> +    ofport-request=3
> > >> +
> > >> +# wait for earlier changes to take effect
> > >> +AT_CHECK([ovn-nbctl --timeout=3 --wait=hv sync], [0], [ignore])
> > >> +
> > >> +for i in 1 2; do
> > >> +    packet="inport==\"lsp11\" && eth.src==f0:00:00:00:01:11 && eth.dst==00:00:00:01:01:01 &&
> > >> +            ip4 && ip.ttl==64 && ip4.src==192.168.1.11 && ip4.dst==$i.$i.$i.$i && icmp"
> > >> +    AT_CHECK([as hv1 ovs-appctl -t ovn-controller inject-pkt "$packet"])
> > >> +
> > >> +    # Assume all packets go to lsp21.
> > >> +    exp_packet="eth.src==00:00:00:01:02:01 && eth.dst==f0:00:00:00:02:21 &&
> > >> +            ip4 && ip.ttl==63 && ip4.src==192.168.1.11 && ip4.dst==$i.$i.$i.$i && icmp"
> > >> +    echo $exp_packet | ovstest test-ovn expr-to-packets >> expected_lsp21
> > >> +done
> > >> +> expected_lsp22
> > >> +
> > >> +# lsp21 should receive 2 packets and lsp22 should receive no packets
> > >> +OVS_WAIT_UNTIL([
> > >> +    rcv_n1=`$PYTHON "$ovs_srcdir/utilities/ovs-pcap.in" hv1/vif2-tx.pcap > lsp21.packets && cat lsp21.packets | wc -l`
> > >> +    rcv_n2=`$PYTHON "$ovs_srcdir/utilities/ovs-pcap.in" hv1/vif3-tx.pcap > lsp22.packets && cat lsp22.packets | wc -l`
> > >> +    echo $rcv_n1 $rcv_n2
> > >> +    test $rcv_n1 -eq 2 -a $rcv_n2 -eq 0])
> > >> +
> > >> +for i in 1 2; do
> > >> +    sort expected_lsp2$i > expout
> > >> +    AT_CHECK([cat lsp2${i}.packets | sort], [0], [expout])
> > >> +done
> > >> +
> > >> +OVN_CLEANUP([hv1])
> > >> +AT_CLEANUP
> > >> +])
> > >> +
> > >> +
> > >> +OVN_FOR_EACH_NORTHD([
> > >> +AT_SETUP([route tables -- directly connected routes])
> > >> +ovn_start
> > >> +
> > >> +# Logical network:
> > >> +# ls1 (192.168.1.0/24) - lrp-lr1-ls1 - lr1 - lrp-lr1-ls2 - ls2 (192.168.2.0/24)
> > >> +#
> > >> +# ls1 has lsp11 (192.168.1.11) and ls2 has lsp21 (192.168.2.21)
> > >> +#
> > >> +# lrp-lr1-ls1 set options:route_table=rtb-1
> > >> +#
> > >> +# Static routes on lr1:
> > >> +# 192.168.2.0/25 nexthop 192.168.1.11 route_table=rtb-1
> > >> +#
> > >> +# Test 1:
> > >> +# lsp11 send packet to 192.168.2.21
> > >> +#
> > >> +# Expected result:
> > >> +# lsp21 should receive traffic, lsp11 should not receive any traffic
> > >> +
> > >> +ovn-nbctl lr-add lr1
> > >> +
> > >> +for i in 1 2; do
> > >> +    ovn-nbctl ls-add ls${i}
> > >> +    ovn-nbctl lrp-add lr1 lrp-lr1-ls${i} 00:00:00:01:0${i}:01 192.168.${i}.1/24
> > >> +    ovn-nbctl lsp-add ls${i} lsp-ls${i}-lr1 -- lsp-set-type lsp-ls${i}-lr1 router \
> > >> +        -- lsp-set-options lsp-ls${i}-lr1 router-port=lrp-lr1-ls${i} \
> > >> +        -- lsp-set-addresses lsp-ls${i}-lr1 router
> > >> +done
> > >> +
> > >> +# install a static route that tries to override the directly connected route
> > >> +ovn-nbctl --route-table=rtb-1 lr-route-add lr1 192.168.2.0/25 192.168.1.11
> > >> +
> > >> +# set lrp-lr1-ls1 route table
> > >> +ovn-nbctl lrp-set-options lrp-lr1-ls1 route_table=rtb-1
> > >> +
> > >> +# Create logical ports
> > >> +ovn-nbctl lsp-add ls1 lsp11 -- \
> > >> +    lsp-set-addresses lsp11 "f0:00:00:00:01:11 192.168.1.11"
> > >> +ovn-nbctl lsp-add ls2 lsp21 -- \
> > >> +    lsp-set-addresses lsp21 "f0:00:00:00:02:21 192.168.2.21"
> > >> +
> > >> +net_add n1
> > >> +sim_add hv1
> > >> +as hv1
> > >> +ovs-vsctl add-br br-phys
> > >> +ovn_attach n1 br-phys 192.168.0.1
> > >> +ovs-vsctl -- add-port br-int hv1-vif1 -- \
> > >> +    set interface hv1-vif1 external-ids:iface-id=lsp11 \
> > >> +    options:tx_pcap=hv1/vif1-tx.pcap \
> > >> +    options:rxq_pcap=hv1/vif1-rx.pcap \
> > >> +    ofport-request=1
> > >> +
> > >> +ovs-vsctl -- add-port br-int hv1-vif2 -- \
> > >> +    set interface hv1-vif2 external-ids:iface-id=lsp21 \
> > >> +    options:tx_pcap=hv1/vif2-tx.pcap \
> > >> +    options:rxq_pcap=hv1/vif2-rx.pcap \
> > >> +    ofport-request=2
> > >> +
> > >> +# wait for earlier changes to take effect
> > >> +AT_CHECK([ovn-nbctl --timeout=3 --wait=hv sync], [0], [ignore])
> > >> +
> > >> +packet="inport==\"lsp11\" && eth.src==f0:00:00:00:01:11 && eth.dst==00:00:00:01:01:01 &&
> > >> +        ip4 && ip.ttl==64 && ip4.src==192.168.1.11 && ip4.dst==192.168.2.21 && icmp"
> > >> +AT_CHECK([as hv1 ovs-appctl -t ovn-controller inject-pkt "$packet"])
> > >> +
> > >> +# Assume all packets go to lsp21.
> > >> +exp_packet="eth.src==00:00:00:01:02:01 && eth.dst==f0:00:00:00:02:21 &&
> > >> +        ip4 && ip.ttl==63 && ip4.src==192.168.1.11 && ip4.dst==192.168.2.21 && icmp"
> > >> +echo $exp_packet | ovstest test-ovn expr-to-packets >> expected_lsp21
> > >> +> expected_lsp11
> > >> +
> > >> +# lsp21 should receive 1 icmp packet and lsp11 should receive no packets
> > >> +OVS_WAIT_UNTIL([
> > >> +    rcv_n11=`$PYTHON "$ovs_srcdir/utilities/ovs-pcap.in" hv1/vif1-tx.pcap > lsp11.packets && cat lsp11.packets | wc -l`
> > >> +    rcv_n21=`$PYTHON "$ovs_srcdir/utilities/ovs-pcap.in" hv1/vif2-tx.pcap > lsp21.packets && cat lsp21.packets | wc -l`
> > >> +    echo $rcv_n11 $rcv_n21
> > >> +    test $rcv_n11 -eq 0 -a $rcv_n21 -eq 1])
> > >> +
> > >> +for i in 11 21; do
> > >> +    sort expected_lsp$i > expout
> > >> +    AT_CHECK([cat lsp${i}.packets | sort], [0], [expout])
> > >> +done
> > >> +
> > >> +OVN_CLEANUP([hv1])
> > >> +AT_CLEANUP
> > >> +])
> > >> +
> > >> +
> > >> +OVN_FOR_EACH_NORTHD([
> > >> +AT_SETUP([route tables -- overlapping subnets])
> > >> +ovn_start
> > >> +
> > >> +# Logical network:
> > >> +#
> > >> +# ls1 (192.168.1.0/24) - lrp-lr1-ls1 -\   /- lrp-lr1-ls2 - ls2 (192.168.2.0/24)
> > >> +#                                      lr1
> > >> +# ls3 (192.168.3.0/24) - lrp-lr1-ls3 -/   \- lrp-lr1-ls4 - ls4 (192.168.4.0/24)
> > >> +#
> > >> +# ls1 has lsp11 (192.168.1.11)
> > >> +# ls2 has lsp21 (192.168.2.21)
> > >> +# ls3 has lsp31 (192.168.3.31)
> > >> +# ls4 has lsp41 (192.168.4.41)
> > >> +#
> > >> +# lrp-lr1-ls1 set options:route_table=rtb-1
> > >> +# lrp-lr1-ls2 set options:route_table=rtb-2
> > >> +#
> > >> +# Static routes on lr1:
> > >> +# 10.0.0.0/24 nexthop 192.168.3.31 route_table=rtb-1
> > >> +# 10.0.0.0/24 nexthop 192.168.4.41 route_table=rtb-2
> > >> +#
> > >> +# Test 1:
> > >> +# lsp11 send packet to 10.0.0.1
> > >> +#
> > >> +# Expected result:
> > >> +# lsp31 should receive traffic, lsp41 should not receive any traffic
> > >> +#
> > >> +# Test 2:
> > >> +# lsp21 send packet to 10.0.0.1
> > >> +#
> > >> +# Expected result:
> > >> +# lsp41 should receive traffic, lsp31 should not receive any traffic
> > >> +
> > >> +ovn-nbctl lr-add lr1
> > >> +
> > >> +# Create logical topology
> > >> +for i in $(seq 1 4); do
> > >> +    ovn-nbctl ls-add ls${i}
> > >> +    ovn-nbctl lrp-add lr1 lrp-lr1-ls${i} 00:00:00:01:0${i}:01 192.168.${i}.1/24
> > >> +    ovn-nbctl lsp-add ls${i} lsp-ls${i}-lr1 -- lsp-set-type lsp-ls${i}-lr1 router \
> > >> +        -- lsp-set-options lsp-ls${i}-lr1 router-port=lrp-lr1-ls${i} \
> > >> +        -- lsp-set-addresses lsp-ls${i}-lr1 router
> > >> +    ovn-nbctl lsp-add ls$i lsp${i}1 -- \
> > >> +        lsp-set-addresses lsp${i}1 "f0:00:00:00:0${i}:1${i} 192.168.${i}.${i}1"
> > >> +done
> > >> +
> > >> +# install static routes
> > >> +ovn-nbctl --route-table=rtb-1 lr-route-add lr1 10.0.0.0/24 192.168.3.31
> > >> +ovn-nbctl --route-table=rtb-2 lr-route-add lr1 10.0.0.0/24 192.168.4.41
> > >> +
> > >> +# set lrp-lr1-ls{1,2} route tables
> > >> +ovn-nbctl lrp-set-options lrp-lr1-ls1 route_table=rtb-1
> > >> +ovn-nbctl lrp-set-options lrp-lr1-ls2 route_table=rtb-2
> > >> +
> > >> +net_add n1
> > >> +sim_add hv1
> > >> +as hv1
> > >> +ovs-vsctl add-br br-phys
> > >> +ovn_attach n1 br-phys 192.168.0.1
> > >> +
> > >> +for i in $(seq 1 4); do
> > >> +    ovs-vsctl -- add-port br-int hv1-vif${i} -- \
> > >> +        set interface hv1-vif${i} external-ids:iface-id=lsp${i}1 \
> > >> +        options:tx_pcap=hv1/vif${i}-tx.pcap \
> > >> +        options:rxq_pcap=hv1/vif${i}-rx.pcap \
> > >> +        ofport-request=${i}
> > >> +done
> > >> +
> > >> +# wait for earlier changes to take effect
> > >> +AT_CHECK([ovn-nbctl --timeout=3 --wait=hv sync], [0], [ignore])
> > >> +
> > >> +# lsp31 should receive the packet coming from lsp11
> > >> +# lsp41 should receive the packet coming from lsp21
> > >> +for i in $(seq 1 2); do
> > >> +    di=$(( i + 2))  # dst index
> > >> +    ri=$(( 5 - i))  # reverse index
> > >> +    packet="inport==\"lsp${i}1\" && eth.src==f0:00:00:00:0${i}:1${i} &&
> > >> +            eth.dst==00:00:00:01:0${i}:01 && ip4 && ip.ttl==64 &&
> > >> +            ip4.src==192.168.${i}.${i}1 && ip4.dst==10.0.0.1 && icmp"
> > >> +    AT_CHECK([as hv1 ovs-appctl -t ovn-controller inject-pkt "$packet"])
> > >> +
> > >> +    # Assume all packets go to lsp${di}1.
> > >> +    exp_packet="eth.src==00:00:00:01:0${di}:01 && eth.dst==f0:00:00:00:0${di}:1${di} &&
> > >> +            ip4 && ip.ttl==63 && ip4.src==192.168.${i}.${i}1 && ip4.dst==10.0.0.1 && icmp"
> > >> +    echo $exp_packet | ovstest test-ovn expr-to-packets >> expected_lsp${di}1
> > >> +    > expected_lsp${ri}1
> > >> +
> > >> +    OVS_WAIT_UNTIL([
> > >> +        rcv_n1=`$PYTHON "$ovs_srcdir/utilities/ovs-pcap.in" hv1/vif${di}-tx.pcap > lsp${di}1.packets && cat lsp${di}1.packets | wc -l`
> > >> +        rcv_n2=`$PYTHON "$ovs_srcdir/utilities/ovs-pcap.in" hv1/vif${ri}-tx.pcap > lsp${ri}1.packets && cat lsp${ri}1.packets | wc -l`
> > >> +        echo $rcv_n1 $rcv_n2
> > >> +        test $rcv_n1 -eq 1 -a $rcv_n2 -eq 0])
> > >> +
> > >> +    for j in "${di}1" "${ri}1"; do
> > >> +        sort expected_lsp${j} > expout
> > >> +        AT_CHECK([cat lsp${j}.packets | sort], [0], [expout])
> > >> +    done
> > >> +
> > >> +    # cleanup tx pcap files
> > >> +    for j in ${di} ${ri}; do
> > >> +        ovs-vsctl -- remove interface hv1-vif${j} options tx_pcap
> > >> +        > hv1/vif${j}-tx.pcap
> > >> +        ovs-vsctl -- set interface hv1-vif${j} external-ids:iface-id=lsp${j}1 \
> > >> +            options:tx_pcap=hv1/vif${j}-tx.pcap
> > >> +    done
> > >> +done
> > >> +
> > >> +OVN_CLEANUP([hv1])
> > >> +AT_CLEANUP
> > >> +])
> > >> +
> > >> +
> > >> +OVN_FOR_EACH_NORTHD([
> > >> +AT_SETUP([route tables IPv6 -- overlapping subnets])
> > >> +ovn_start
> > >> +
> > >> +# Logical network:
> > >> +#
> > >> +# ls1 (2001:db8:1::/64) - lrp-lr1-ls1 -\   /- lrp-lr1-ls2 - ls2 (2001:db8:2::/64)
> > >> +#                                       lr1
> > >> +# ls3 (2001:db8:3::/64) - lrp-lr1-ls3 -/   \- lrp-lr1-ls4 - ls4 (2001:db8:4::/64)
> > >> +#
> > >> +# ls1 has lsp11 (2001:db8:1::11)
> > >> +# ls2 has lsp21 (2001:db8:2::21)
> > >> +# ls3 has lsp31 (2001:db8:3::31)
> > >> +# ls4 has lsp41 (2001:db8:4::41)
> > >> +#
> > >> +# lrp-lr1-ls1 set options:route_table=rtb-1
> > >> +# lrp-lr1-ls2 set options:route_table=rtb-2
> > >> +#
> > >> +# Static routes on lr1:
> > >> +# 2001:db8:2000::/64 nexthop 2001:db8:3::31 route_table=rtb-1
> > >> +# 2001:db8:2000::/64 nexthop 2001:db8:4::41 route_table=rtb-2
> > >> +#
> > >> +# Test 1:
> > >> +# lsp11 send packet to 2001:db8:2000::1
> > >> +#
> > >> +# Expected result:
> > >> +# lsp31 should receive traffic, lsp41 should not receive any traffic
> > >> +#
> > >> +# Test 2:
> > >> +# lsp21 send packet to 2001:db8:2000::1
> > >> +#
> > >> +# Expected result:
> > >> +# lsp41 should receive traffic, lsp31 should not receive any traffic
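
As I understand the series, lrp-lr1-ls3 and lrp-lr1-ls4 deliberately get
no options:route_table here: traffic entering through them is matched
only against routes whose route_table field is empty, i.e. the global
table.  A hedged example of such a global route (hypothetical prefix,
not part of this test):

    # Without --route-table the route lands in the global table and is
    # considered for traffic from every LRP, bound route table or not.
    ovn-nbctl lr-route-add lr1 2001:db8:3000::/64 2001:db8:3::31
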
> > >> +
> > >> +ovn-nbctl lr-add lr1
> > >> +
> > >> +# Create logical topology
> > >> +for i in $(seq 1 4); do
> > >> +    ovn-nbctl ls-add ls${i}
> > >> +    ovn-nbctl lrp-add lr1 lrp-lr1-ls${i} 00:00:00:01:0${i}:01 2001:db8:${i}::1/64
> > >> +    ovn-nbctl lsp-add ls${i} lsp-ls${i}-lr1 -- lsp-set-type lsp-ls${i}-lr1 router \
> > >> +        -- lsp-set-options lsp-ls${i}-lr1 router-port=lrp-lr1-ls${i} \
> > >> +        -- lsp-set-addresses lsp-ls${i}-lr1 router
> > >> +    ovn-nbctl lsp-add ls$i lsp${i}1 -- \
> > >> +        lsp-set-addresses lsp${i}1 "f0:00:00:00:0${i}:1${i} 2001:db8:${i}::${i}1"
> > >> +done
> > >> +
> > >> +# install static routes
> > >> +ovn-nbctl --route-table=rtb-1 lr-route-add lr1 2001:db8:2000::/64 2001:db8:3::31
> > >> +ovn-nbctl --route-table=rtb-2 lr-route-add lr1 2001:db8:2000::/64 2001:db8:4::41
> > >> +
> > >> +# set lrp-lr1-ls{1,2} route tables
> > >> +ovn-nbctl lrp-set-options lrp-lr1-ls1 route_table=rtb-1
> > >> +ovn-nbctl lrp-set-options lrp-lr1-ls2 route_table=rtb-2
> > >> +
> > >> +net_add n1
> > >> +sim_add hv1
> > >> +as hv1
> > >> +ovs-vsctl add-br br-phys
> > >> +ovn_attach n1 br-phys 192.168.0.1
> > >> +
> > >> +for i in $(seq 1 4); do
> > >> +    ovs-vsctl -- add-port br-int hv1-vif${i} -- \
> > >> +        set interface hv1-vif${i} external-ids:iface-id=lsp${i}1 \
> > >> +        options:tx_pcap=hv1/vif${i}-tx.pcap \
> > >> +        options:rxq_pcap=hv1/vif${i}-rx.pcap \
> > >> +        ofport-request=${i}
> > >> +done
> > >> +
> > >> +# wait for earlier changes to take effect
> > >> +AT_CHECK([ovn-nbctl --timeout=3 --wait=hv sync], [0], [ignore])
> > >> +
> > >> +# lsp31 should receive the packet sent from lsp11
> > >> +# lsp41 should receive the packet sent from lsp21
> > >> +for i in $(seq 1 2); do
> > >> +    di=$(( i + 2))  # dst index
> > >> +    ri=$(( 5 - i))  # reverse index
> > >> +    packet="inport==\"lsp${i}1\" && eth.src==f0:00:00:00:0${i}:1${i} &&
> > >> +            eth.dst==00:00:00:01:0${i}:01 && ip6 && ip.ttl==64 &&
> > >> +            ip6.src==2001:db8:${i}::${i}1 && ip6.dst==2001:db8:2000::1 && icmp6"
> > >> +    AT_CHECK([as hv1 ovs-appctl -t ovn-controller inject-pkt "$packet"])
> > >> +
> > >> +    # All packets in this iteration are expected to arrive at lsp${di}1.
> > >> +    exp_packet="eth.src==00:00:00:01:0${di}:01 && eth.dst==f0:00:00:00:0${di}:1${di} && ip6 &&
> > >> +                ip.ttl==63 && ip6.src==2001:db8:${i}::${i}1 && ip6.dst==2001:db8:2000::1 && icmp6"
> > >> +    echo $exp_packet | ovstest test-ovn expr-to-packets >> expected_lsp${di}1
> > >> +    > expected_lsp${ri}1
> > >> +
> > >> +    OVS_WAIT_UNTIL([
> > >> +        rcv_n1=`$PYTHON "$ovs_srcdir/utilities/ovs-pcap.in" hv1/vif${di}-tx.pcap > lsp${di}1.packets && cat lsp${di}1.packets | wc -l`
> > >> +        rcv_n2=`$PYTHON "$ovs_srcdir/utilities/ovs-pcap.in" hv1/vif${ri}-tx.pcap > lsp${ri}1.packets && cat lsp${ri}1.packets | wc -l`
> > >> +        echo $rcv_n1 $rcv_n2
> > >> +        test $rcv_n1 -eq 1 -a $rcv_n2 -eq 0])
> > >> +
> > >> +    for j in "${di}1" "${ri}1"; do
> > >> +        sort expected_lsp${j} > expout
> > >> +        AT_CHECK([cat lsp${j}.packets | sort], [0], [expout])
> > >> +    done
> > >> +
> > >> +    # Reset tx pcap files, so the next iteration counts packets from zero.
> > >> +    for j in ${di} ${ri}; do
> > >> +        ovs-vsctl -- remove interface hv1-vif${j} options tx_pcap
> > >> +        > hv1/vif${j}-tx.pcap
> > >> +        ovs-vsctl -- set interface hv1-vif${j} external-ids:iface-id=lsp${j}1 \
> > >> +            options:tx_pcap=hv1/vif${j}-tx.pcap
> > >> +    done
> > >> +done
> > >> +
> > >> +OVN_CLEANUP([hv1])
> > >> +AT_CLEANUP
> > >> +])
> > >> +
> > >> +
> > >> OVN_FOR_EACH_NORTHD([
> > >> AT_SETUP([forwarding group: 3 HVs, 1 LR, 2 LS])
> > >> AT_KEYWORDS([forwarding-group])
> > >> @@ -22844,7 +23268,7 @@ ovn-sbctl dump-flows > sbflows
> > >> AT_CAPTURE_FILE([sbflows])
> > >> AT_CAPTURE_FILE([offlows])
> > >> OVS_WAIT_UNTIL([
> > >> -    as hv1 ovs-ofctl dump-flows br-int table=20 > offlows
> > >> +    as hv1 ovs-ofctl dump-flows br-int table=21 > offlows
> > >>     test $(grep -c "load:0x64->NXM_NX_PKT_MARK" offlows) = 1 && \
> > >>     test $(grep -c "load:0x3->NXM_NX_PKT_MARK" offlows) = 1 && \
> > >>     test $(grep -c "load:0x4->NXM_NX_PKT_MARK" offlows) = 1 && \
> > >> @@ -22937,12 +23361,12 @@ send_ipv4_pkt hv1 hv1-vif1 505400000003 00000000ff01 \
> > >>     $(ip_to_hex 10 0 0 3) $(ip_to_hex 172 168 0 120)
> > >>
> > >> OVS_WAIT_UNTIL([
> > >> -    test 1 -eq $(as hv1 ovs-ofctl dump-flows br-int table=20 | \
> > >> +    test 1 -eq $(as hv1 ovs-ofctl dump-flows br-int table=21 | \
> > >>     grep "load:0x2->NXM_NX_PKT_MARK" -c)
> > >> ])
> > >>
> > >> AT_CHECK([
> > >> -    test 0 -eq $(as hv1 ovs-ofctl dump-flows br-int table=20 | \
> > >> +    test 0 -eq $(as hv1 ovs-ofctl dump-flows br-int table=21 | \
> > >>     grep "load:0x64->NXM_NX_PKT_MARK" -c)
> > >> ])
> > >>
> > >> @@ -23645,7 +24069,7 @@ AT_CHECK([
> > >>         grep "priority=100" | \
> > >>         grep -c "ct(commit,zone=NXM_NX_REG11\\[[0..15\\]],.*exec(move:NXM_OF_ETH_SRC\\[[\\]]->NXM_NX_CT_LABEL\\[[32..79\\]],load:0x[[0-9]]->NXM_NX_CT_LABEL\\[[80..95\\]]))"
> > >>
> > >> -        grep table=22 hv${hv}flows | \
> > >> +        grep table=23 hv${hv}flows | \
> > >>         grep "priority=200" | \
> > >>         grep -c "actions=move:NXM_NX_CT_LABEL\\[[32..79\\]]->NXM_OF_ETH_DST\\[[\\]]"
> > >>     done; :], [0], [dnl
> > >> @@ -23770,7 +24194,7 @@ AT_CHECK([
> > >>         grep "priority=100" | \
> > >>         grep -c "ct(commit,zone=NXM_NX_REG11\\[[0..15\\]],.*exec(move:NXM_OF_ETH_SRC\\[[\\]]->NXM_NX_CT_LABEL\\[[32..79\\]],load:0x[[0-9]]->NXM_NX_CT_LABEL\\[[80..95\\]]))"
> > >>
> > >> -        grep table=22 hv${hv}flows | \
> > >> +        grep table=23 hv${hv}flows | \
> > >>         grep "priority=200" | \
> > >>         grep -c "actions=move:NXM_NX_CT_LABEL\\[[32..79\\]]->NXM_OF_ETH_DST\\[[\\]]"
> > >>     done; :], [0], [dnl
> > >> @@ -24392,7 +24816,7 @@ AT_CHECK([as hv1 ovs-ofctl dump-flows br-int | grep "actions=controller" | grep
> > >> ])
> > >>
> > >> # The packet should've been dropped in the lr_in_arp_resolve stage.
> > >> -AT_CHECK([as hv1 ovs-ofctl dump-flows br-int | grep -E "table=22, n_packets=1,.* priority=1,ip,metadata=0x${sw_key},nw_dst=10.0.1.1 actions=drop" -c], [0], [dnl
> > >> +AT_CHECK([as hv1 ovs-ofctl dump-flows br-int | grep -E "table=23, n_packets=1,.* priority=1,ip,metadata=0x${sw_key},nw_dst=10.0.1.1 actions=drop" -c], [0], [dnl
> > >> 1
> > >> ])
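
The table renumberings in these hunks (20 -> 21, 22 -> 23) all fall out
of the new IN_IP_ROUTING_PRE ingress stage: every stage after it, and
therefore its OpenFlow table, shifts up by one.  For anyone rebasing
other tests on top of this, something like the following can confirm
the mapping in the test sandbox (a sketch; exact output differs
between builds):

    # List logical flows with their stage/table numbers, then check that
    # the corresponding OpenFlow table on the hypervisor is populated.
    ovn-sbctl lflow-list lr1 | head
    as hv1 ovs-ofctl dump-flows br-int table=21 | head
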
> > >>
> > >> --
> > >> 2.30.0
> > >>
> > _______________________________________________
> > dev mailing list
> > dev at openvswitch.org
> > https://mail.openvswitch.org/mailman/listinfo/ovs-dev

