[ovs-dev] [PATCH ovn] Stop setting Load_Balancer references in SB Datapath_Binding records.

Numan Siddique numans at ovn.org
Fri Jul 23 22:37:29 UTC 2021


On Fri, Jul 23, 2021 at 2:42 PM Mark Michelson <mmichels at redhat.com> wrote:
>
> Hi Dumitru,
>
> From a functionality standpoint,
>
> Acked-by: Mark Michelson <mmichels at redhat.com>
>
> I have a question and suggestion below.
>

Hi Dumitru,

Thanks for addressing this issue.

I have a few comments.  Please see below.


> On 7/23/21 12:27 PM, Dumitru Ceara wrote:
> > Commit 6aab704fb67d ("controller: Avoid unnecessary load balancer flow
> > processing.") handled the case when load balancers are updated but
> > addition/removal of load balancers will trigger almost all logical
> > flows to be reprocessed by ovn-controller.
> >
> > The references are just used to optimize lookups in ovn-controller,
> > but we can avoid them altogether.
> >
> > Signed-off-by: Dumitru Ceara <dceara at redhat.com>
> > ---
> > On a large scale deployment (simulating ovn-kubernetes) with 120 nodes,
> > and 16K load balancer VIPs associated to each node's logical switch,
> > when adding a new load balancer (just one VIP) inc_proc_eng debug logs
> > show:
> >
> > - without fix:
> > 2021-07-23T08:49:34.811Z|43363|inc_proc_eng|DBG|node: logical_flow_output, handler for input SB_logical_flow took 16824ms
> > 2021-07-23T08:49:34.813Z|43364|inc_proc_eng|DBG|node: logical_flow_output, handler for input SB_logical_dp_group took 0ms
> > 2021-07-23T08:49:44.018Z|43365|inc_proc_eng|DBG|node: logical_flow_output, handler for input SB_load_balancer took 9205ms
> >
> > - with fix:
> > 2021-07-23T16:24:12.284Z|17720|inc_proc_eng|DBG|node: logical_flow_output, handler for input SB_logical_flow took 163ms
> > 2021-07-23T16:24:12.284Z|17721|inc_proc_eng|DBG|node: logical_flow_output, handler for input SB_logical_dp_group took 0ms
> > 2021-07-23T16:24:12.285Z|17722|inc_proc_eng|DBG|node: logical_flow_output, handler for input SB_load_balancer took 2ms
> > ---
> >   controller/lflow.c          | 10 +++++-----
> >   controller/ovn-controller.c | 38 +++++++++++++++++++++++++++++++++++++
> >   controller/ovn-controller.h |  5 +++++
> >   lib/ovn-util.c              |  2 +-
> >   northd/ovn-northd.c         | 17 ++++-------------
> >   northd/ovn_northd.dl        | 14 ++++++++------
> >   ovn-sb.xml                  |  2 +-
> >   tests/ovn-northd.at         | 12 ++++++------
> >   8 files changed, 68 insertions(+), 32 deletions(-)
> >
> > diff --git a/controller/lflow.c b/controller/lflow.c
> > index 871d3c54d..b2976992d 100644
> > --- a/controller/lflow.c
> > +++ b/controller/lflow.c
> > @@ -1764,11 +1764,11 @@ lflow_processing_end:
> >
> >       /* Add load balancer hairpin flows if the datapath has any load balancers
> >        * associated. */
> > -    for (size_t i = 0; i < dp->n_load_balancers; i++) {
> > -        const struct sbrec_load_balancer *lb =
> > -            sbrec_load_balancer_table_get_for_uuid(l_ctx_in->lb_table,
> > -                                                   &dp->load_balancers[i]);
> > -        consider_lb_hairpin_flows(lb, l_ctx_in->local_datapaths,
> > +    struct local_datapath *ldp = get_local_datapath(l_ctx_in->local_datapaths,
> > +                                                    dp->tunnel_key);
> > +    for (size_t i = 0; i < ldp->n_load_balancers; i++) {
> > +        consider_lb_hairpin_flows(ldp->load_balancers[i],
> > +                                  l_ctx_in->local_datapaths,
> >                                     l_ctx_out->flow_table);
> >       }
> >
> > diff --git a/controller/ovn-controller.c b/controller/ovn-controller.c
> > index 3a9bdbc97..2cf467dff 100644
> > --- a/controller/ovn-controller.c
> > +++ b/controller/ovn-controller.c
> > @@ -2405,6 +2405,42 @@ lflow_output_port_groups_handler(struct engine_node *node, void *data)
> >       return _lflow_output_resource_ref_handler(node, data, REF_TYPE_PORTGROUP);
> >   }
> >
> > +static void
> > +init_lbs_per_datapath(struct ed_type_runtime_data *rt_data,
> > +                      const struct sbrec_load_balancer_table *lb_table)
> > +{
> > +    const struct sbrec_load_balancer *lb;
> > +    SBREC_LOAD_BALANCER_TABLE_FOR_EACH (lb, lb_table) {
> > +        for (size_t i = 0; i < lb->n_datapaths; i++) {
> > +            struct local_datapath *ldp =
> > +                get_local_datapath(&rt_data->local_datapaths,
> > +                                   lb->datapaths[i]->tunnel_key);
> > +            if (!ldp) {
> > +                continue;
> > +            }
> > +            if (ldp->n_load_balancers == ldp->n_allocated_load_balancers) {
> > +                ldp->load_balancers =
> > +                    x2nrealloc(ldp->load_balancers,
> > +                               &ldp->n_allocated_load_balancers,
> > +                               sizeof *ldp->load_balancers);
> > +            }
>
> Since "ldp->load_balancers" is never modified after this point, you
> could avoid some reallocations by pre-allocating ldp->load_balancers to
> the number of load balancers in the SB load balancer table. The tradeoff
> would be using more memory (in most cases). Would it be worth making
> that change?
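
If we do go that way, the pre-allocation could look roughly like the
sketch below.  The struct and function names are stand-ins, not the real
"struct local_datapath": each datapath can reference at most every row in
the SB Load_Balancer table, so a single allocation sized to that table
("n_sb_lbs" here) avoids the repeated x2nrealloc calls, at the cost of
potentially unused slots:

```c
#include <assert.h>
#include <stdlib.h>

/* Stub standing in for the LB fields of "struct local_datapath". */
struct stub_ldp {
    const void **load_balancers;
    size_t n_load_balancers;
    size_t n_allocated_load_balancers;
};

/* Pre-allocate once for the worst case (all SB LBs on this datapath),
 * so adding entries never reallocates. */
static void
ldp_prealloc_lbs(struct stub_ldp *ldp, size_t n_sb_lbs)
{
    ldp->load_balancers = malloc(n_sb_lbs * sizeof *ldp->load_balancers);
    ldp->n_allocated_load_balancers = n_sb_lbs;
    ldp->n_load_balancers = 0;
}

/* Append an LB reference; capacity is guaranteed by the pre-allocation. */
static void
ldp_add_lb(struct stub_ldp *ldp, const void *lb)
{
    assert(ldp->n_load_balancers < ldp->n_allocated_load_balancers);
    ldp->load_balancers[ldp->n_load_balancers++] = lb;
}
```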
>
> > +            ldp->load_balancers[ldp->n_load_balancers++] = lb;
> > +        }
> > +    }
> > +}
> > +
> > +static void
> > +cleanup_lbs_per_datapath(struct ed_type_runtime_data *rt_data)
> > +{
> > +    struct local_datapath *dp;
> > +    HMAP_FOR_EACH (dp, hmap_node, &rt_data->local_datapaths) {
> > +        free(dp->load_balancers);
> > +        dp->load_balancers = NULL;
> > +        dp->n_allocated_load_balancers = 0;
> > +        dp->n_load_balancers = 0;
> > +    }
> > +}
> > +
> >   static bool
> >   lflow_output_runtime_data_handler(struct engine_node *node,
> >                                     void *data OVS_UNUSED)
> > @@ -2430,6 +2466,7 @@ lflow_output_runtime_data_handler(struct engine_node *node,
> >       struct lflow_ctx_out l_ctx_out;
> >       struct ed_type_lflow_output *fo = data;
> >       init_lflow_ctx(node, rt_data, fo, &l_ctx_in, &l_ctx_out);
> > +    init_lbs_per_datapath(rt_data, l_ctx_in.lb_table);
> >

For every runtime data change we will rebuild 'lbs_per_datapath'.  I
think this is not necessary.

We use ldp->load_balancers only in lflow_add_flows_for_datapath(), when
a new datapath is added to "local_datapaths".

I'd suggest building the list of load balancers only for the newly
added local datapath.  Or perhaps run the
HMAP_FOR_EACH (tdp, node, tracked_dp_bindings) {} loop once to check
whether a new datapath was added, build the list only in that case, and
then run the loop again?
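
A rough sketch of that two-pass idea: a cheap first scan decides whether
the expensive LB list build is needed at all.  "struct stub_tdp" and
"is_new" below are made-up stand-ins for the real
tracked_binding_datapath fields:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for the tracked datapath binding entries. */
struct stub_tdp {
    bool is_new;    /* Stand-in for "was this datapath just added?". */
};

/* Pass 1: cheap scan over the tracked datapaths.  Only if this returns
 * true would the handler build the per-datapath LB lists and run the
 * loop a second time to add the flows. */
static bool
any_new_datapath(const struct stub_tdp *tdps, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (tdps[i].is_new) {
            return true;
        }
    }
    return false;
}
```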

A couple more suggestions:

1. Since we are building and destroying the load balancer list, I'd
suggest storing it in a separate hmap rather than in
"struct local_datapath".

2. We could probably pass "load_balancers" and "n_load_balancers"
directly to lflow_add_flows_for_datapath().
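
To illustrate suggestion 1, the temporary map could be keyed on the
datapath tunnel key and live only inside the handler.  A plain array
with linear lookup stands in for the real OVS "struct hmap" here, and
all names are made up:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One entry per local datapath: tunnel key -> its load balancers. */
struct dp_lbs {
    uint32_t tunnel_key;    /* Datapath_Binding tunnel key. */
    const void **lbs;       /* Stand-in for sbrec_load_balancer rows. */
    size_t n_lbs;
};

/* Look up a datapath's LB list by tunnel key.  The real code would use
 * hmap_first_with_hash() style lookups instead of a linear scan. */
static const struct dp_lbs *
dp_lbs_find(const struct dp_lbs *map, size_t n, uint32_t tunnel_key)
{
    for (size_t i = 0; i < n; i++) {
        if (map[i].tunnel_key == tunnel_key) {
            return &map[i];
        }
    }
    return NULL;
}
```

With this shape, suggestion 2 falls out naturally: the handler looks up
the entry and passes entry->lbs and entry->n_lbs straight to
lflow_add_flows_for_datapath(), then frees the map at the end, leaving
"struct local_datapath" untouched.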

Let me know what you think.

Thanks
Numan



> >       struct tracked_binding_datapath *tdp;
> >       HMAP_FOR_EACH (tdp, node, tracked_dp_bindings) {
> > @@ -2450,6 +2487,7 @@ lflow_output_runtime_data_handler(struct engine_node *node,
> >           }
> >       }
> >
> > +    cleanup_lbs_per_datapath(rt_data);
> >       engine_set_node_state(node, EN_UPDATED);
> >       return true;
> >   }
> > diff --git a/controller/ovn-controller.h b/controller/ovn-controller.h
> > index b864ed0fa..4e907e930 100644
> > --- a/controller/ovn-controller.h
> > +++ b/controller/ovn-controller.h
> > @@ -69,6 +69,11 @@ struct local_datapath {
> >       size_t n_allocated_peer_ports;
> >
> >       struct shash external_ports;
> > +
> > +    const struct sbrec_load_balancer **load_balancers;
> > +
> > +    size_t n_load_balancers;
> > +    size_t n_allocated_load_balancers;
> >   };
> >
> >   struct local_datapath *get_local_datapath(const struct hmap *,
> > diff --git a/lib/ovn-util.c b/lib/ovn-util.c
> > index 494d6d42d..3805923c8 100644
> > --- a/lib/ovn-util.c
> > +++ b/lib/ovn-util.c
> > @@ -760,7 +760,7 @@ ip_address_and_port_from_lb_key(const char *key, char **ip_address,
> >
> >   /* Increment this for any logical flow changes, if an existing OVN action is
> >    * modified or a stage is added to a logical pipeline. */
> > -#define OVN_INTERNAL_MINOR_VER 1
> > +#define OVN_INTERNAL_MINOR_VER 2
> >
> >   /* Returns the OVN version. The caller must free the returned value. */
> >   char *
> > diff --git a/northd/ovn-northd.c b/northd/ovn-northd.c
> > index 1058c1c26..a0b946e71 100644
> > --- a/northd/ovn-northd.c
> > +++ b/northd/ovn-northd.c
> > @@ -3628,24 +3628,15 @@ build_ovn_lbs(struct northd_context *ctx, struct hmap *datapaths,
> >           free(lb_dps);
> >       }
> >
> > -    /* Set the list of associated load balanacers to a logical switch
> > -     * datapath binding in the SB DB. */
> > +    /* Datapath_Binding.load_balancers is not used anymore, it's still in the
> > +     * schema for compatibility reasons.  Reset it to empty, just in case.
> > +     */
> >       HMAP_FOR_EACH (od, key_node, datapaths) {
> >           if (!od->nbs) {
> >               continue;
> >           }
> >
> > -        struct uuid *lb_uuids =
> > -            xmalloc(od->nbs->n_load_balancer * sizeof *lb_uuids);
> > -        for (size_t i = 0; i < od->nbs->n_load_balancer; i++) {
> > -            const struct uuid *lb_uuid =
> > -                &od->nbs->load_balancer[i]->header_.uuid;
> > -            lb = ovn_northd_lb_find(lbs, lb_uuid);
> > -            lb_uuids[i] = lb->slb->header_.uuid;
> > -        }
> > -        sbrec_datapath_binding_set_load_balancers(od->sb, lb_uuids,
> > -                                                  od->nbs->n_load_balancer);
> > -        free(lb_uuids);
> > +        sbrec_datapath_binding_set_load_balancers(od->sb, NULL, 0);
> >       }
> >   }
> >
> > diff --git a/northd/ovn_northd.dl b/northd/ovn_northd.dl
> > index ab33a139e..64239d809 100644
> > --- a/northd/ovn_northd.dl
> > +++ b/northd/ovn_northd.dl
> > @@ -53,14 +53,12 @@ sb::Out_Meter(._uuid = hash128(name),
> >    * except tunnel id, which is allocated separately (see TunKeyAllocation). */
> >   relation OutProxy_Datapath_Binding (
> >       _uuid: uuid,
> > -    load_balancers: Set<uuid>,
> >       external_ids: Map<string,string>
> >   )
> >
> >   /* Datapath_Binding table */
> > -OutProxy_Datapath_Binding(uuid, load_balancers, external_ids) :-
> > +OutProxy_Datapath_Binding(uuid, external_ids) :-
> >       nb::Logical_Switch(._uuid = uuid, .name = name, .external_ids = ids,
> > -                       .load_balancer = load_balancers,
> >                          .other_config = other_config),
> >       var uuid_str = uuid2str(uuid),
> >       var external_ids = {
> > @@ -76,7 +74,7 @@ OutProxy_Datapath_Binding(uuid, load_balancers, external_ids) :-
> >           eids
> >       }.
> >
> > -OutProxy_Datapath_Binding(uuid, set_empty(), external_ids) :-
> > +OutProxy_Datapath_Binding(uuid, external_ids) :-
> >       lr in nb::Logical_Router(._uuid = uuid, .name = name, .external_ids = ids,
> >                                .options = options),
> >       lr.is_enabled(),
> > @@ -99,8 +97,12 @@ OutProxy_Datapath_Binding(uuid, set_empty(), external_ids) :-
> >       }.
> >
> >   sb::Out_Datapath_Binding(uuid, tunkey, load_balancers, external_ids) :-
> > -    OutProxy_Datapath_Binding(uuid, load_balancers, external_ids),
> > -    TunKeyAllocation(uuid, tunkey).
> > +    OutProxy_Datapath_Binding(uuid, external_ids),
> > +    TunKeyAllocation(uuid, tunkey),
> > +    /* Datapath_Binding.load_balancers is not used anymore, it's still in the
> > +     * schema for compatibility reasons.  Reset it to empty, just in case.
> > +     */
> > +    var load_balancers = set_empty().
> >
> >
> >   /* Proxy table for Out_Datapath_Binding: contains all Datapath_Binding fields,
> > diff --git a/ovn-sb.xml b/ovn-sb.xml
> > index a39778ee0..e57d4f7ec 100644
> > --- a/ovn-sb.xml
> > +++ b/ovn-sb.xml
> > @@ -2649,7 +2649,7 @@ tcp.flags = RST;
> >
> >       <column name="load_balancers">
> >         <p>
> > -        Load balancers associated with the datapath.
> > +        Not used anymore; kept for backwards compatibility of the schema.
> >         </p>
> >       </column>
> >
> > diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
> > index 3fb4c8bc9..689f76737 100644
> > --- a/tests/ovn-northd.at
> > +++ b/tests/ovn-northd.at
> > @@ -2366,7 +2366,7 @@ echo
> >   echo "__file__:__line__: Check that SB lb0 has sw0 in datapaths column."
> >
> >   check_column "$sw0_sb_uuid" sb:load_balancer datapaths name=lb0
> > -check_column "$lb0_uuid" sb:datapath_binding load_balancers external_ids:name=sw0
> > +check_column "" sb:datapath_binding load_balancers external_ids:name=sw0
> >
> >   check ovn-nbctl --wait=sb set load_balancer . vips:"10.0.0.20\:90"="20.0.0.4:8080,30.0.0.4:8080"
> >
> > @@ -2390,7 +2390,7 @@ sw1_sb_uuid=$(fetch_column datapath_binding _uuid external_ids:name=sw1)
> >   echo
> >   echo "__file__:__line__: Check that SB lb0 has sw0 and sw1 in datapaths column."
> >   check_column "$sw0_sb_uuid $sw1_sb_uuid" sb:load_balancer datapaths name=lb0
> > -check_column "$lb0_uuid" sb:datapath_binding load_balancers external_ids:name=sw1
> > +check_column "" sb:datapath_binding load_balancers external_ids:name=sw1
> >
> >   check ovn-nbctl --wait=sb lb-add lb1 10.0.0.30:80 20.0.0.50:8080 udp
> >   check_row_count sb:load_balancer 1
> > @@ -2416,8 +2416,8 @@ echo "__file__:__line__: Check that SB lb1 has sw1 in datapaths column."
> >   check_column "$sw1_sb_uuid" sb:load_balancer datapaths name=lb1
> >
> >   echo
> > -echo "__file__:__line__: check that datapath sw1 has lb0 and lb1 set in the load_balancers column."
> > -check_column "$lb0_uuid $lb1_uuid" sb:datapath_binding load_balancers external_ids:name=sw1
> > +echo "__file__:__line__: check that datapath sw1 has no entry in the load_balancers column."
> > +check_column "" sb:datapath_binding load_balancers external_ids:name=sw1
> >
> >
> >   echo
> > @@ -2426,10 +2426,10 @@ check ovn-nbctl --wait=sb set Load_Balancer lb1 options:hairpin_snat_ip="42.42.4
> >   check_column "$lb1_uuid" sb:load_balancer _uuid name=lb1 options='{hairpin_orig_tuple="true", hairpin_snat_ip="42.42.42.42 4242::4242"}'
> >
> >   echo
> > -echo "__file__:__line__: Delete load balancer lb1 an check that datapath sw1's load_balancers are updated accordingly."
> > +echo "__file__:__line__: Delete load balancer lb1 an check that datapath sw1's load_balancers is still empty."
>
> If you're changing this line:
>
> s/an/and/
>
>
> >
> >   ovn-nbctl --wait=sb lb-del lb1
> > -check_column "$lb0_uuid" sb:datapath_binding load_balancers external_ids:name=sw1
> > +check_column "" sb:datapath_binding load_balancers external_ids:name=sw1
> >   AT_CLEANUP
> >   ])
> >
> >
>
> _______________________________________________
> dev mailing list
> dev at openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
>

