[ovs-dev] [PATCH v3 6/9] ofproto-dpif: Consume CT_Zone, and CT_Timeout_Policy tables

Darrell Ball dlu998 at gmail.com
Wed Aug 14 20:38:40 UTC 2019


On Wed, Aug 14, 2019 at 1:28 PM Yi-Hung Wei <yihung.wei at gmail.com> wrote:

> On Tue, Aug 13, 2019 at 7:46 PM Darrell Ball <dlu998 at gmail.com> wrote:
> >
> > Thanks for the patch
> >
> > Some high level comments:
> >
> > 1/ The ct_tp_kill_list code is still in common code
> >     I think we discussed moving that to the dpif backer code
> >     ct_timeout_policy_unref() is adding to this deferred kill list, which
> is not needed for the userspace
> >     datapath.
> > 2/ clear_existing_ct_timeout_policies() is in common code, but only does
> something if
> > ct_dpif_timeout_policy_dump_start/next/done are realized in the datapath
> type specific code
> > (which is only for the kernel code, which is correct). I think it would
> be cleaner and less confusing
> > just to make the API clear_existing_ct_timeout_policies() kernel
> specific; i.e. in dpif-netlink.
>
>
> Thanks for the review. I addressed most of the code changes in the
> detailed inline code review.
>
> For the two high level concerns, it is mainly because ct_tp_kill_list is
> currently maintained in ofproto-dpif.c.  I thought about moving the
> ct_tp_kill_list implementation from dpif_backer (in ofproto-dpif.c) to
> dpif-netlink.c, and here is why I still keep it in ofproto-dpif.c, in
> the dpif_backer layer, in this version.
>
> AFAIK, we currently do not have a proper place to store ct_tp_kill_list
> in dpif-netlink.c in userspace.  dpif-netlink is the kernel datapath
> implementation, and all the information that we configure through
> dpif-netlink is currently passed directly down into the kernel.  In the
> userspace datapath, we can store userspace-specific information in
> "struct dp_netdev", but there is no such place in dpif-netlink for now.
> In this case, it is natural to maintain the ct_tp_kill_list one level
> up, in the dpif_backer layer.
>
> Anyhow, we can always make proper changes to the way we maintain
> timeout policies in the ofproto-dpif layer when the dpif-netdev
> implementation is introduced.
>

As discussed, let's defer.


>
>
> > On Mon, Aug 12, 2019 at 5:55 PM Yi-Hung Wei <yihung.wei at gmail.com>
> wrote:
> >> diff --git a/ofproto/ofproto-dpif.c b/ofproto/ofproto-dpif.c
> >> index 751535249e21..3013d83e96a0 100644
> >> --- a/ofproto/ofproto-dpif.c
> >> +++ b/ofproto/ofproto-dpif.c
> >> @@ -694,6 +718,8 @@ struct odp_garbage {
> >>
> >>  static void check_support(struct dpif_backer *backer);
> >>
> >> +#define MAX_TIMEOUT_POLICY_ID UINT32_MAX
> >
> >
> > seems like random placement; could be moved where it is used.
> >
>
> ok, I will move it right before ct_zone_config_init().
>
>
> >> +static struct ct_timeout_policy *
> >> +ct_timeout_policy_alloc__(void)
> >> +{
> >> +    struct ct_timeout_policy *ct_tp = xzalloc(sizeof *ct_tp);
> >> +    simap_init(&ct_tp->tp);
> >> +    return ct_tp;
> >> +}
> >
> >
> > by using the above API, you are not saving any code, and it may be more
> error prone
> >
>
> This function is used in ct_timeout_policy_alloc() and
> clear_existing_ct_timeout_policies().  So are you suggesting to expand
> it inline in these two functions?
>

ohh. I missed the other usage; I don't feel strongly either way.


>
> >>
> >> +
> >> +static struct ct_timeout_policy *
> >> +ct_timeout_policy_alloc(struct simap *tp, struct id_pool *tp_ids)
> >> +{
> >> +    struct simap_node *node;
> >> +
> >> +    struct ct_timeout_policy *ct_tp = ct_timeout_policy_alloc__();
> >> +    SIMAP_FOR_EACH (node, tp) {
> >> +        simap_put(&ct_tp->tp, node->name, node->data);
> >> +    }
> >> +
> >> +    if (!id_pool_alloc_id(tp_ids, &ct_tp->tp_id)) {
> >> +        VLOG_ERR_RL(&rl, "failed to allocate timeout policy id.");
> >> +        simap_destroy(&ct_tp->tp);
> >> +        free(tp);
> >
> >
> > I think you rather need to free 'ct_tp'; i.e. free(ct_tp)
>
> Yes, thanks for spotting this bug.
>
>
> >> +static void
> >> +clear_existing_ct_timeout_policies(struct dpif_backer *backer)
> >> +{
> >> +    /* In kernel datapath, when OVS starts, there may be some
> pre-existing
> >> +     * timeout policies in the kernel.  To avoid reassign the same
> timeout
> >> +     * policy ids, we dump all the pre-existing timeout policies and
> keep
> >> +     * the ids in the pool.  Since OVS will not use those timeout
> policies
> >> +     * for new datapath flow, we add them to the kill list and remove
> >> +     * them later on. */
> >> +    void *state;
> >> +
> >> +    int err = ct_dpif_timeout_policy_dump_start(backer->dpif, &state);
> >> +    if (err) {
> >> +        return;
> >> +    }
> >
> >
> > can be
> >
> > + if (ct_dpif_timeout_policy_dump_start(backer->dpif, &state)) {
> > +        return;
> > +    }
> >
> > similarly below
>
> Sure, I will get rid of 'int err' in v4.
>
> >> +
> >> +    struct ct_dpif_timeout_policy cdtp;
> >> +    while (!(err = ct_dpif_timeout_policy_dump_next(backer->dpif,
> state,
> >> +                                                    &cdtp))) {
> >> +        struct ct_timeout_policy *ct_tp = ct_timeout_policy_alloc__();
> >> +        ct_tp->tp_id = cdtp.id;
> >> +        id_pool_add(backer->tp_ids, cdtp.id);
> >> +        ovs_list_insert(&backer->ct_tp_kill_list, &ct_tp->list_node);
> >
> >
> > not sure why you add to the beginning here rather than the end; was
> there some deeper meaning at play?
>
> Not really; I am fine with adding it on either side, though.
>

Typically, we add to the end then, which helps eliminate questions and
matches the one other case you added.
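To illustrate why an explicitly named tail insert removes the question, here is a generic sentinel-based circular list sketch; the helpers are stand-ins, not the real ovs_list API.

```c
#include <assert.h>

/* Generic sentinel-based circular doubly linked list.  With a sentinel
 * head, inserting just before the sentinel is a tail insert, and
 * inserting just after it is a head insert; a named push_back() makes
 * the intent unambiguous at the call site. */
struct node {
    struct node *prev, *next;
    int val;
};

static void
list_init(struct node *sentinel)
{
    sentinel->prev = sentinel->next = sentinel;
}

/* Inserts 'elem' just before 'before'. */
static void
insert_before(struct node *before, struct node *elem)
{
    elem->prev = before->prev;
    elem->next = before;
    before->prev->next = elem;
    before->prev = elem;
}

/* Tail insert: 'elem' becomes the last node before the sentinel. */
static void
push_back(struct node *sentinel, struct node *elem)
{
    insert_before(sentinel, elem);
}

/* Head insert: 'elem' becomes the first node after the sentinel. */
static void
push_front(struct node *sentinel, struct node *elem)
{
    insert_before(sentinel->next, elem);
}
```

Reading the call site then answers the ordering question by name alone, instead of requiring the reader to remember which side of the sentinel a bare insert lands on.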


>
> >> +static void
> >> +ct_del_zone_timeout_policy(const char *datapath_type, uint16_t zone)
> >> +{
> >> +    struct dpif_backer *backer;
> >> +    struct ct_zone *ct_zone;
> >> +
> >> +    backer = shash_find_data(&all_dpif_backers, datapath_type);
> >
> >
> > struct dpif_backer *backer = shash_find_data(&all_dpif_backers,
> datapath_type);
>
> Sure, will do in v4.
>
>
> >>
> >> +    if (!backer) {
> >> +        return;
> >> +    }
> >> +
> >> +    ct_zone = ct_zone_lookup(&backer->ct_zones, zone);
> >
> >
> > struct ct_zone *ct_zone = ct_zone_lookup(&backer->ct_zones, zone);
>
> Sure, added to v4.
>
>
> >> --- a/ofproto/ofproto.c
> >> +++ b/ofproto/ofproto.c
> >> @@ -935,6 +935,36 @@ ofproto_get_flow_restore_wait(void)
> >>      return flow_restore_wait;
> >>  }
> >>
> >> +/* Connection tracking configuration. */
> >> +void
> >> +ofproto_ct_set_zone_timeout_policy(const char *datapath_type, uint16_t
> zone,
> >> +                                   struct simap *timeout_policy)
> >> +{
> >> +    const struct ofproto_class *class;
> >> +
> >> +    datapath_type = ofproto_normalize_type(datapath_type);
> >> +    class = ofproto_class_find__(datapath_type);
> >
> >
> > const struct ofproto_class *class = ofproto_class_find__(datapath_type);
>
> Done in v4.
>
>
> >> +void
> >> +ofproto_ct_del_zone_timeout_policy(const char *datapath_type, uint16_t
> zone)
> >> +{
> >> +    const struct ofproto_class *class;
> >> +
> >> +    datapath_type = ofproto_normalize_type(datapath_type);
> >> +    class = ofproto_class_find__(datapath_type);
> >
> > const struct ofproto_class *class = ofproto_class_find__(datapath_type);
>
> Ack.
>
> >> --- a/vswitchd/bridge.c
> >> +++ b/vswitchd/bridge.c
> >> @@ -153,9 +153,35 @@ struct aa_mapping {
> >>      char *br_name;
> >>  };
> >>
> >> +/* Internal representation of conntrak zone configuration table in
> OVSDB. */
> >
> >
> > 'conntrack'
>
> Done.
>
> >> +static struct datapath *
> >> +datapath_create(const struct ovsrec_datapath *dp_cfg, const char *type)
> >> +{
> >> +    struct datapath *dp;
> >> +
> >> +    ovs_assert(!datapath_lookup(type));
> >
> > The caller just did a lookup before calling here; i.e. you can remove
> the assert.
>
> Sounds good.
>
> >>
> >> +    dp = xzalloc(sizeof *dp);
> > struct datapath *dp = xzalloc(sizeof *dp);
> >
> ok.
>
>
> >> +static void
> >> +datapath_destroy(struct datapath *dp)
> >> +{
> >> +    struct ct_zone *ct_zone;
> >> +
> >> +    if (dp) {
> >> +        HMAP_FOR_EACH (ct_zone, node, &dp->ct_zones) {
> >
> > HMAP_FOR_EACH_SAFE
> >
>
> I replaced HMAP_FOR_EACH with HMAP_FOR_EACH_SAFE in v3 for the following
> 3 cases that you mentioned.
>
> Thanks,
>
> -Yi-Hung
>
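As an aside, the reason the _SAFE variant is needed can be sketched with a plain singly linked list (a stand-in, not the real OVS hmap): when nodes are destroyed during traversal, the next pointer must be saved before the current node is freed, which is exactly what HMAP_FOR_EACH_SAFE does for the caller.

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal singly linked list node, for illustration only. */
struct item {
    struct item *next;
};

static int
destroy_all(struct item *head)
{
    int freed = 0;
    struct item *next;

    for (struct item *it = head; it; it = next) {
        next = it->next;    /* save first: an unsafe loop would read
                             * it->next from freed memory */
        free(it);
        freed++;
    }
    return freed;
}
```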

