[ovs-dev] [PATCH v2 1/8] dpif-netdev: associate flow with a mark id

Chandran, Sugesh sugesh.chandran at intel.com
Mon Sep 11 09:01:34 UTC 2017



Regards
_Sugesh


> -----Original Message-----
> From: Yuanhan Liu [mailto:yliu at fridaylinux.org]
> Sent: Monday, September 11, 2017 9:04 AM
> To: Chandran, Sugesh <sugesh.chandran at intel.com>
> Cc: dev at openvswitch.org
> Subject: Re: [ovs-dev] [PATCH v2 1/8] dpif-netdev: associate flow with a
> mark id
> 
> On Sun, Sep 10, 2017 at 04:14:01PM +0000, Chandran, Sugesh wrote:
> > > This patch associates a flow with a mark id (a uint32_t number) via CMAP.
> > >
> > > It re-uses the flow API (more precisely, the ->flow_put method) to
> > > set up the hw flow. The flow_put implementation is then supposed to
> > > create a flow with a MARK action.
> > >
> > > Co-authored-by: Finn Christensen <fc at napatech.com>
> > > Signed-off-by: Yuanhan Liu <yliu at fridaylinux.org>
> > > Signed-off-by: Finn Christensen <fc at napatech.com>
> > > ---
> > >  lib/dpif-netdev.c | 85
> > > +++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  lib/netdev.h      |  6 ++++
> > >  2 files changed, 91 insertions(+)
> >
> > [snip]
> > >       * Any thread trying to keep a rule from being freed should
> > > hold its own @@ -569,6 +575,8 @@ struct dp_netdev_pmd_thread {
> > >      struct ovs_mutex flow_mutex;
> > >      struct cmap flow_table OVS_GUARDED; /* Flow table. */
> > >
> > > +    struct cmap mark_to_flow;
> > [Sugesh] I would prefer to have this cmap per port rather than per pmd. The
> > reason is that it offers better performance due to fewer lookups. Besides,
> > the flow programming is done on the netdev, so it is better to keep this
> > netdev-centric rather than pmd-centric. What do you think?
> 
> I think it might be workable.
[Sugesh] ok
> 
> > >      /* One classifier per in_port polled by the pmd */
> > >      struct cmap classifiers;
> > >      /* Periodically sort subtable vectors according to hit
> > > frequencies */ @@ -
> > > 1839,6 +1847,8 @@ dp_netdev_pmd_remove_flow(struct
> > > dp_netdev_pmd_thread *pmd,
> > >      OVS_REQUIRES(pmd->flow_mutex)
> > >  {
> > >      struct cmap_node *node = CONST_CAST(struct cmap_node *, &flow-
> > > >node);
> > > +    struct cmap_node *mark_node = CONST_CAST(struct cmap_node *,
> > > +                                             &flow->mark_node);
> > >      struct dpcls *cls;
> > >      odp_port_t in_port = flow->flow.in_port.odp_port;
> > >
> > > @@ -1846,6 +1856,10 @@ dp_netdev_pmd_remove_flow(struct
> > > dp_netdev_pmd_thread *pmd,
> > >      ovs_assert(cls != NULL);
> > >      dpcls_remove(cls, &flow->cr);
> > >      cmap_remove(&pmd->flow_table, node,
> dp_netdev_flow_hash(&flow-
> > > >ufid));
> > > +    if (flow->has_mark) {
> > [Sugesh] Any reason these are not mutex protected? As far as I know,
> > flow removal is handled in the revalidator thread. This looks like a
> > critical section, so both install and delete should be protected.
> 
> I think you are right.
[Sugesh] ok. Thanks.
> 
> 
> [...]
> > > +    if (netdev_is_flow_api_enabled() &&
> > > +        dp_netdev_alloc_flow_mark(pmd, &info.flow_mark)) {
> > > +        struct dp_netdev_port *port;
> > > +        port = dp_netdev_lookup_port(pmd->dp, in_port);
> > > +        if (netdev_flow_put(port->netdev, match,
> > > +                            CONST_CAST(struct nlattr *, actions),
> > > +                            actions_len, ufid, &info, NULL) == 0) {
> > [Sugesh] The cost of flow install is a concern here. It varies from device
> > to device. Have you done any analysis on the cost of the rte_flow
> > translation + hardware flow install?
> > It would be nice to mention it in the cover letter.
> 
> I haven't measured it yet: it's not my current focus. It's not in the datapath
> after all. Besides, it might not be a trivial task to optimize it.
[Sugesh] TBH I feel this is a very important aspect in deciding whether it makes
sense to install the flow at all. What do you mean by "not in the datapath"?
Isn't the PMD blocked until the flow install completes? For a use case with
millions of short-lived flows, this is very important to know.
Let me know if I have missed anything.


> 
> 	--yliu

