[ovs-dev] [PATCH V9 05/31] netdev: Adding a new netdev API to be used for offloading flows

Roi Dayan roid at mellanox.com
Mon Jun 12 12:04:44 UTC 2017



On 08/06/2017 20:08, Joe Stringer wrote:
> On 8 June 2017 at 05:05, Roi Dayan <roid at mellanox.com> wrote:
>>
>>
>> On 31/05/2017 04:37, Joe Stringer wrote:
>>>
>>> On 28 May 2017 at 04:59, Roi Dayan <roid at mellanox.com> wrote:
>>>>
>>>> From: Paul Blakey <paulb at mellanox.com>
>>>>
>>>> Add a new API interface for offloading dpif flows to netdev.
>>>> The API consists of the following:
>>>>   flow_put - offload a new flow
>>>>   flow_get - query an offloaded flow
>>>>   flow_del - delete an offloaded flow
>>>>   flow_flush - flush all offloaded flows
>>>>   flow_dump_* - dump all offloaded flows
>>>>
>>>> In upcoming commits we will introduce an implementation of this
>>>> API for netdev-linux.
>>>>
>>>> Signed-off-by: Paul Blakey <paulb at mellanox.com>
>>>> Reviewed-by: Roi Dayan <roid at mellanox.com>
>>>> Reviewed-by: Simon Horman <simon.horman at netronome.com>
>>>> ---
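
To illustrate how these optional ops are meant to be used: a netdev
class that does not support offloads leaves the pointers NULL, and a
dispatch wrapper can fall back to a "not supported" error. A minimal
sketch (the helper name netdev_flow_flush() here is illustrative, not
necessarily the final API):

    /* Hypothetical dispatch wrapper over the new netdev_class op;
     * returns EOPNOTSUPP when the class does not implement it. */
    int
    netdev_flow_flush(struct netdev *netdev)
    {
        const struct netdev_class *class = netdev->netdev_class;

        return class->flow_flush ? class->flow_flush(netdev) : EOPNOTSUPP;
    }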
>>>
>>>
>>> <snip>
>>>
>>>> @@ -769,6 +777,67 @@ struct netdev_class {
>>>>
>>>>      /* Discards all packets waiting to be received from 'rx'. */
>>>>      int (*rxq_drain)(struct netdev_rxq *rx);
>>>> +
>>>> +    /* ## -------------------------------- ## */
>>>> +    /* ## netdev flow offloading functions ## */
>>>> +    /* ## -------------------------------- ## */
>>>> +
>>>> +    /* If a particular netdev class does not support offloading flows,
>>>> +     * all these function pointers must be NULL. */
>>>> +
>>>> +    /* Flush all offloaded flows from a netdev.
>>>> +     * Returns 0 if successful, otherwise a positive errno value. */
>>>> +    int (*flow_flush)(struct netdev *);
>>>> +
>>>> +    /* Flow dumping interface.
>>>> +     *
>>>> +     * This is the back-end for the flow dumping interface described in
>>>> +     * dpif.h.  Please read the comments there first, because this code
>>>> +     * closely follows it.
>>>> +     *
>>>> +     * 'flow_dump_create' is executed in a dpif thread, so there is
>>>> +     * no need for a 'flow_dump_thread_create' implementation.
>>>
>>>
>>> I find this comment a bit confusing, but it's a good thing it was here
>>> because it raises a couple of discussion points.
>>>
>>> 'flow_dump_thread_create', perhaps poorly named, doesn't create a
>>> thread, but allocates memory for per-thread state so that each thread
>>> may dump safely in parallel while operating on an independent netlink
>>> dump and independent buffers. I guess that in the DPIF flow dump there
>>> is global dump state and per-thread state, while in this netdev flow
>>> dump API there is only global state?
>>>
>>> Describing that this interface doesn't need something that isn't being
>>> defined is a bit strange. If it's not needed, then we probably don't
>>> need to describe why it's not needed here since there's no such
>>> function. Then, the comment can be dropped.
>>>
>>>> +     * On success returns allocated netdev_flow_dump data, on failure returns
>>>
>>>
>>> ^ returns allocated netdev_flow_dump_data "and returns 0"...?
>>>
>>>> +     * positive errno. */
>>>> +    int (*flow_dump_create)(struct netdev *, struct netdev_flow_dump **dump);
>>>> +    int (*flow_dump_destroy)(struct netdev_flow_dump *);
>>>> +
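
On the "and returns 0" point above: right, the intent is the usual
allocate-and-return-0 pattern. A sketch of what an implementation
might look like (the function and field names are assumptions, not
the actual netdev-linux code):

    /* Illustrative flow_dump_create for some netdev class: allocates
     * the dump state, hands it to the caller, and returns 0. */
    static int
    netdev_example_flow_dump_create(struct netdev *netdev,
                                    struct netdev_flow_dump **dump_out)
    {
        struct netdev_flow_dump *dump = xzalloc(sizeof *dump);

        dump->netdev = netdev_ref(netdev);
        *dump_out = dump;
        return 0;
    }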
>>>> +    /* Returns true while there are more flows to dump.
>>>
>>>
>>> s/while/if/
>>>
>>>> +     * rbuffer is used as a temporary buffer and needs to be preallocated
>>>> +     * by the caller. While there are more flows, the same rbuffer should
>>>> +     * be provided. wbuffer is used to store dumped actions and needs to be
>>>> +     * preallocated by the caller. */
>>>
>>>
>>> I have a couple of extra questions which this description could be
>>> expanded to answer:
>>>
>>> Who is responsible for freeing 'rbuffer' and 'wbuffer'? I expect the
>>> caller, but this could be more explicit.
>>
>>
>> The caller. As noted, the function expects them to be preallocated.
>
> Makes sense, but to be precise in the API documentation it should
> probably state that the caller is responsible for freeing those
> buffers.
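
To make the ownership explicit, the intended calling pattern is along
these lines (a sketch only: the wrapper names and the exact
flow_dump_next() parameters are assumptions, since the full signature
is snipped above):

    struct netdev_flow_dump *dump;
    struct ofpbuf rbuffer, wbuffer;
    struct match match;
    struct nlattr *actions;

    /* The caller allocates both buffers up front... */
    ofpbuf_init(&rbuffer, 2048);
    ofpbuf_init(&wbuffer, 2048);

    netdev_flow_dump_create(netdev, &dump);
    while (netdev_flow_dump_next(dump, &match, &actions,
                                 &rbuffer, &wbuffer)) {
        /* Use 'match' and 'actions' here; they point into the buffers
         * and are only guaranteed valid until the next iteration. */
    }
    netdev_flow_dump_destroy(dump);

    /* ...and is responsible for freeing them once the dump is done. */
    ofpbuf_uninit(&rbuffer);
    ofpbuf_uninit(&wbuffer);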
>
>>> Are the pointers that are returned valid beyond the next call to
>>> flow_dump_next()?
>>
>>
>> Yes. What can we add to make it clear?
>
> Hmm, ok. Usually when you make a call to the DPIF layer
> flow_dump_next, you provide a buffer which gets populated and the
> flows point within the buffer (round 1). Once you call flow_dump_next
> again after that (round 2), then the flows in round 1 are not
> guaranteed to be valid, because they point within the buffer that is
> getting manipulated by this function. The DPIF layer describes this
> limitation in its documentation, which implies that callers who wish
> to preserve the flow beyond the next (round 2) call to dump_next would
> have to make a copy. Is that the case here too? I guess that if there
> is not such a restriction, then I'm not sure if there's anything to
> describe.

Right. It's the same limitation, since wbuffer is the output buffer.
So it depends on whether the caller passes the same address or a
different one.
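
So the documentation could state that results returned through
'wbuffer' are invalidated by the next flow_dump_next() call that
reuses the same buffer, and that a caller needing them longer must
copy them out first. A small sketch ('acts' and 'acts_len' are
illustrative names, not the actual parameters):

    /* Preserve dumped actions beyond the next dump call that reuses
     * the same 'wbuffer'.  The caller owns and frees the copy. */
    struct nlattr *saved_acts = xmemdup(acts, acts_len);

    /* ... more netdev_flow_dump_next() iterations ... */

    consume_actions(saved_acts, acts_len);  /* hypothetical consumer */
    free(saved_acts);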


>
>> Hi Joe,
>>
>> I accidentally skipped your comments here for V10.
>> I'll address them in the next update.
>
> OK, thanks.
>
>> We skipped addressing port_hmap_obj because we also wanted to move it
>> from a global to a per-dpif structure, which is where I think I got
>> stuck in the process; I don't remember the exact reason.
>> Maybe, as a first step, we can still change this void * to a named
>> type while keeping it global, and make it per dpif later.
>> In any case, can we address this in a later commit?
>
> Sure, that sounds like a reasonable approach.
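
As a first step, that would just mean giving the global a real type,
e.g. something like the following (purely illustrative; the member
layout is an assumption), and only later instantiating it per dpif:

    /* Named replacement for the global 'void *' port map; could later
     * be embedded in each dpif instead of being global. */
    struct port_netdev_map {
        struct ovs_mutex mutex;   /* Protects 'map'. */
        struct hmap map;          /* Port number -> netdev entries. */
    };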
>

