[ovs-dev] Question Regarding ovs_packet_cmd_execute in Kernel Datapath

Dincer Beken dbeken at blackned.de
Fri Feb 7 09:13:38 UTC 2020


Hello Ben, Pravin,

Thank you for your consideration.
> > It is also simpler to fix the stats issue using this approach.

> There's no stats issue.  Userspace just counts the number of packets it
> sent and adds them in.

Regarding the stats issue: in the case of LTE Broadcast, I have tight synchronization periods (from 80 ms down to 10 ms in 5G) in which I need to set the elapsed octet count and packet number, as well as the timestamp, in the header. Since I do not want to make an upcall with each packet, I am using the stats of the kernel flows. Since OVS_PACKET_CMD_EXECUTE removes the flow directly after use, I am missing the stats of the first packet. Therefore I wanted to know whether there is a specific reason why OVS_PACKET_CMD_EXECUTE always has to use temporary kernel flows.
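
For reference, my reading of ovs_packet_cmd_execute() in net/openvswitch/datapath.c is roughly the following (a heavily abridged outline, not compilable as-is; helper signatures vary between kernel versions):

    flow = ovs_flow_alloc();            /* temporary flow, never inserted into
                                           the datapath flow table */
    /* key comes from OVS_PACKET_ATTR_KEY, actions from OVS_PACKET_ATTR_ACTIONS */
    err = ovs_execute_actions(dp, packet, acts, &flow->key);
    ovs_flow_free(flow, false);         /* flow is freed right away, so any stats
                                           attached to it are gone */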

> > >
> > > On Thu, Feb 06, 2020 at 11:36:19AM -0800, Pravin Shelar wrote:
> > > > Another option would be to add new command that install and execute
> > > > packet in same netlink msg. That would save us a netlink msg to handle
> > > > a miss-call.  what do you think about it?
> > >
> > > When I experimented with that in the past, I found that it didn't have a
> > > noticeable impact on performance.
> > >
> > Reduced number of msgs sent here would help in certain situations.

> That is plausible, but it didn't help when I measured it previously.

If we add a distinct message for packet execution that checks for an existing flow, e.g. OVS_PACKET_CMD_EXECUTE_2, and no flow is found, should the packet be dropped? I assume you would probably like userspace (although we are really talking about the dpif-netlink adapter here) to be loosely coupled from the kernel datapath state, so that userspace does not always have to know 100% whether a kernel flow has been purged or not. Dropping would therefore be an unnecessary risk. And if we add the temporary flow creation to it, would OVS_PACKET_CMD_EXECUTE not become redundant?
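
To make the question concrete, the hypothetical kernel-side handling could look roughly like this (again only an outline; OVS_PACKET_CMD_EXECUTE_2 does not exist, and the lookup/stats helper signatures are from my memory of net/openvswitch):

    flow = ovs_flow_tbl_lookup_stats(&dp->table, &key, skb_get_hash(packet),
                                     &n_mask_hit);
    if (flow) {
        /* An installed flow exists: its stats keep accumulating. */
        ovs_flow_stats_update(flow, key.tp.flags, packet);
        err = ovs_execute_actions(dp, packet,
                                  rcu_dereference(flow->sf_acts), &flow->key);
    } else {
        /* The open question: drop the packet here (tightly coupling userspace
         * to the kernel flow state), or fall back to the temporary flow that
         * OVS_PACKET_CMD_EXECUTE builds today? */
    }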

> If it did help, then I don't think we'd need a new command, we could
> just add a OVS_FLOW_ATTR_PACKET to attach a packet to the existing
> OVS_FLOW_CMD_NEW or OVS_FLOW_CMD_SET commands.
I guess we would only need to check in handle_upcalls whether should_install_flow is true: if so, install the flow and append the packet, and only if not, create a DPIF_OP_EXECUTE netlink message. This looks helpful. I have not worked with OVS_FLOW_CMD_SET yet, but I assume it would work the same way.
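
In other words, roughly the following in handle_upcalls() (only a sketch: the packet/packet_len members on struct dpif_flow_put are hypothetical placeholders for the proposed OVS_FLOW_ATTR_PACKET, and I am glossing over the ukey bookkeeping the real code does):

    if (should_install_flow(udpif, upcall)) {
        op->dop.type = DPIF_OP_FLOW_PUT;
        op->dop.flow_put.flags = DPIF_FP_CREATE;
        op->dop.flow_put.key = upcall->key;
        op->dop.flow_put.key_len = upcall->key_len;
        op->dop.flow_put.actions = upcall->odp_actions.data;
        op->dop.flow_put.actions_len = upcall->odp_actions.size;
        /* Hypothetical: ship the triggering packet in the same
         * OVS_FLOW_CMD_NEW message as OVS_FLOW_ATTR_PACKET. */
        op->dop.flow_put.packet = dp_packet_data(upcall->packet);
        op->dop.flow_put.packet_len = dp_packet_size(upcall->packet);
    } else {
        /* No flow wanted: keep today's single-packet execution. */
        op->dop.type = DPIF_OP_EXECUTE;
        op->dop.execute.packet = CONST_CAST(struct dp_packet *, upcall->packet);
        op->dop.execute.actions = upcall->odp_actions.data;
        op->dop.execute.actions_len = upcall->odp_actions.size;
    }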

Regards,
Dincer

________________________________
From: Ben Pfaff <blp at ovn.org>
Sent: Thursday, February 6, 2020 22:55
To: Pravin Shelar <pshelar at ovn.org>
Cc: Dincer Beken <dbeken at blackned.de>; ovs-dev at openvswitch.org <ovs-dev at openvswitch.org>; Andreas Eberlein <aeberlein at blackned.de>
Subject: Re: [ovs-dev] Question Regarding ovs_packet_cmd_execute in Kernel Datapath

On Thu, Feb 06, 2020 at 01:32:12PM -0800, Pravin Shelar wrote:
> On Thu, Feb 6, 2020 at 12:18 PM Ben Pfaff <blp at ovn.org> wrote:
> >
> > On Thu, Feb 06, 2020 at 11:36:19AM -0800, Pravin Shelar wrote:
> > > Another option would be to add new command that install and execute
> > > packet in same netlink msg. That would save us a netlink msg to handle
> > > a miss-call.  what do you think about it?
> >
> > When I experimented with that in the past, I found that it didn't have a
> > noticeable impact on performance.
> >
> Reduced number of msgs sent here would help in certain situations.

That is plausible, but it didn't help when I measured it previously.

> It is also simpler to fix the stats issue using this approach.

There's no stats issue.  Userspace just counts the number of packets it
sent and adds them in.

