[ovs-dev] [PATCHv18] netdev-afxdp: add new netdev type for AF_XDP.

William Tu u9012063 at gmail.com
Fri Aug 23 17:08:42 UTC 2019


On Fri, Aug 23, 2019 at 9:59 AM Ilya Maximets <i.maximets at samsung.com> wrote:
>
> On 23.08.2019 19:08, William Tu wrote:
> > On Wed, Aug 21, 2019 at 2:31 AM Eelco Chaudron <echaudro at redhat.com> wrote:
> >>
> >>
> >>
> >>>>> William, Eelco, which HW NIC you're using? Which kernel driver?
> >>>>
> >>>> I’m using the below on the latest bpf-next driver:
> >>>>
> >>>> 01:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit
> >>>> SFI/SFP+ Network Connection (rev 01)
> >>>> 01:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit
> >>>> SFI/SFP+ Network Connection (rev 01)
> >>>
> >>> Thanks for the information.
> >>> I found one suspicious place inside the ixgbe driver that could break
> >>> the completion queue ring and prepared a patch:
> >>>     https://patchwork.ozlabs.org/patch/1150244/
> >>>
> >>> It'll be good if you can test it.
> >>
> >> Hi Ilya, I was doing some testing of my own, and also concluded it was
> >> in the driver's completion ring. I noticed that after sending 512 packets the
> >> driver's TX counters kept increasing, which looks related to your fix.
> >>
> >> Will try it out and send the results to the upstream mailing list…
> >>
> >> Thanks,
> >>
> >> Eelco
> >
> > Hi,
> >
> > I'm comparing the performance of netdev-afxdp.c on current master with
> > DPDK's AF_XDP implementation in the OVS dpdk-latest branch.
> > I'm using ixgbe and doing physical-port-to-physical-port forwarding,
> > sending 64-byte packets, with the OpenFlow rule:
> >   ovs-ofctl add-flow br0 "in_port=eth2, actions=output:eth3"
> >
> > In short
> > A. OVS's netdev-afxdp: 6.1Mpps
> > B. OVS-DPDK  AF_XDP pmd: 8Mpps
> > So I started to think about how to optimize lib/netdev-afxdp.c. Any comments
> > are welcome! Below is the analysis:
>
> One major difference is that the DPDK implementation supports XDP_USE_NEED_WAKEUP,
> and it will be in use if you're building the kernel from the latest bpf-next tree.
> This allows the number of syscalls to be decreased significantly.
> According to the perf stats below, the OVS implementation, unlike the DPDK one,
> wastes ~11% of its time inside the kernel, and this could be fixed by the
> need_wakeup feature.

Cool, thank you.
I will look at how to use XDP_USE_NEED_WAKEUP.
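
Just to make sure I understand the pattern: below is a minimal sketch of how
need_wakeup is typically used with libbpf's xsk API (the function names and
structure here are illustrative, not the actual netdev-afxdp.c code). The socket
is bound with XDP_USE_NEED_WAKEUP, and the TX kick syscall is only issued when
the kernel sets the need_wakeup flag on the ring.

#include <stdint.h>
#include <sys/socket.h>
#include <linux/if_xdp.h>
#include <bpf/xsk.h>

/* Bind the AF_XDP socket with the need_wakeup flag so the kernel only
 * asks for a syscall when it actually needs one. */
static int
xsk_create_with_need_wakeup(struct xsk_umem *umem, const char *ifname,
                            uint32_t queue_id, struct xsk_ring_cons *rx,
                            struct xsk_ring_prod *tx, struct xsk_socket **xsk)
{
    struct xsk_socket_config cfg = {
        .rx_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
        .tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
        .bind_flags = XDP_USE_NEED_WAKEUP,
    };

    return xsk_socket__create(xsk, ifname, queue_id, umem, rx, tx, &cfg);
}

/* On the TX path, kick the kernel only when it has set need_wakeup on
 * the TX ring, instead of calling sendto() for every batch. */
static void
xsk_tx_kick_if_needed(struct xsk_socket *xsk, struct xsk_ring_prod *tx)
{
    if (xsk_ring_prod__needs_wakeup(tx)) {
        sendto(xsk_socket__fd(xsk), NULL, 0, MSG_DONTWAIT, NULL, 0);
    }
}

If I understand correctly, on kernels without the feature the bind with this
flag fails, so netdev-afxdp would also need to detect that and fall back to the
current unconditional kick.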

>
> BTW, there are a lot of pmd threads in case A, but only one in case B.
> Was the test setup really equal?

Yes, they should be equal.
In case A I accidentally added pmd-cpu-mask=0xf0, so it uses more CPUs, but I
always enable only one queue, and pmd-stats-show shows the other PMD threads
are doing nothing. I will fix it next time.
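
For completeness, something like the following should keep the PMD and queue
setup identical in both cases (the mask value and interface names here are just
examples for my setup):

  ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x10
  ovs-vsctl set interface eth2 options:n_rxq=1
  ovs-vsctl set interface eth3 options:n_rxq=1
  ovs-appctl dpif-netdev/pmd-stats-show

That way only one PMD thread runs and pmd-stats-show makes it easy to confirm
that all packets are handled by that single thread.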

Regards,
William

