[ovs-dev] [PATCH] dpif-netdev: Fix non-pmd thread queue id.

Daniele Di Proietto diproiettod at vmware.com
Fri May 29 17:28:53 UTC 2015



On 29/05/2015 12:44, "Gray, Mark D" <mark.d.gray at intel.com> wrote:

>
>
>> -----Original Message-----
>> From: Daniele Di Proietto [mailto:diproiettod at vmware.com]
>> Sent: Thursday, May 28, 2015 5:58 PM
>> To: Gray, Mark D
>> Cc: dev at openvswitch.org
>> Subject: Re: [ovs-dev] [PATCH] dpif-netdev: Fix non-pmd thread queue id.
>> 
>> 
>> On 28/05/2015 17:16, "Gray, Mark D" <mark.d.gray at intel.com> wrote:
>> 
>> >>
>> >> Non pmd threads have a core_id == UINT32_MAX, while queue ids used
>> >> by netdevs range from 0 to the number of CPUs.  Therefore core ids
>> >> cannot be used directly to select a queue.
>> >>
>> >> This commit introduces a simple mapping to fix the problem: non pmd
>> >> threads use queue 0, pmd threads on core 0 to N use queues 1 to N+1.
>> >>
>> >> Fixes: d5c199ea7ff7 ("netdev-dpdk: Properly support non pmd
>> >> threads.")
>> >>
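For reference, the mapping described in the commit message above amounts to
something like the following rough sketch (the actual core_id_to_qid() helper
in the patch may differ in naming and signature; UINT32_MAX is taken from the
commit message):

    #include <stdint.h>

    /* Sketch only: non pmd threads (core_id == UINT32_MAX) use queue 0,
     * a pmd thread running on core N uses queue N + 1. */
    static int
    core_id_to_qid(unsigned core_id)
    {
        if (core_id == UINT32_MAX) {
            return 0;                /* Non pmd thread. */
        }
        return core_id + 1;          /* Pmd thread on core N -> queue N + 1. */
    }
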
>> >No comments on the code. However, I tested it by adding a veth port and
>> >sending a 'ping -I' through the other end of the veth and it segfaults.
>> 
>> Thanks for testing it.  From the backtrace it looks like I should also
>> update the flushing logic.
>
>Yeah, netdev_dpdk_rxq_recv() doesn't flush correctly anymore, which is
>causing this race condition. I just submitted an update to your patch
>that shows where the problem is and resolves it.

That makes sense, thanks for the fix.

>
>However, I am seeing a performance drop with this patch (200 kpps). It's
>probably because of the extra overhead in the send path from
>core_id_to_qid(). Maybe it could perform better if the non pmd thread
>owned the last queue; then the core_id -> qid mapping would be one-to-one
>(except for the non pmd case)?

This seems like a good idea.  To avoid the performance drop we could store
the queue_id in struct dp_netdev_pmd_thread alongside core_id.
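
For illustration, combining the one-to-one suggestion with caching the queue
id could look roughly like this (a sketch only; struct dp_netdev_pmd_thread
and core_id appear in the discussion, while tx_qid, n_pmd_queues and
pmd_thread_set_tx_qid() are hypothetical names, not the actual patch):

    #include <stdint.h>

    /* Sketch only: precompute the tx queue id when the thread is set up,
     * so the send path does not have to call core_id_to_qid() per batch.
     * With the non pmd thread owning the last queue, pmd threads map 1:1. */
    struct dp_netdev_pmd_thread {
        /* ... existing members, omitted ... */
        unsigned core_id;   /* CPU core id, UINT32_MAX for the non pmd thread. */
        int tx_qid;         /* Cached queue id used for sending. */
    };

    /* Hypothetical helper: 'n_pmd_queues' would be the number of queues
     * used by pmd threads, so the non pmd thread takes the one after them. */
    static void
    pmd_thread_set_tx_qid(struct dp_netdev_pmd_thread *pmd, int n_pmd_queues)
    {
        pmd->tx_qid = (pmd->core_id == UINT32_MAX) ? n_pmd_queues
                                                   : (int) pmd->core_id;
    }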

I was thinking that maybe we should also avoid flushing in
netdev_dpdk_rxq_recv().  I'll post something soon.

>
>> 
>> How did you add the veth?  Did you use a pcap vdev?
>> 
>> Also, would you mind posting another backtrace with debug symbols?
>> It might help understand what is going on with the queue ids.
>> 
>> Thanks,
>> 
>> Daniele
>> 
>> >
>> >(gdb) bt
>> >#0  0x0000000000526354 in ixgbe_xmit_pkts_vec ()
>> >#1  0x000000000066f473 in dpdk_queue_flush__ ()
>> >#2  0x000000000066fd16 in netdev_dpdk_rxq_recv ()
>> >#3  0x00000000005b9cd1 in netdev_rxq_recv ()
>> >#4  0x00000000005967e9 in dp_netdev_process_rxq_port ()
>> >#5  0x0000000000596f24 in pmd_thread_main ()
>> >#6  0x0000000000608041 in ovsthread_wrapper ()
>> >#7  0x0000003cc7607ee5 in start_thread () from /lib64/libpthread.so.0
>> >#8  0x0000003cc6ef4d1d in clone () from /lib64/libc.so.6
>> >
>> >I also didn't see any perf drop with this patch in the normal dpdk
>> >phy-phy path.
>


