[ovs-dev] [PATCH] netdev-dpdk: Fix race condition with DPDK mempools in non pmd threads

Daniele Di Proietto ddiproietto at vmware.com
Thu Jul 17 21:39:52 UTC 2014


Sure, I’ll post v2 in a minute.

Thanks

On Jul 17, 2014, at 11:30 AM, Pravin Shelar <pshelar at nicira.com> wrote:

> On Mon, Jul 14, 2014 at 1:55 PM, Daniele Di Proietto
> <ddiproietto at vmware.com> wrote:
>> DPDK mempools rely on rte_lcore_id() to implement a thread-local cache.
>> All of our non pmd threads had rte_lcore_id() == 0, which allowed concurrent
>> access to the same "thread-local" cache and caused crashes.
>> 
>> This commit resolves the issue with the following changes:
>> 
>> - Every non pmd thread has the same lcore_id (0, for management reasons), which
>>  is not shared with any pmd thread (pmd thread lcore_ids now start from 1).
>> - DPDK mbufs must be allocated/freed in pmd threads. When a mempool has to be
>>  used in a non pmd thread, as in dpdk_do_tx_copy(), a mutex must be held (see
>>  the sketch after this list).
>> - The previous change no longer allows us to pass DPDK mbufs to handler
>>  threads, so this commit partially reverts 143859ec63d45e: packets are now
>>  copied for upcall processing. The extra memcpy can be removed by processing
>>  upcalls in the pmd thread itself.
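>> 
>> Roughly, the non pmd allocation path now looks like this (a simplified sketch
>> with illustrative names, not the literal patch code):
>> 
>> #include <rte_mbuf.h>
>> #include "ovs-thread.h"
>> 
>> /* Non pmd threads all share lcore_id 0, and therefore the same mempool
>>  * cache, so their mempool accesses must be serialized. */
>> static struct ovs_mutex nonpmd_mempool_mutex = OVS_MUTEX_INITIALIZER;
>> 
>> static struct rte_mbuf *
>> dpdk_buf_alloc_nonpmd(struct rte_mempool *mp)
>> {
>>     struct rte_mbuf *mbuf;
>> 
>>     ovs_mutex_lock(&nonpmd_mempool_mutex);
>>     mbuf = rte_pktmbuf_alloc(mp);
>>     ovs_mutex_unlock(&nonpmd_mempool_mutex);
>> 
>>     return mbuf;
>> }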
>> 
>> With the introduction of the extra locking, the packet throughput will be lower
>> in the following cases:
>> 
>> - When using internal (tap) devices with DPDK devices on the same datapath.
>>  In any case, supporting internal devices efficiently requires DPDK KNI
>>  devices, which will be proper pmd devices and will not need this locking.
>> - When packets are processed in the slow path by non pmd threads. This overhead
>>  can be avoided by handling upcalls directly in pmd threads (a change that has
>>  already been proposed by Ryan Wilson).
>> 
>> Also, the following two fixes have been introduced (sketched after the list):
>> - In dpdk_free_buf(), use rte_pktmbuf_free_seg() instead of rte_mempool_put().
>>  This allows OVS to run properly with the CONFIG_RTE_LIBRTE_MBUF_DEBUG DPDK
>>  option.
>> - Do not bulk free the mbufs in a transmission queue, because they may belong
>>  to different mempools.
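>> 
>> For example, flushing a transmission queue now frees each mbuf individually,
>> along these lines (illustrative helper name, not the literal patch code):
>> 
>> #include <rte_mbuf.h>
>> 
>> static void
>> dpdk_queue_free_bufs(struct rte_mbuf **pkts, int cnt)
>> {
>>     int i;
>> 
>>     for (i = 0; i < cnt; i++) {
>>         /* rte_pktmbuf_free_seg() returns each buffer to the mempool it
>>          * came from and keeps the CONFIG_RTE_LIBRTE_MBUF_DEBUG sanity
>>          * checks happy, unlike a raw rte_mempool_put() into one pool. */
>>         rte_pktmbuf_free_seg(pkts[i]);
>>     }
>> }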
>> 
> Can you refresh this patch against the latest master? I am not able to apply it.
> 
> Thanks,
> Pravin
