[ovs-dev] FW: [PATCH 3/4] dpif-netdev: Avoid port's reconfiguration on pmd-cpu-mask changes.

Ilya Maximets i.maximets at samsung.com
Fri May 26 10:25:57 UTC 2017



On 26.05.2017 12:49, Stokes, Ian wrote:
>> 2017-02-21 6:49 GMT-08:00 Ilya Maximets <i.maximets at samsung.com>:
>>> Reconfiguration of HW NICs may lead to packet drops.
>>> In the current model, all physical ports will be reconfigured each
>>> time the number of PMD threads changes. Since we are not stopping
>>> threads on pmd-cpu-mask changes, this patch helps to further decrease
>>> ports' downtime by setting the maximum possible number of wanted tx
>>> queues, so that unnecessary reconfigurations are avoided.
>>>
>>> Signed-off-by: Ilya Maximets <i.maximets at samsung.com>
>>
>> I haven't thought this through a lot, but the last big series we 
>> pushed on master went exactly in the opposite direction, i.e. created 
>> one txq for each thread in the datapath.
>>
>> I thought this was a good idea because:
>>
>> * On some systems with hyperthreading we can have a lot of cpus (we
>>   received reports of systems with 72 cores). If you want to use only
>>   a couple of cores, you're still forced to have a lot of unused txqs,
>>   which prevent you from having lockless tx.
>> * We thought that reconfiguring the number of pmds would not be a
>>   frequent operation.
>>
>> Why do you want to reconfigure the threads that often?  Is it to be 
>> able to support more throughput quickly?  In this case I think we 
>> shouldn't use the number of cpus, but something else coming from the 
>> user, so that the administrator can balance how quickly pmd threads 
>> can be reconfigured vs how many txqs should be on the system.
>> I'm not sure how to make this user friendly though.
>>
>> What do you think?
>>
>> Thanks,
>>
>> Daniele
> 
> Hi Ilya,
> 
> I would agree with Daniele's comments. There were issues in the past where, when the number of txqs was set to the number of CPUs and hyperthreading was enabled, the NIC HW would not support that many tx queues, which led to trouble with DPDK device txq initialization.
> 
> Are the packet drops when changing PMD mask still an issue for you?
> 
> It's been a while since this feedback was given; I'd be interested to know whether you intend to submit further work on this.
> 
> Ian

Hi Ian,

Thanks for your attention to this.
I understand your concerns about the number of TX queues. I think we can
find a compromise between the exact number of queues and a number that
is too high.
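
To recap why the exact number matters: dpif-netdev drives a port locklessly
only if the port has at least one tx queue per thread that may send to it;
with fewer queues they become shared and every send takes a lock. Roughly,
the decision looks like this (a condensed, paraphrased sketch of
reconfigure_datapath() in lib/dpif-netdev.c, not the exact upstream code):

    struct dp_netdev_port *port;

    HMAP_FOR_EACH (port, node, &dp->ports) {
        /* Fewer tx queues than potential senders => "dynamic" txqs,
         * i.e. queues are shared and each send must take a lock. */
        port->dynamic_txqs = netdev_n_txq(port->netdev) < wanted_txqs;
    }

So requesting one txq per possible core keeps ports lockless across
pmd-cpu-mask changes, at the cost of many unused queues on big machines,
which is exactly the trade-off Daniele describes.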

And yes, I'm still interested in these patches; I just haven't had much time
to work on them. I'll try to address all the comments and prepare a new
version in the near future.
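
For the new version, one direction I have in mind is to keep asking for one
queue per possible core plus one for non-PMD threads, but to cap the result.
Below is a hypothetical sketch only, not a patch: MAX_WANTED_TXQS is an
invented placeholder for a limit that would really have to come from the
device or from user configuration:

    /* Hypothetical sketch: cap the requested tx queue count so that
     * NICs with few hardware queues are not asked for more than they
     * can provide.  MAX_WANTED_TXQS is invented for illustration. */
    #define MAX_WANTED_TXQS 64

    wanted_txqs = ovs_numa_get_n_cores();
    ovs_assert(wanted_txqs != OVS_CORE_UNSPEC);
    wanted_txqs++;                          /* Non-PMD threads. */
    wanted_txqs = MIN(wanted_txqs, MAX_WANTED_TXQS);

Ports that still end up with fewer hardware queues than sending threads
would simply fall back to shared, locked txqs, as they do today.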

I'll send a more detailed answer on the TXq issue in reply to Daniele's
e-mail, to keep the discussion in one thread. (I guess you lost the right
message-id while replying. I'll add you to CC, of course.)

Thanks again.
Best regards, Ilya Maximets.

>>
>>> ---
>>>  lib/dpif-netdev.c | 6 +++++-
>>>  1 file changed, 5 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c index
>>> 6e575ab..e2b4f39 100644
>>> --- a/lib/dpif-netdev.c
>>> +++ b/lib/dpif-netdev.c
>>> @@ -3324,7 +3324,11 @@ reconfigure_datapath(struct dp_netdev *dp)
>>>       * on the system and the user configuration. */
>>>      reconfigure_pmd_threads(dp);
>>>
>>> -    wanted_txqs = cmap_count(&dp->poll_threads);
>>> +    /* We need 1 Tx queue for each possible cpu core. */
>>> +    wanted_txqs = ovs_numa_get_n_cores();
>>> +    ovs_assert(wanted_txqs != OVS_CORE_UNSPEC);
>>> +    /* And 1 Tx queue for non-PMD threads. */
>>> +    wanted_txqs++;
>>>
>>>      /* The number of pmd threads might have changed, or a port can be new:
>>>       * adjust the txqs. */
>>> --
>>> 2.7.4
>>>