[ovs-dev] [PATCH 7/8] netdev-dpdk: Configurable retries while enqueuing to vHost User ports.

Bodireddy, Bhanuprakash bhanuprakash.bodireddy at intel.com
Tue Jun 20 18:31:04 UTC 2017


>>On 06/07/2017 10:21 AM, Bhanuprakash Bodireddy wrote:
>>> This commit adds "vhost-enque-retry", with which the number of retries
>>> performed while enqueuing packets to vHost User ports can be
>>> configured in ovsdb.
>>>
>>> Currently the number of retries is set to '8' and a retry is performed
>>> when at least some packets have been successfully sent on the previous
>>> attempt.
>>> While this approach works well, it causes a throughput drop when
>>> multiple vHost User ports are serviced by the same PMD thread.
>>
>>Hi Bhanu,
>>
>>You are saying the approach works well but you are changing the default
>>behaviour. It would be good to explain a bit more about the negative
>>effects of changing the default and compare that against the positive
>>effects, so everyone gets a balanced view. If you have measurements
>>that would be even better.
>
>The negative effect of retries on vHost User ports was discussed earlier in
>different forums (the OvS-DPDK day at the fall 2016 conference and the
>community call). Giving a bit of background for others interested in this
>problem:
>
>In OvS 2.5 Release:
>The retries on the vHost User ports were performed until a timeout (~100
>microseconds) was reached.
>The problem with that approach was that if a guest is connected but isn't
>actively processing its queues, it could potentially impact the performance of
>neighboring guests (other vHost User ports), provided the same PMD thread is
>servicing them all. I reported this and you indeed provided the fix in 2.6.
>
>In OvS 2.6 Release:
>The timeout logic was removed and retry logic was introduced. Here a maximum
>of 8 retries can be performed, provided at least one packet was transmitted
>successfully in the previous attempt.
>
>Problem:
>Take the case where a few VMs (with 3 vHost User ports each) are serviced by
>the same PMD thread. Some of the VMs are forwarding at high rates (using a
>DPDK-based app) and the remaining ones are slow VMs doing kernel forwarding
>in the guest. In this case the PMD would spend significant cycles on the
>slower VMs and may end up doing the maximum of 8 retries all the time.
>However, in some cases doing a retry immediately isn't of much value, as
>there may not be any free descriptors available.
>
>Also, if there are more slow ports, packets can potentially get tail-dropped
>at the NIC, as the PMD is busy processing packets and doing retries. I don't
>have numbers right now to back this up but can do some tests next week to
>assess the impact with and without retries. Also adding Jan here, who wanted
>the retry logic to be configurable.

Hi Kevin,
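
Just to make the behaviour we are discussing concrete: since 2.6 the vhost enqueue path boils down to roughly the loop below. This is only a simplified sketch (the locking, stats and mbuf-freeing in __netdev_dpdk_vhost_send() are trimmed and identifiers are paraphrased), not the code verbatim:

    #include <rte_mbuf.h>            /* struct rte_mbuf */
    /* rte_vhost_enqueue_burst() is declared in the DPDK vhost header
     * (rte_virtio_net.h / rte_vhost.h depending on the DPDK release). */

    #define VHOST_ENQ_RETRY_NUM 8    /* current hard-coded retry cap */

    static void
    vhost_send_sketch(int vid, int vhost_qid,
                      struct rte_mbuf **cur_pkts, unsigned int cnt)
    {
        unsigned int retries = 0;

        do {
            unsigned int tx_pkts;

            /* Hand the remaining mbufs to the guest's virtqueue. */
            tx_pkts = rte_vhost_enqueue_burst(vid, vhost_qid, cur_pkts, cnt);
            if (tx_pkts) {
                /* Some packets went through; advance and possibly retry. */
                cnt -= tx_pkts;
                cur_pkts = &cur_pkts[tx_pkts];
            } else {
                /* Nothing was accepted; give up immediately. */
                break;
            }
        } while (cnt && (retries++ < VHOST_ENQ_RETRY_NUM));

        /* Anything still left over is dropped (and counted) by the
         * surrounding code. */
    }
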

I did some testing today with and without retries and found a small performance improvement with retries turned off.
My test bench is pretty basic and not tuned for performance:
 - 2 PMD threads
 - 4 VMs with kernel-based forwarding enabled in the guest
 - VMs running a 3.x kernel / QEMU 2.5 / mrg_rxbuf=off
 - 64-byte packets @ line rate, with each VM receiving 25% of the traffic (3.7 Mpps)

With retries enabled the aggregate throughput stands at 2.39 Mpps in steady state, whereas with retries turned off it is 2.42 Mpps.
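
On the configurability itself, the intent is simply that the retry cap above comes from ovsdb instead of being hard-coded. Purely as an illustration of the direction (the key name is taken from the patch title, but the names below are made up and the exact table/column and helper the patch uses may differ):

    #include "smap.h"                /* OVS string-map helpers; header
                                      * location varies by release */

    #define VHOST_ENQ_RETRY_MAX 32   /* illustrative upper bound only */

    static int vhost_enq_retries = 8;    /* today's hard-coded default */

    /* Illustrative only: pick up "vhost-enque-retry" from other_config
     * and clamp it before the datapath uses it as the retry cap. */
    static void
    vhost_enq_retry_configure(const struct smap *other_config)
    {
        int retries = smap_get_int(other_config, "vhost-enque-retry",
                                   vhost_enq_retries);

        if (retries < 0) {
            retries = 0;             /* 0 would mean "never retry" */
        } else if (retries > VHOST_ENQ_RETRY_MAX) {
            retries = VHOST_ENQ_RETRY_MAX;
        }
        vhost_enq_retries = retries;
    }

The enqueue loop would then test retries++ < vhost_enq_retries instead of the fixed VHOST_ENQ_RETRY_NUM.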

Regards,
Bhanuprakash.




