[ovs-dev] [PATCH] netdev-dpdk: Enable INDIRECT_DESC on DPDK vHostUser.

Kevin Traynor ktraynor at redhat.com
Tue Mar 21 10:58:17 UTC 2017


On 03/20/2017 11:19 AM, O Mahony, Billy wrote:
> Hi Kevin,
> 
> 
>> -----Original Message-----
>> From: Kevin Traynor [mailto:ktraynor at redhat.com]
>> Sent: Thursday, March 16, 2017 6:35 PM
>> To: O Mahony, Billy <billy.o.mahony at intel.com>; dev at openvswitch.org
>> Cc: Loftus, Ciara <ciara.loftus at intel.com>; Maxime Coquelin
>> <maxime.coquelin at redhat.com>
>> Subject: Re: [ovs-dev] [PATCH] netdev-dpdk: Enable INDIRECT_DESC on
>> DPDK vHostUser.
>>
>> On 03/01/2017 12:36 PM, Billy O'Mahony wrote:
>>> Hi All,
>>>
>>> I'm creating this patch on the basis of the performance results
>>> outlined below. In summary, it appears that enabling INDIRECT_DESC on
>>> DPDK vHostUser ports leads to a very large increase in performance
>>> when using Linux-stack applications in the guest, with no noticeable
>>> performance drop for DPDK-based applications in the guest.
>>>
>>> Test#1 (VM-VM iperf3 performance)
>>>  VMs use DPDK vhostuser ports
>>>  OVS bridge is configured for normal action.
>>>  OVS version 603381a (on 2.7.0 branch but before release,
>>>      also seen on v2.6.0 and v2.6.1)
>>>  DPDK v16.11
>>>  QEMU v2.5.0 (also seen with v2.7.1)
>>>
>>>  Results:
>>>   INDIRECT_DESC enabled    5.30 Gbit/s
>>>   INDIRECT_DESC disabled   0.05 Gbit/s
>>>
>>> Test#2 (Phy-VM-Phy RFC2544 Throughput)
>>>  DPDK PMDs are polling the NIC; DPDK loopback app running in the guest.
>>>  OVS bridge is configured with port forwarding to the VM (via
>>>      dpdkvhostuser ports).
>>>  OVS version 603381a (on 2.7.0 branch but before release),
>>>      other versions not tested.
>>>  DPDK v16.11
>>>  QEMU v2.5.0 (also seen with v2.7.1)
>>>
>>>  Results:
>>>   INDIRECT_DESC enabled    2.75 Mpps @64B pkts (0.176 GB/s)
>>>   INDIRECT_DESC disabled   2.75 Mpps @64B pkts (0.176 GB/s)
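>>>
>>>  (Unit check: 2.75e6 pkt/s x 64 B/pkt = 176e6 B/s = 0.176 GB/s,
>>>   i.e. ~1.41 Gbit/s of 64B frame data, excluding preamble and
>>>   inter-frame gap.)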
>>>
>>
>> Hi Billy, I see a slight drop (3%) with indirect descriptors enabled in
>> a raw throughput test. Ciara previously reported 6%. Did you test this
>> as well as the 0% loss test?
> 
> [[BO'M]] 
> I didn't repeat the test, but I did discuss it with the whole team here before submitting the patch, and we were happy to proceed.
> 
> The feeling was that a raw throughput test (i.e. the 'fire hydrant' test, where packets are sent at line rate and the result is the number of packets forwarded regardless of loss rate), while quick to perform, is not as relevant as a maximum lossless forwarding rate test.
>

0% loss is ultra-sensitive to glitches/tuning, and nobody knows what
tuning you have on your system. Raw throughput, or RFC2544 with a small
acceptable loss, is also useful as it allows for glitches/tuning
differences and may indicate whether you are creating a bottleneck for
someone whose system is tuned differently. In this case, though, I've
run it and the drop is only a few percent, so it should be fine.

>> Kevin.
>>
>>>
>>> Billy O'Mahony (1):
>>>   netdev-dpdk: Enable INDIRECT_DESC on DPDK vHostUser.
>>>
>>>  lib/netdev-dpdk.c | 3 +--
>>>  1 file changed, 1 insertion(+), 2 deletions(-)
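>>>
>>> For reference, a minimal sketch of what the change amounts to,
>>> assuming the DPDK v16.11 vhost API (the wrapper function below is
>>> illustrative only, not the literal patch):
>>>
>>>     #include <linux/virtio_ring.h>  /* VIRTIO_RING_F_INDIRECT_DESC */
>>>     #include <rte_virtio_net.h>     /* rte_vhost_feature_disable() */
>>>
>>>     static void
>>>     vhost_adjust_features(void)
>>>     {
>>>         /* Before the patch, OVS masked indirect descriptors out of
>>>          * the feature set that vhost-user advertises to guests: */
>>>         rte_vhost_feature_disable(1ULL << VIRTIO_RING_F_INDIRECT_DESC);
>>>
>>>         /* The patch removes that call, so the guest's virtio driver
>>>          * is free to negotiate VIRTIO_RING_F_INDIRECT_DESC and chain
>>>          * multiple buffers through one indirect descriptor table. */
>>>     }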
>>>
> 


