[ovs-dev] [PATCH] netdev-dpdk: Enable INDIRECT_DESC on DPDK vHostUser.

Maxime Coquelin maxime.coquelin at redhat.com
Fri Mar 17 09:52:18 UTC 2017

On 03/17/2017 10:48 AM, Maxime Coquelin wrote:
> Hi Billy,
>
> On 03/01/2017 01:36 PM, Billy O'Mahony wrote:
>> Hi All,
>>
>> I'm creating this patch on the basis of the performance results outlined
>> below. In summary, it appears that enabling INDIRECT_DESC on DPDK
>> vHostUser ports leads to a very large increase in performance when using
>> Linux stack applications in the guest, with no noticeable performance
>> drop for DPDK-based applications in the guest.
>>
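For context, the mechanism involved is the vhost feature mask that OVS
applies at init time; "enabling INDIRECT_DESC" amounts to not masking that
bit. The snippet below is a sketch only -- it assumes DPDK 16.11's
rte_vhost_feature_disable() API and a feature set similar to what
lib/netdev-dpdk.c disables today, and is not the actual hunk from the patch:

    /* Sketch only: assumes rte_vhost_feature_disable() from DPDK 16.11
     * (rte_virtio_net.h); the real code lives in lib/netdev-dpdk.c. */
    #include <stdint.h>
    #include <linux/virtio_net.h>
    #include <linux/virtio_ring.h>
    #include <rte_virtio_net.h>

    static void
    vhost_feature_setup(void)
    {
        /* Features kept disabled for vhost-user ports. */
        uint64_t disabled = 1ULL << VIRTIO_NET_F_HOST_TSO4
                            | 1ULL << VIRTIO_NET_F_HOST_TSO6
                            | 1ULL << VIRTIO_NET_F_CSUM;

        /* Enabling INDIRECT_DESC means leaving
         * 1ULL << VIRTIO_RING_F_INDIRECT_DESC out of this mask, so the
         * guest may negotiate indirect descriptors. */
        rte_vhost_feature_disable(disabled);
    }
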
>> Test#1 (VM-VM iperf3 performance)
>>  VMs use DPDK vhostuser ports
>>  OVS bridge is configured for normal action.
>>  OVS version 603381a (on 2.7.0 branch but before release,
>>      also seen on v2.6.0 and v2.6.1)
>>  DPDK v16.11
>>  QEMU v2.5.0 (also seen with v2.7.1)
>>
>>  Results:
>>   INDIRECT_DESC enabled    5.30 Gbit/s
>>   INDIRECT_DESC disabled   0.05 Gbit/s
> This is indeed a big gain.
> However, isn't there a problem when indirect descriptors are disabled?
> 0.05 Gbit/s is very low, no?
>
> Could you share the iperf3 command line you used?

And is the Rx mergeable buffers feature enabled in this setup?
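
(Side note for reproducibility: mergeable Rx buffers are a virtio-net
feature negotiated per device; with QEMU it can be toggled through the
mrg_rxbuf property, e.g. something along the lines of
"-device virtio-net-pci,netdev=net0,mrg_rxbuf=off", where the netdev name
is just a placeholder. It matters here because mergeable and non-mergeable
receive are handled by different code paths in the vhost backend.)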

>
>> Test#2 (Phy-VM-Phy RFC2544 Throughput)
>>  DPDK PMDs are polling the NIC; a DPDK loopback app is running in the guest.
>>  OVS bridge is configured with port forwarding to the VM (via
>>  dpdkvhostuser ports).
>>  OVS version 603381a (on 2.7.0 branch but before release),
>>      other versions not tested.
>>  DPDK v16.11
>>  QEMU v2.5.0 (also seen with v2.7.1)
>>
>>  Results:
>>   INDIRECT_DESC enabled    2.75 Mpps @64B pkts (0.176 GB/s)
>>   INDIRECT_DESC disabled   2.75 Mpps @64B pkts (0.176 GB/s)
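
For anyone reproducing the Phy-VM-Phy case, the "port forwarding" above is
typically just a pair of explicit flows between the physical and
dpdkvhostuser ports, along these lines (bridge name and OpenFlow port
numbers are illustrative, not taken from Billy's setup):

    ovs-ofctl add-flow br0 in_port=1,actions=output:3
    ovs-ofctl add-flow br0 in_port=4,actions=output:2

with 1/2 being the dpdk (phy) ports and 3/4 the dpdkvhostuser ports that
the guest loopback application bridges between.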
>
> Is this with 0% packet loss?
>
> Regards,
> Maxime

