[ovs-discuss] DPDK bandwidth issue

Chandran, Sugesh sugesh.chandran at intel.com
Fri Nov 18 11:07:21 UTC 2016



Regards
_Sugesh

From: discuss [mailto:discuss-bounces at openvswitch.org] On Behalf Of Chandran, Sugesh
Sent: Friday, October 21, 2016 3:17 PM
To: Tashi Lu <dotslash.lu at gmail.com>
Cc: discuss <discuss at openvswitch.org>
Subject: Re: [ovs-discuss] DPDK bandwidth issue



Regards
_Sugesh

From: Tashi Lu [mailto:dotslash.lu at gmail.com]
Sent: Thursday, October 20, 2016 5:11 AM
To: Chandran, Sugesh <sugesh.chandran at intel.com<mailto:sugesh.chandran at intel.com>>
Cc: geza.gemes at gmail.com<mailto:geza.gemes at gmail.com>; discuss <discuss at openvswitch.org<mailto:discuss at openvswitch.org>>
Subject: Re: [ovs-discuss] DPDK bandwidth issue

Thanks Sugesh. But would you please help me further: why does dpdkbond affect bandwidth? With the bond, the bandwidth is only around 30 Mbps; the configuration is shown in my previous post.
[Sugesh] I will test this out and get back to you. I don’t have any performance numbers for bond ports at the moment.
[Sugesh] Sorry for the late reply. This looks like something to do with the test setup. We did some initial benchmarking of active-backup, balance-slb and balance-tcp on DPDK ports. They all deliver the expected performance numbers, except that balance-tcp is slightly lower.
Can you please have a look at the flow rules in the datapath using
ovs-appctl dpctl/dump-flows?
Also, can you check the datapath stats with
ovs-appctl dpctl/show -s
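For reference, the two diagnostics suggested above can be run as follows (this is only a sketch; it assumes a running DPDK-enabled ovs-vswitchd, and the exact output format varies by OVS version):

```shell
# Dump the installed datapath flows; look for whether traffic is hitting
# the expected bond member ports and whether flows are being megaflow-cached.
ovs-appctl dpctl/dump-flows

# Show datapath statistics (-s adds packet/byte counters per port);
# high "lost" or "missed" counts can indicate an upcall/performance problem.
ovs-appctl dpctl/show -s
```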


On 19 Oct 2016, at 3:46 PM, Chandran, Sugesh <sugesh.chandran at intel.com<mailto:sugesh.chandran at intel.com>> wrote:


Regards
_Sugesh

From: discuss [mailto:discuss-bounces at openvswitch.org] On Behalf Of Zhang Qiang
Sent: Wednesday, October 19, 2016 7:55 AM
To: geza.gemes at gmail.com<mailto:geza.gemes at gmail.com>
Cc: discuss <discuss at openvswitch.org<mailto:discuss at openvswitch.org>>
Subject: Re: [ovs-discuss] DPDK bandwidth issue

Geza,
Thanks for your insight.

- What is the packet size at which you see these bandwidth values?
A: I've tried various packet sizes with iperf; no significant differences.

- What endpoints do you use for traffic generation?
A: The bandwidth in question was measured from host to host, no VMs involved.

Your second question got me thinking: maybe it's normal for the host's network performance to drop when DPDK is deployed, because DPDK runs in userspace, which benefits userspace virtual machines but not the host?
[Sugesh] Yes. Packets to the host network are handled by the ovs-vswitchd main thread, not the PMD threads, which implies lower performance compared to ports managed by the PMDs.
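One way to see this split in practice is to compare the PMD statistics against overall datapath counters: traffic on dpdk/vhostuser ports shows up in the PMD stats, while host (internal/tap) traffic does not. A sketch, assuming a running DPDK-enabled ovs-vswitchd:

```shell
# Reset PMD counters before a test run
ovs-appctl dpif-netdev/pmd-stats-clear

# After running iperf, show per-PMD packet and cycle counters;
# host-side traffic handled by the main thread will not appear here.
ovs-appctl dpif-netdev/pmd-stats-show
```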

What about the bond problem? I've tried the active-backup and balance-slb modes (balance-tcp is not supported by the physical switch), and neither changes the situation.
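For reference, a DPDK bond like the one in the original post can be created, and its mode switched, with ovs-vsctl; the bridge, bond, and interface names below follow the earlier configuration, and a DPDK-enabled ovs-vswitchd is assumed:

```shell
# Create a two-member DPDK bond on ovsbr0
ovs-vsctl add-bond ovsbr0 dpdkbond dpdk0 dpdk1 \
    -- set Interface dpdk0 type=dpdk \
    -- set Interface dpdk1 type=dpdk

# Switch the bond mode without recreating the port
ovs-vsctl set Port dpdkbond bond_mode=active-backup

# Inspect negotiated bond state (active member, member status)
ovs-appctl bond/show dpdkbond
```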

On 10/19/2016 06:04 AM, Geza Gemes <geza.gemes at gmail.com<mailto:geza.gemes at gmail.com>> wrote:
> On 10/19/2016 05:37 AM, Zhang Qiang wrote:
>> Hi all,
>>
>> I'm using ovs 2.5.90 built with dpdk 16.04-1 on CentOS
>> 7.2(3.10.0-327). Seems the network bandwidth drops severely with dpdk
>> enabled, especially with dpdkbond.
>>
>> With the following setup, the bandwidth is only around 30Mbits/s:
>> > ovs-vsctl show
>> 72b1bac3-0f7d-40c9-9b84-cabeff7f5521
>> Bridge "ovsbr0"
>>     Port dpdkbond
>>         Interface "dpdk1"
>>             type: dpdk
>>         Interface "dpdk0"
>>             type: dpdk
>>     Port "ovsbr0"
>>         tag: 112
>>         Interface "ovsbr0"
>>             type: internal
>> ovs_version: "2.5.90"
>>
>> With the bond removed and by only using dpdk0, the bandwidth is around
>> 850Mbits/s, still much lower than the performance of bare ovs which
>> nearly reaches the hardware limit of 1000Mbps.
>>
>> There are lines in /var/log/openvswitch/ovs-vswitchd.log showing ovs
>> using 100% CPU:
>> 2016-10-19T11:21:19.304Z|00480|poll_loop|INFO|wakeup due to [POLLIN]
>> on fd 64 (character device /dev/net/tun) at lib/netdev-linux.c:1132
>> (100% CPU usage)
>>
>> I understand that dpdk PMD threads use cores to poll, but is it normal
>> for the ovs-vswitchd process to use 100% of CPU? Is this relevant?
>>
>> I've also tried to pin the PMD threads to different cores than
>> ovs-vswitchd's to eliminate possible interference; it didn't help.
>>
>> What am I doing wrong? Thanks.
>>
>>
>>
>> _______________________________________________
>> discuss mailing list
>> discuss at openvswitch.org<mailto:discuss at openvswitch.org>
>> http://openvswitch.org/mailman/listinfo/discuss
>
>Hi,
>
>A number of questions:
>
>- What is the packet size at which you see these bandwidth values?
>
>- What endpoints do you use for traffic generation? In order to benefit
>from DPDK you have to set up your VM ports as dpdkvhostuser ports (and
>have them backed by hugepages). Otherwise the traffic will undergo
>additional userspace<->kernel copying.
>
>Using 100% CPU for the poll-mode threads is the expected behavior. Also,
>in order to achieve the best performance, please make sure that no other
>processes are scheduled on the cores allocated to DPDK.
>
>Cheers,
>
>Geza
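On the core-allocation point above: the cores used by the PMD threads are selected with the pmd-cpu-mask bitmask in other_config. A small sketch for deriving the hex mask from a core list (core IDs 1 and 2 here are only an example; pick isolated cores on your own system):

```shell
# Build a pmd-cpu-mask value from a list of core IDs (here cores 1 and 2)
cores="1 2"
mask=0
for c in $cores; do
    mask=$(( mask | (1 << c) ))
done
printf 'pmd-cpu-mask=0x%x\n' "$mask"   # prints pmd-cpu-mask=0x6

# Apply it (requires a running DPDK-enabled ovs-vswitchd):
# ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6
```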