[ovs-discuss] DPDK bandwidth issue

Zhang Qiang dotslash.lu at gmail.com
Wed Oct 19 03:37:51 UTC 2016


Hi all,

I'm using OVS 2.5.90 built with DPDK 16.04-1 on CentOS 7.2 (kernel 3.10.0-327).
Network bandwidth drops severely with DPDK enabled, especially with a DPDK
bond.

With the following setup, the bandwidth is only around 30 Mbit/s:
> ovs-vsctl show
72b1bac3-0f7d-40c9-9b84-cabeff7f5521
Bridge "ovsbr0"
    Port dpdkbond
        Interface "dpdk1"
            type: dpdk
        Interface "dpdk0"
            type: dpdk
    Port "ovsbr0"
        tag: 112
        Interface "ovsbr0"
            type: internal
ovs_version: "2.5.90"
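
For reference, this is roughly how the bridge and bond were set up (commands
shown from memory; datapath_type=netdev is required for dpdk ports):
> ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
> ovs-vsctl add-bond ovsbr0 dpdkbond dpdk0 dpdk1 \
      -- set Interface dpdk0 type=dpdk \
      -- set Interface dpdk1 type=dpdk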

With the bond removed and only dpdk0 in use, the bandwidth is around
850 Mbit/s, still well below plain (non-DPDK) OVS, which nearly reaches the
hardware limit of 1000 Mbit/s.
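
For the single-port test I simply dropped the bond and re-added dpdk0
directly:
> ovs-vsctl del-port ovsbr0 dpdkbond
> ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk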

There are lines in /var/log/openvswitch/ovs-vswitchd.log showing ovs-vswitchd
using 100% CPU:
2016-10-19T11:21:19.304Z|00480|poll_loop|INFO|wakeup due to [POLLIN] on fd
64 (character device /dev/net/tun) at lib/netdev-linux.c:1132 (100% CPU
usage)

I understand that the DPDK PMD threads busy-poll and therefore each occupy a
full core, but is it normal for the ovs-vswitchd process itself to sit at
100% CPU? Could this be related to the bandwidth drop?

I've also tried pinning the PMD threads to cores other than the one
ovs-vswitchd's main thread runs on, to rule out contention, but it didn't
help.
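
The pinning was done with the pmd-cpu-mask knob; the mask below is just an
example value placing the PMD threads on cores 1-2, away from core 0:
> ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6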

What am I doing wrong? Thanks.