[ovs-discuss] OVS-DPDK giving lower throughput than Native OVS

xu.binbin1 at zte.com.cn
Mon Apr 15 03:11:18 UTC 2019


I think the reason for the lower throughput in the OVS-DPDK scenario is that TSO (GSO) and GRO are not supported in OVS-DPDK, so the packets between the VMs are limited to the MTU of the vhostuser ports.

The kernel-based OVS, on the other hand, supports TSO (GSO) and GRO, so TCP packets can be up to 64 KB; that is why the iperf throughput between two VMs is much higher.
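
A quick way to confirm this from inside each VM is to inspect the offload flags on the guest interface (a minimal sketch; the interface name eth0 is an assumption and may differ on your guests):

$ ethtool -k eth0 | grep -E 'tcp-segmentation-offload|generic-segmentation-offload|generic-receive-offload'
        # expect "on" when the path supports TSO/GSO/GRO, "off" otherwise
$ ip link show eth0 | grep mtu
        # when offloads are off, this MTU bounds every TCP segment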
徐斌斌 xubinbin
软件开发工程师 Software Development Engineer
虚拟化南京四部/无线研究院/无线产品经营部 NIV Nanjing Dept. IV/Wireless Product R&D Institute/Wireless Product Operation
南京市雨花台区花神大道6号中兴通讯 4/F, R&D Building, No.6 Huashen Road, Yuhuatai District, Nanjing, P.R. China
M: +86 13851437610
E: xu.binbin1 at zte.com.cn
www.zte.com.cn

Original Mail

From: HarshGondaliya <harshgondaliya_vinodbhai at srmuniv.edu.in>
To: ovs-discuss <ovs-discuss at openvswitch.org>
Date: April 12, 2019, 15:34
Subject: [ovs-discuss] OVS-DPDK giving lower throughput than Native OVS


I had connected two VMs to a native OVS bridge and got an iperf result of around 35-37 Gbps. Now, when I perform similar tests with two VMs connected to an OVS-DPDK bridge using vhostuser ports, I get iperf results of around 6-6.5 Gbps.
I am unable to understand the reason for such low throughput in the case of OVS-DPDK. I am using OVS version 2.11.0.
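
One way to isolate the TSO effect suggested in the reply above is to cap the MSS on the native-OVS run so both setups carry MTU-sized segments (a diagnostic sketch, assuming iperf3; the server address 192.168.1.2 is hypothetical):

$ iperf3 -c 192.168.1.2 -M 1448 -t 30
        # -M caps the TCP MSS near one 1500-byte MTU; if native-OVS
        # throughput now falls toward the OVS-DPDK numbers, the gap is
        # largely due to TSO/GRO rather than datapath overhead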


I have 4 physical cores on my CPU (i.e., 8 logical cores) and 16 GB of system memory. I have allocated 6 GB to the hugepage pool: 2 GB of it was given to the OVS socket-mem option, and the remaining 4 GB was given to the virtual machines for memory backing (2 GB per VM). These are some of the configurations of my OVS-DPDK bridge:
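
For reference, a hugepage split like the one described above would typically be configured along these lines (a sketch using standard ovs-vsctl options; the values mirror the ones stated above):

$ ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
$ ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem=2048
        # 2048 MB of hugepages for OVS-DPDK on NUMA socket 0; the
        # remaining 4 GB backs the two VMs (2 GB each), e.g. via QEMU's
        # memory-backend-file on the hugetlbfs mount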


root@dpdk-OptiPlex-5040:/home/dpdk# ovs-vswitchd unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach
2019-04-12T07:01:00Z|00001|ovs_numa|INFO|Discovered 8 CPU cores on NUMA node 0
2019-04-12T07:01:00Z|00002|ovs_numa|INFO|Discovered 1 NUMA nodes and 8 CPU cores
2019-04-12T07:01:00Z|00003|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock: connecting...
2019-04-12T07:01:00Z|00004|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock: connected
2019-04-12T07:01:00Z|00005|dpdk|INFO|Using DPDK 18.11.0
2019-04-12T07:01:00Z|00006|dpdk|INFO|DPDK Enabled - initializing...
2019-04-12T07:01:00Z|00007|dpdk|INFO|No vhost-sock-dir provided - defaulting to /usr/local/var/run/openvswitch
2019-04-12T07:01:00Z|00008|dpdk|INFO|IOMMU support for vhost-user-client disabled.
2019-04-12T07:01:00Z|00009|dpdk|INFO|Per port memory for DPDK devices disabled.
2019-04-12T07:01:00Z|00010|dpdk|INFO|EAL ARGS: ovs-vswitchd -c 0xA --socket-mem 2048 --socket-limit 2048.
2019-04-12T07:01:00Z|00011|dpdk|INFO|EAL: Detected 8 lcore(s)
2019-04-12T07:01:00Z|00012|dpdk|INFO|EAL: Detected 1 NUMA nodes
2019-04-12T07:01:00Z|00013|dpdk|INFO|EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
2019-04-12T07:01:00Z|00014|dpdk|INFO|EAL: Probing VFIO support...
2019-04-12T07:01:00Z|00015|dpdk|INFO|EAL: PCI device 0000:00:1f.6 on NUMA socket -1
2019-04-12T07:01:00Z|00016|dpdk|WARN|EAL:   Invalid NUMA socket, default to 0
2019-04-12T07:01:00Z|00017|dpdk|INFO|EAL:   probe driver: 8086:15b8 net_e1000_em
2019-04-12T07:01:00Z|00018|dpdk|INFO|DPDK Enabled - initialized
2019-04-12T07:01:00Z|00019|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports recirculation
2019-04-12T07:01:00Z|00020|ofproto_dpif|INFO|netdev@ovs-netdev: VLAN header stack length probed as 1
2019-04-12T07:01:00Z|00021|ofproto_dpif|INFO|netdev@ovs-netdev: MPLS label stack length probed as 3
2019-04-12T07:01:00Z|00022|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports truncate action
2019-04-12T07:01:00Z|00023|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports unique flow ids
2019-04-12T07:01:00Z|00024|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports clone action
2019-04-12T07:01:00Z|00025|ofproto_dpif|INFO|netdev@ovs-netdev: Max sample nesting level probed as 10
2019-04-12T07:01:00Z|00026|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports eventmask in conntrack action
2019-04-12T07:01:00Z|00027|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports ct_clear action
2019-04-12T07:01:00Z|00028|ofproto_dpif|INFO|netdev@ovs-netdev: Max dp_hash algorithm probed to be 1
2019-04-12T07:01:00Z|00029|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports ct_state
2019-04-12T07:01:00Z|00030|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports ct_zone
2019-04-12T07:01:00Z|00031|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports ct_mark
2019-04-12T07:01:00Z|00032|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports ct_label
2019-04-12T07:01:00Z|00033|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports ct_state_nat
2019-04-12T07:01:00Z|00034|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports ct_orig_tuple
2019-04-12T07:01:00Z|00035|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports ct_orig_tuple6
2019-04-12T07:01:00Z|00036|dpdk|INFO|VHOST_CONFIG: vhost-user server: socket created, fd: 48
2019-04-12T07:01:00Z|00037|netdev_dpdk|INFO|Socket /usr/local/var/run/openvswitch/vhost-user2 created for vhost-user port vhost-user2
2019-04-12T07:01:00Z|00038|dpdk|INFO|VHOST_CONFIG: bind to /usr/local/var/run/openvswitch/vhost-user2
2019-04-12T07:01:00Z|00039|netdev_dpdk|WARN|dpdkvhostuser ports are considered deprecated;  please migrate to dpdkvhostuserclient ports.
2019-04-12T07:01:00Z|00040|netdev|WARN|vhost-user2: arguments provided to device that is not configurable
2019-04-12T07:01:00Z|00041|dpif_netdev|INFO|PMD thread on numa_id: 0, core id:  0 created.
2019-04-12T07:01:00Z|00042|dpif_netdev|INFO|PMD thread on numa_id: 0, core id:  2 created.
2019-04-12T07:01:00Z|00043|dpif_netdev|INFO|There are 2 pmd threads on numa node 0
2019-04-12T07:01:00Z|00044|dpif_netdev|INFO|Core 0 on numa node 0 assigned port 'vhost-user2' rx queue 0 (measured processing cycles 0).
2019-04-12T07:01:00Z|00045|bridge|INFO|bridge br0: added interface vhost-user2 on port 2
2019-04-12T07:01:00Z|00046|dpif_netdev|INFO|Core 0 on numa node 0 assigned port 'vhost-user2' rx queue 0 (measured processing cycles 0).
2019-04-12T07:01:00Z|00047|bridge|INFO|bridge br0: added interface br0 on port 65534
2019-04-12T07:01:00Z|00048|dpdk|INFO|VHOST_CONFIG: vhost-user server: socket created, fd: 60
2019-04-12T07:01:00Z|00049|netdev_dpdk|INFO|Socket /usr/local/var/run/openvswitch/vhost-user1 created for vhost-user port vhost-user1
2019-04-12T07:01:00Z|00050|dpdk|INFO|VHOST_CONFIG: bind to /usr/local/var/run/openvswitch/vhost-user1
2019-04-12T07:01:00Z|00051|netdev|WARN|vhost-user1: arguments provided to device that is not configurable
2019-04-12T07:01:00Z|00052|dpif_netdev|INFO|Core 0 on numa node 0 assigned port 'vhost-user2' rx queue 0 (measured processing cycles 0).
2019-04-12T07:01:00Z|00053|dpif_netdev|INFO|Core 2 on numa node 0 assigned port 'vhost-user1' rx queue 0 (measured processing cycles 0).
2019-04-12T07:01:00Z|00054|bridge|INFO|bridge br0: added interface vhost-user1 on port 1
2019-04-12T07:01:00Z|00055|bridge|INFO|bridge br0: using datapath ID 00009642775e6d45
2019-04-12T07:01:00Z|00056|connmgr|INFO|br0: added service controller "punix:/usr/local/var/run/openvswitch/br0.mgmt"
2019-04-12T07:01:00Z|00057|netdev|WARN|vhost-user2: arguments provided to device that is not configurable
2019-04-12T07:01:00Z|00058|netdev|WARN|vhost-user1: arguments provided to device that is not configurable



(1) Any pointers on where I might be going wrong that could result in such low throughput?

(2) How does the specification of dpdk-lcore-mask and pmd-cpu-mask affect the performance of OVS-DPDK?
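
For context, these masks are set through ovs-vsctl; a minimal sketch, with mask values that are purely illustrative for the 8-logical-core layout above, not a recommendation:

$ ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x2
        # core 1 runs OVS's non-forwarding DPDK lcore threads
$ ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x50
        # cores 4 and 6 run the poll-mode-driver (PMD) forwarding loops;
        # PMD cores busy-poll at 100%, so keeping them off the cores that
        # run the VMs' vCPUs (and isolating them, e.g. with isolcpus)
        # generally improves and stabilizes throughput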

