[ovs-discuss] OVS-DPDK performance problem in Openstack Ocata

40724670 at qq.com
Thu Apr 20 07:34:51 UTC 2017


Hi,

I tested OVS-DPDK (compiled from OVS 2.6.1) under OpenStack Ocata, and found the performance to be low.
Can someone give me some suggestions?

An iperf3 throughput test between 2 VMs on the same compute node gives only 6.82 Gbits/sec. Intel 82599ES 10-Gigabit SFI/SFP+ NICs bound to igb_uio are used.
====================================
Transfer          Bandwidth         Retr
7.94 GBytes       6.82 Gbits/sec   17119      sender
7.94 GBytes       6.82 Gbits/sec              receiver
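
For completeness, this was a plain TCP run along these lines (the server address is a placeholder; other iperf3 options were left at their defaults):

(on VM1) iperf3 -s
(on VM2) iperf3 -c <VM1-ip>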

Environment:
1 compute node with 2 DPDK-backed VMs; iperf3 runs between the 2 VMs.
Compute node hardware: Huawei RH2288H V3, 2 x E5-2640 v3 @ 2.6 GHz, 8 x 16 GB DDR3


[root@EXTENV-10-254-9-7 ~]# top
top - 16:17:10 up 30 min,  5 users,  load average: 5.36, 5.53, 4.26
Tasks: 387 total,   1 running, 386 sleeping,   0 stopped,   0 zombie
%Cpu(s): 18.4 us,  0.3 sy,  0.0 ni, 81.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 26357619+total,   346508 free, 26285744+used,   372236 buff/cache
KiB Swap: 13421772+total, 13421772+free,        0 used.   372340 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 3841 root      10 -10 8081120 366680  12928 S 402.0  0.1  76:02.16 ovs-vswitchd
 4080 root      20   0 1984008 233264  11880 S 100.0  0.1  10:50.82 qemu-kvm
 4068 root      20   0 1981960 266368  11880 S  94.7  0.1  10:23.92 qemu-kvm
  249 root      25   5       0      0      0 S   3.7  0.0   3:10.67 ksmd
 1094 root      20   0    4368    676    520 S   0.7  0.0   0:02.61 rngd
 3986 neutron   20   0  350596  85664   5180 S   0.3  0.0   0:05.27 neutron-op


[root@EXTENV-10-254-9-7 ~]# sudo top -p `pidof ovs-vswitchd` -H -d1
top - 16:31:18 up 44 min,  5 users,  load average: 4.12, 4.34, 4.39
Threads:  41 total,   4 running,  37 sleeping,   0 stopped,   0 zombie
%Cpu(s): 12.5 us,  0.3 sy,  0.0 ni, 87.1 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 26357619+total,   347080 free, 26285494+used,   374160 buff/cache
KiB Swap: 13421772+total, 13421772+free,        0 used.   373876 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND                                                                                                                         
 3953 root      10 -10 8081120 366680  12928 R 99.9  0.1  32:54.06 pmd93                                                                                                                           
 3950 root      10 -10 8081120 366680  12928 R 99.9  0.1  32:54.07 pmd90                                                                                                                           
 3951 root      10 -10 8081120 366680  12928 R 99.9  0.1  32:54.07 pmd91                                                                                                                           
 3952 root      10 -10 8081120 366680  12928 R 99.9  0.1  32:54.07 pmd92                                                                                                                           
 3841 root      10 -10 8081120 366680  12928 S  0.0  0.1   1:03.48 ovs-vswitchd                                                                                                                    
 3842 root      10 -10 8081120 366680  12928 S  0.0  0.1   0:00.00 vfio-sync    
 

[root@EXTENV-10-254-9-7 ~]# ovs-appctl dpif-netdev/pmd-stats-show
pmd thread numa_id 0 core_id 6:
emc hits:555098657
megaflow hits:27
avg. subtable lookups per hit:1.19
miss:43
lost:0
polling cycles:2617584158048 (74.68%)
processing cycles:887414434068 (25.32%)
avg cycles per packet: 6314.19 (3504998592116/555098759)
avg processing cycles per packet: 1598.66 (887414434068/555098759)
pmd thread numa_id 0 core_id 7:
emc hits:0
megaflow hits:0
avg. subtable lookups per hit:0.00
miss:0
lost:0
polling cycles:2024072202084 (100.00%)
processing cycles:0 (0.00%)
pmd thread numa_id 0 core_id 22:
emc hits:0
megaflow hits:0
avg. subtable lookups per hit:0.00
miss:0
lost:0
polling cycles:2695473929852 (100.00%)
processing cycles:0 (0.00%)
pmd thread numa_id 0 core_id 23:
emc hits:68276661
megaflow hits:27
avg. subtable lookups per hit:1.15
miss:42
lost:0
polling cycles:3094829433060 (94.27%)
processing cycles:188263651948 (5.73%)
avg cycles per packet: 48085.07 (3283093085008/68276761)
avg processing cycles per packet: 2757.36 (188263651948/68276761)
main thread:
emc hits:0
megaflow hits:0
avg. subtable lookups per hit:0.00
miss:0
lost:0
polling cycles:65844260 (100.00%)
processing cycles:0 (0.00%)
[root@EXTENV-10-254-9-7 ~]#
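
If I read these counters right, the PMDs on cores 6 and 23 carry all the traffic, while the ones on cores 7 and 22 spin at 100% polling with zero hits. The counters accumulate from startup, so before sampling a single run I clear them first:

# ovs-appctl dpif-netdev/pmd-stats-clear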


# ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 0 core_id 6:
isolated : false
port: vhu6e13f9fe-a6 queue-id: 0
pmd thread numa_id 0 core_id 7:
isolated : false
port: vhu69e40c2f-d0 queue-id: 0
pmd thread numa_id 0 core_id 22:
isolated : false
port: dpdk0 queue-id: 0
pmd thread numa_id 0 core_id 23:
isolated : false
port: vhu0161a0d7-ae queue-id: 0
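
So each port has a single rx queue, each handled by one PMD. In case manual placement matters here, my understanding is that a queue can be pinned to a specific core with pmd-rxq-affinity (I have not tried this; the queue-to-core mapping below is only an example):

# ovs-vsctl set Interface dpdk0 other_config:pmd-rxq-affinity="0:22"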


# dpdk-devbind -s
Network devices using DPDK-compatible driver
============================================
0000:02:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe,vfio-pci
0000:02:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe,vfio-pci
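
(Both ports were bound roughly like this; the exact script name depends on how DPDK was packaged:)

# dpdk-devbind --bind=igb_uio 0000:02:00.0 0000:02:00.1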


[root@EXTENV-10-254-9-7 ~]# uname -a
Linux EXTENV-10-254-9-7 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-327.el7.x86_64 root=/dev/mapper/bclinux-root ro crashkernel=auto rd.lvm.lv=bclinux/root rd.lvm.lv=bclinux/swap rhgb quiet default_hugepagesz=1GB hugepagesz=1G hugepages=200 isolcpus=2,3,26,27

# cat /proc/meminfo | grep Huge
AnonHugePages:    710656 kB
HugePages_Total:     248
HugePages_Free:      245
HugePages_Rsvd:        0
HugePages_Surp:        0
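
Since dpdk-socket-mem is allocated per NUMA node, the per-node availability of 1G pages can also be checked with:

# cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages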


# ovs-vsctl list Open_vSwitch
_uuid               : b000a321-12ea-47c6-9e96-3b76368b0336
bridges             : [5b784499-4af6-4354-92e1-e20712339bd3, a353f240-92f7-4f2d-b5e8-4aa32c6a5a24]
cur_cfg             : 894
datapath_types      : [netdev, system]
db_version          : "7.14.0"
external_ids        : {hostname="EXTENV-10-254-9-7", system-id="c4e79302-273a-4afe-b77c-397e383a3fa5"}
iface_types         : [dpdk, dpdkr, dpdkvhostuser, dpdkvhostuserclient, geneve, gre, internal, ipsec_gre, lisp, patch, stt, system, tap, vxlan]
manager_options     : [1d16f251-b844-44cd-a044-8cffbfd7ece2]
next_cfg            : 894
other_config        : {dpdk-alloc-mem="2048", dpdk-init="true", dpdk-socket-mem="1024", pmd-cpu-mask="c000c0"}
ovs_version         : "2.6.1-dpdk1"
ssl                 : []
statistics          : {}
system_type         : centos
system_version      : "7"
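
The PMD mask was set along these lines:

# ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=c000c0

0xc000c0 sets bits 6, 7, 22 and 23, which matches the four PMD threads shown above; per the cpu_layout output below, 6/22 and 7/23 are hyperthread sibling pairs on Socket 0.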


# cpu_layout.py 
============================================================
Core and Socket Information (as reported by '/proc/cpuinfo')
============================================================

cores =  [0, 1, 2, 3, 4, 5, 6, 7]
sockets =  [0, 1]

       Socket 0        Socket 1       
       --------        --------       
Core 0 [0, 16]         [8, 24]        
Core 1 [1, 17]         [9, 25]        
Core 2 [2, 18]         [10, 26]       
Core 3 [3, 19]         [11, 27]       
Core 4 [4, 20]         [12, 28]       
Core 5 [5, 21]         [13, 29]       
Core 6 [6, 22]         [14, 30]       
Core 7 [7, 23]         [15, 31] 


[root@EXTENV-10-254-9-7 ~]# ovs-vsctl show
b000a321-12ea-47c6-9e96-3b76368b0336
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-dpdk
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-dpdk
            tag: 787
            Interface br-dpdk
                type: internal
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
                options: {dpdk-devargs="0000:02:00.0"}
        Port phy-br-dpdk
            Interface phy-br-dpdk
                type: patch
                options: {peer=int-br-dpdk}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "vhu69e40c2f-d0"
            tag: 1
            Interface "vhu69e40c2f-d0"
                type: dpdkvhostuserclient
                options: {vhost-server-path="/var/run/openvswitch/vhu69e40c2f-d0"}
        Port "vhu0161a0d7-ae"
            tag: 1
            Interface "vhu0161a0d7-ae"
                type: dpdkvhostuserclient
                options: {vhost-server-path="/var/run/openvswitch/vhu0161a0d7-ae"}
        Port int-br-dpdk
            Interface int-br-dpdk
                type: patch
                options: {peer=phy-br-dpdk}
        Port br-int
            Interface br-int
                type: internal
        Port "vhu6e13f9fe-a6"
            tag: 1
            Interface "vhu6e13f9fe-a6"
                type: dpdkvhostuserclient
                options: {vhost-server-path="/var/run/openvswitch/vhu6e13f9fe-a6"}
    ovs_version: "2.6.1-dpdk1"
[root@EXTENV-10-254-9-7 ~]#



40724670 at qq.com