[ovs-dev] What's the problem of high cpu usage of pmd thread in ovs-dpdk version 2.13
Simon Jones
batmanustc at gmail.com
Sat Oct 9 05:22:35 UTC 2021
Hi all,
I'm using OVS-DPDK version 2.13 in OpenStack (deployed with
kolla-ansible, https://docs.openstack.org/kolla-ansible/latest/).
After rebooting the hypervisor, I see high CPU usage from a pmd thread in
ovs-vswitchd. But there are no unusual logs and no heavy traffic, so how
can I find out what the problem is?
I tried to use perf as described here:
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/ovs-dpdk_end_to_end_troubleshooting_guide/troubleshoot_ovs_dpdk_pmd_cpu_usage_with_perf_and_collect_and_send_the_troubleshooting_data
But it just hangs there.
So I want to know: is this a configuration error on my side? For example,
should there be some other setting in Open_vSwitch.other_config in
ovsdb?
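Concretely, what I had in mind was sampling the PMD thread directly with perf, something like the sketch below (the thread lookup via ps is just my guess at the simplest way to get the TID; the Red Hat guide wraps roughly these steps):

```shell
# Find the TID of the PMD thread (top -H shows it as pmd-c01/id:7).
PMD_TID=$(ps -T -o tid,comm -p "$(pidof ovs-vswitchd)" | awk '/pmd/ {print $1; exit}')

# Sample that thread for 10 seconds with call graphs, then show the report.
perf record -t "$PMD_TID" -g -- sleep 10
perf report --stdio | head -40
```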
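I also wonder whether the high usage is just the normal busy-polling of the PMD thread; if I understand right, the per-PMD cycle counters should show whether it is doing real work or only polling empty queues. Something like this (commands as I understand them from OVS 2.13):

```shell
# PMD threads busy-poll their rx queues, so ~100% CPU in top is expected
# even with no traffic. Reset the counters, wait, then read them: a high
# share of idle cycles means the thread is only polling, not processing.
ovs-appctl dpif-netdev/pmd-stats-clear
sleep 10
ovs-appctl dpif-netdev/pmd-stats-show
```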
Thank you!!!
```
No further log output:
[root at host01 ~]# tailf /var/log/kolla/openvswitch/ovs
ovsdb-server.log ovs-vswitchd.log
[root at host01 ~]# tailf /var/log/kolla/openvswitch/ovs-vswitchd.log
2021-10-09T02:41:16.175Z|00351|dpif_netdev|WARN|There's no available
(non-isolated) pmd thread on numa node 0. Queue 0 on port 'enp5s0f0' will
be assigned to the pmd on core 1 (numa node 1). Expect reduced performance.
2021-10-09T02:41:16.177Z|00352|bridge|INFO|bridge br_mgmt: added interface
enp5s0f1.1001 on port 1
2021-10-09T02:41:16.181Z|00353|bridge|INFO|bridge br_mgmt: using datapath
ID 0000043f72a49981
2021-10-09T02:41:16.181Z|00354|rconn|INFO|br_mgmt<->tcp:127.0.0.1:6633:
disconnecting
2021-10-09T02:41:16.187Z|00355|netdev_linux|WARN|error sending Ethernet
packet on enp5s0f1.1001: Network is down
2021-10-09T02:41:17.210Z|00356|rconn|INFO|br_mgmt<->tcp:127.0.0.1:6633:
connecting...
2021-10-09T02:41:18.035Z|00357|rconn|INFO|br_mgmt<->tcp:127.0.0.1:6633:
connected
^C
No packets on this NIC:
[root at host01 ~]# tcpdump -i enp5s0f0 -c 100
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp5s0f0, link-type EN10MB (Ethernet), capture size 262144
bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel
top result:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
23387 root 20 0 256.9g 467624 24984 S 96.1 0.7 882:43.45
ovs-vswitchd
top -H -p 23387 result:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
23444 root 20 0 256.9g 467624 24984 R 95.7 0.7 883:35.70
pmd-c01/id:7
(ovsdpdk-db)[root at host01 openvswitch-2.13.4]# ovs-vsctl show | grep -C 4
enp5s0f0
Port phy-br_data
Interface phy-br_data
type: patch
options: {peer=int-br_data}
Port enp5s0f0 (this is a dpdk-type netdev, polled by pmd-c01)
Interface enp5s0f0
type: dpdk
options: {dpdk-devargs="0000:05:00.0"}
Port br_data
Interface br_data
(ovsdpdk-db)[root at host01 openvswitch-2.13.4]# ovs-vsctl list Open_vSwitch
_uuid : 427f600b-7a06-46e9-b273-5e63e08b1c72
bridges : [606f95bb-0637-4df9-809a-6b85f618ac89,
ce70bf43-ebad-4540-a7e3-e22a6e82fc4d, eae3413a-c859-44b3-a0dc-3b11c1f60a77,
fa0eeef2-c15f-4f49-8ad6-72deb0d2b120]
cur_cfg : 90
datapath_types : [netdev, system]
datapaths : {}
db_version : []
dpdk_initialized : true
dpdk_version : "DPDK 19.11.8"
external_ids : {}
iface_types : [dpdk, dpdkr, dpdkvhostuser, dpdkvhostuserclient,
erspan, geneve, gre, internal, ip6erspan, ip6gre, lisp, patch, stt, system,
tap, vxlan]
manager_options : [65e211d5-a792-4f28-9fab-4eb42034acaf]
next_cfg : 90
other_config : {dpdk-extra=" --proc-type primary ",
dpdk-hugepage-dir="/dev/hugepages", dpdk-init=True, dpdk-lcore-mask="0x1",
dpdk-mem-channels="4", dpdk-socket-mem="1024", pmd-cpu-mask="0x2"}
ovs_version : []
ssl : []
statistics : {}
system_type : []
system_version : []
```
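One thing I notice in the log above is the warning that queue 0 of enp5s0f0 (NUMA node 0) is served by the PMD on core 1 (NUMA node 1), presumably because pmd-cpu-mask="0x2" has no core on node 0. If that could be related, I guess the fix would look something like this (the core number and mask are just an example; they depend on the host topology):

```shell
# Which NUMA node is the NIC on?
cat /sys/bus/pci/devices/0000:05:00.0/numa_node

# Which cores belong to which node?
lscpu | grep -i 'numa node'

# Example only: move the PMD to a core on the NIC's NUMA node,
# e.g. core 2 if core 2 is on node 0 (mask 0x4), then check the
# rxq-to-PMD assignment again.
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x4
ovs-appctl dpif-netdev/pmd-rxq-show
```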
----
Simon Jones