[ovs-discuss] how many CPU cannot allocate for PMD thread?

Guoshuai Li ligs at dtdream.com
Mon Oct 16 11:26:43 UTC 2017


I cannot answer your question, but I can share my environment:


I have 32 CPUs:


[root@gateway1 ~]# cat /proc/cpuinfo | grep processor | wc -l
32
[root@gateway1 ~]#


I configure my pmd-cpu-mask as 0xffffff00.

[root@gateway1 ~]# ovs-vsctl get Open_vSwitch . other_config
{dpdk-init="true", pmd-cpu-mask="0xffffff00"}


I configure my DPDK ports with n_rxq=4; this setting is important:

     Bridge br-ext
         Port bond-ext
             Interface "ext-dpdk-2"
                 type: dpdk
                 options: {dpdk-devargs="0000:84:00.1", n_rxq="4"}
             Interface "ext-dpdk-1"
                 type: dpdk
                 options: {dpdk-devargs="0000:84:00.0", n_rxq="4"}
     Bridge br-agg
         Port bond-agg
             Interface "agg-dpdk-2"
                 type: dpdk
                 options: {dpdk-devargs="0000:07:00.1", n_rxq="4"}
             Interface "agg-dpdk-1"
                 type: dpdk
                 options: {dpdk-devargs="0000:07:00.0", n_rxq="4"}
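For reference, n_rxq is an ordinary per-Interface option, so it can also be set (or changed later) with ovs-vsctl; a sketch using the interface names from the config above:

```shell
# Give each DPDK interface 4 rx queues so 4 PMD threads can poll it.
ovs-vsctl set Interface ext-dpdk-1 options:n_rxq=4
ovs-vsctl set Interface ext-dpdk-2 options:n_rxq=4
ovs-vsctl set Interface agg-dpdk-1 options:n_rxq=4
ovs-vsctl set Interface agg-dpdk-2 options:n_rxq=4
```

These require a running ovs-vswitchd with DPDK initialized, so treat them as a configuration fragment rather than something to run standalone.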

And then CPU usage is 1600%: 4 ports x 4 rx queues = 16 queues, each polled by one PMD thread at 100%.


top - 19:24:27 up 18 days, 24 min,  6 users,  load average: 16.00, 16.00, 16.00
Tasks: 419 total,   1 running, 418 sleeping,   0 stopped,   0 zombie
%Cpu(s): 50.0 us,  0.0 sy,  0.0 ni, 50.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
KiB Mem : 26409787+total, 25773403+free,  5427996 used,   935844 buff/cache
KiB Swap:  4194300 total,  4194300 free,        0 used. 25799068+avail Mem

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM TIME+ COMMAND
 32426 openvsw+  10 -10 5772520 653044  14888 S  1599  0.2 2267:10 ovs-vswitchd



[root@gateway1 ~]# top
top - 19:24:50 up 18 days, 25 min,  6 users,  load average: 16.00, 16.00, 16.00
Tasks: 419 total,   1 running, 418 sleeping,   0 stopped,   0 zombie
%Cpu0  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu1  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu2  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu3  :  0.0 us,  0.3 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu4  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu5  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu6  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu7  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu8  :  0.3 us,  0.3 sy,  0.0 ni, 99.3 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu9  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu10 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu11 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu12 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu13 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu14 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu15 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu16 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu17 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu18 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu19 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu20 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu21 :  0.0 us,  0.3 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu22 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu23 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu24 :  0.0 us,  0.3 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu25 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu26 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu27 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu28 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu29 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu30 :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
%Cpu31 :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi, 0.0 si,  0.0 st
KiB Mem : 26409787+total, 25773369+free,  5428244 used,   935924 buff/cache
KiB Swap:  4194300 total,  4194300 free,        0 used. 25799040+avail Mem




On 2017/10/16 16:07, BALL SUN wrote:
> Sorry for the late reply.
>
> We have reinstalled OVS, but are still having the same issue.
>
> We tried setting pmd-cpu-mask=3, but only CPU0 is occupied.
> %Cpu0  :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu1  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu2  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> %Cpu3  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
>
> #  /usr/local/bin/ovs-vsctl get Open_vSwitch . other_config
> {dpdk-init="true", pmd-cpu-mask="3"}
>
> # /usr/local/bin/ovs-appctl dpif-netdev/pmd-rxq-show
> pmd thread numa_id 0 core_id 0:
> isolated : false
> port: dpdk0 queue-id: 0
> pmd thread numa_id 0 core_id 1:
> isolated : false
>
> Is it because there is only one NUMA node available?
>
> #  numactl -H
> available: 1 nodes (0)
> node 0 cpus: 0 1 2 3
> node 0 size: 8191 MB
> node 0 free: 2633 MB
> node distances:
> node   0
>    0:  10
>
>
>
>
>
>
>
> On Fri, Sep 22, 2017 at 9:16 PM, Flavio Leitner <fbl at sysclose.org> wrote:
>> On Fri, 22 Sep 2017 15:02:20 +0800
>> Sun Paul <paulrbk at gmail.com> wrote:
>>
>>> hi
>>>
>>> We have tried that, e.g. if we set it to 0x22, we are still only
>>> able to see 2 CPUs at 100%. Why?
>> Because that's what you told OVS to do.
>> The mask 0x22 is 0010 0010 and each '1' there represents a CPU.
>>
>> --
>> Flavio
>>
> _______________________________________________
> discuss mailing list
> discuss at openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
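One note on the quoted question: pmd-cpu-mask=3 (binary 11) selects CPUs 0 and 1, so two PMD threads do start, which matches the pmd-rxq-show output above. But dpdk0 has a single rx queue, and that queue is assigned to core 0, so the PMD on core 1 has nothing to poll and stays idle. Building such a mask is plain bit arithmetic; a small illustrative shell snippet (not an OVS command):

```shell
# Build a pmd-cpu-mask that selects a given list of CPUs.
mask=0
for cpu in 0 1; do                       # CPUs we want PMD threads on
    mask=$(( mask | (1 << cpu) ))        # set bit N for CPU N
done
printf 'pmd-cpu-mask=0x%x\n' "$mask"     # pmd-cpu-mask=0x3
```

The resulting value is what would go into `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3`.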


