[ovs-discuss] [ovs-dev] OVS DPDK NUMA pmd assignment question for physical port

Darrell Ball dball at vmware.com
Wed Sep 6 02:47:07 UTC 2017


This same-NUMA-node limitation has already been removed, although the same NUMA node is still preferred for performance reasons.

commit c37813fdb030b4270d05ad61943754f67021a50d
Author: Billy O'Mahony <billy.o.mahony at intel.com>
Date:   Tue Aug 1 14:38:43 2017 -0700

    dpif-netdev: Assign ports to pmds on non-local numa node.
    
    Previously if there is no available (non-isolated) pmd on the numa node
    for a port then the port is not polled at all. This can result in a
    non-operational system until such time as nics are physically
    repositioned. It is preferable to operate with a pmd on the 'wrong' numa
    node albeit with lower performance. Local pmds are still chosen when
    available.
    
    Signed-off-by: Billy O'Mahony <billy.o.mahony at intel.com>
    Signed-off-by: Ilya Maximets <i.maximets at samsung.com>
    Co-authored-by: Ilya Maximets <i.maximets at samsung.com>
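
For anyone who wants to verify the new behavior, a minimal sketch (the core numbers are illustrative assumptions; numbering depends on your CPU topology):

# Pin the only pmd threads to cores 8 and 9, assumed here to be on NUMA
# node 1, while the DPDK port's NIC sits on NUMA node 0.
$ ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x300

# After the commit above, the port's rx queues should still be listed
# here, assigned to the non-local pmds rather than left unpolled.
$ ovs-appctl dpif-netdev/pmd-rxq-show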


The sentence “The rx queues are assigned to pmd threads on the same NUMA node in a round-robin fashion.”

under

DPDK Physical Port Rx Queues¶

should be removed, since it is outdated in a couple of ways; there is other, correct documentation on the same page
and also here: http://docs.openvswitch.org/en/latest/howto/dpdk/

Maybe you could submit a patch?

Thanks, Darrell


On 9/5/17, 7:18 PM, "ovs-dev-bounces at openvswitch.org on behalf of 王志克" <wangzhike at jd.com> wrote:

    Hi All,
    
    
    
    I read the doc below about pmd assignment for physical ports. I think the limitation “on the same NUMA node” may not be efficient.
    
    
    
    http://docs.openvswitch.org/en/latest/intro/install/dpdk/
    
    DPDK Physical Port Rx Queues <http://docs.openvswitch.org/en/latest/intro/install/dpdk/#dpdk-physical-port-rx-queues>
    
    
    
    $ ovs-vsctl set Interface <DPDK interface> options:n_rxq=<integer>
    
    
    
    The above command sets the number of rx queues for a DPDK physical interface. The rx queues are assigned to pmd threads on the same NUMA node in a round-robin fashion.
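
    For example (a sketch; dpdk0 is a hypothetical port name):

    # Request 4 rx queues on the physical port.
    $ ovs-vsctl set Interface dpdk0 options:n_rxq=4

    # Show which pmd thread each rx queue was assigned to.
    $ ovs-appctl dpif-netdev/pmd-rxq-show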
    
    Consider the case below:
    
    
    
    One host has one PCI NIC on NUMA node 0 and 4 VMs spread across NUMA nodes 0 and 1. There are multiple rx queues configured on the physical NIC. We configured 4 pmds (two CPUs from NUMA node 0, and two CPUs from node 1). Since the physical NIC is located on NUMA node 0, only pmds on the same NUMA node can poll its rxqs. As a result, only two CPUs can be used for polling the physical NIC.
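
    To make the scenario concrete, a sketch of the pmd configuration (core numbers are assumptions; suppose cores 2-3 are on NUMA node 0 and cores 10-11 on node 1):

    # Bits 2,3 (node 0) plus bits 10,11 (node 1) give mask 0xC0C.
    $ ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xC0C

    With the "same NUMA node" behavior, only the two node-0 pmds can poll the NIC's rx queues; the two node-1 pmds never touch them.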
    
    
    
    For comparison, OVS kernel mode has no such limitation.
    
    
    
    So the question:
    
    Should we remove the “same NUMA node” limitation for physical port rx queues? Or do we have other options to improve the performance for this case?
    
    
    
    Br,
    
    Wang Zhike
    
    
    
    _______________________________________________
    dev mailing list
    dev at openvswitch.org
    https://mail.openvswitch.org/mailman/listinfo/ovs-dev
    


