[ovs-discuss] There's no available (non-isolated) pmd thread on numa node 0, Expect reduced performance.

Alan Kayahan hsykay at gmail.com
Mon Apr 16 18:25:03 UTC 2018


Hi Ian,

>How are you starting the VM? QEMU or Libvirt?
It is a Docker container. I pass the following EAL parameters to the app
running within it:

echo $EAL_PARAMS
-l 1 --master-lcore 1 -n 1 -m 1024 --file-prefix=nf1 --no-pci
--vdev=virtio_user2,path=/var/run/openvswitch/NF1-v0,mac=00:00:92:00:00:03
--vdev=virtio_user3,path=/var/run/openvswitch/NF1-v1,mac=00:00:92:00:00:04
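
A side note, in case it matters here: as I understand the EAL options, -m 1024
only fixes the total hugepage amount and lets DPDK draw it from either socket.
To pin the allocation to node 1 (the values are per socket, node 0 first), the
parameter would instead be something like:

--socket-mem 0,1024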

>to ensure that the vhost port used memory from the same socket as the core
its PMD is running on, I had to compile DPDK with
CONFIG_RTE_LIBRTE_VHOST_NUMA=y.
Wow, thanks. This could even be the answer to the troubles I am having in
my DPDK app. Much appreciated!
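
If anyone else needs it, the way I plan to enable it, assuming the classic
make-based DPDK build layout (the config path is from my tree, and libnuma
headers need to be installed):

sed -i 's/CONFIG_RTE_LIBRTE_VHOST_NUMA=n/CONFIG_RTE_LIBRTE_VHOST_NUMA=y/' config/common_base
make install T=x86_64-native-linuxapp-gcc DESTDIR=install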

Regards,
Alan

2018-04-16 19:38 GMT+02:00 Stokes, Ian <ian.stokes at intel.com>:

> Hi Alan,
>
> How are you starting the VM? QEMU or Libvirt?
>
> The dpdk vhost ports are associated with the NUMA node the virtqueue memory was initially allocated on. So when running the VM you may want to use taskset -c with QEMU to allocate cores associated with NUMA node 1 to run the VM. If using libvirt, try to ensure vcpupin corresponds to node 1 cores in the XML as well.
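>
> For example (an illustrative sketch only; the core IDs assume the node 1 range from the numactl output further down the thread, and the qemu args are elided):
>
> taskset -c 11-13 qemu-system-x86_64 <usual qemu args>
>
> or, in the libvirt domain XML, something like:
>
> <cputune>
>   <vcpupin vcpu='0' cpuset='11'/>
>   <vcpupin vcpu='1' cpuset='12'/>
> </cputune>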
>
> In my testing, to ensure that the vhost port used memory from the same socket as the core its PMD is running on, I had to compile DPDK with CONFIG_RTE_LIBRTE_VHOST_NUMA=y. This avoids the warning altogether, regardless of whether memory is allocated to both sockets.
>
> If you’re interested in how to test this, there is a blog post on vhost-user NUMA awareness in OVS with DPDK that could be of use:
>
> https://software.intel.com/en-us/articles/vhost-user-numa-awareness-in-open-vswitch-with-dpdk
>
> Hope this helps.
>
> Ian
>
> *From:* Alan Kayahan [mailto:hsykay at gmail.com]
> *Sent:* Friday, April 13, 2018 8:05 AM
> *To:* Stokes, Ian <ian.stokes at intel.com>
> *Cc:* ovs-discuss at openvswitch.org
> *Subject:* Re: [ovs-discuss] There's no available (non-isolated) pmd
> thread on numa node 0, Expect reduced performance.
>
> Hi Ian,
>
> > As you are setting all lcore and pmd cores to node 1, why are you giving
> > 1024 MB of memory to node 0?
>
> > When processing packets for this port, the CPU is accessing data across
> > the NUMA nodes, which causes a performance penalty
>
> I am benchmarking performance under different settings and trying to
> understand the roles of OVS and DPDK in mediating the core affinity of pmds
> and hugepage utilization. Your answer helps a lot!
>
> > try using ‘other_config:dpdk-socket-mem=0,4096’ and see if you still
> > see the issue.
>
> But this warning should appear regardless of the socket-mem allocation,
> right? If my understanding is correct, when the OVS pmds are pinned to cores
> 10-19 and the VM's testpmd app is pinned to core 2, the OVS pmd thread
> running on node 1 has to access a hugepage on node 0 that the VM's testpmd
> happens to access as well.
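>
> (As a sanity check on my side, and assuming I am reading its output right,
> I have been dumping the queue-to-pmd assignments to confirm which core each
> rxq actually landed on:
>
> ovs-appctl dpif-netdev/pmd-rxq-show
> )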
>
> Thanks,
>
> Alan
>
> 2018-04-12 18:28 GMT+02:00 Stokes, Ian <ian.stokes at intel.com>:
>
> Hi,
>
> I was able to reproduce the issue on my system.
>
> As you are setting all lcore and pmd cores to node 1, why are you giving
> 1024 MB of memory to node 0?
>
> I saw the same issue on my system, but the warning did not appear once
> memory was allocated to node 1 only.
>
> I would think the VM being launched is using memory for the vhost port
> from node 0; however, the queue for the vhost port is assigned to core 14,
> which is on node 1. When processing packets for this port, the CPU is
> accessing data across the NUMA nodes, which causes a performance penalty,
> hence the warning.
>
> To avoid this, you should ensure all memory and cores operate on the same
> node where possible. Try using ‘other_config:dpdk-socket-mem=0,4096’ and
> see if you still see the issue.
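>
> In other words, keeping the rest of your original command and changing only
> the socket-mem values, something like:
>
> ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=0,4096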
>
> Thanks
>
> Ian
>
> *From:* ovs-discuss-bounces at openvswitch.org [mailto:ovs-discuss-bounces@openvswitch.org] *On Behalf Of* Alan Kayahan
> *Sent:* Thursday, April 12, 2018 2:27 AM
> *To:* ovs-discuss at openvswitch.org
> *Subject:* [ovs-discuss] There's no available (non-isolated) pmd thread
> on numa node 0, Expect reduced performance.
>
> Hello,
>
> On the following setup, where all cores but core 0 are isolated:
>
> available: 2 nodes (0-1)
>
> node 0 cpus: 0 1 2 3 4 5 6 7 8 9
>
> node 1 cpus: 10 11 12 13 14 15 16 17 18 19
>
> I am trying to start OVS entirely on NUMA node 1 as follows:
>
> ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
> other_config:dpdk-lcore-mask=0x00400 other_config:pmd-cpu-mask=0xffc00
> other_config:dpdk-socket-mem=1024,4096
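>
> (For the record: 0x00400 selects core 10 for the lcore thread and 0xffc00
> selects cores 10-19 for the pmds, i.e. everything on node 1.)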
>
> However, when I create a vhost port SRC to attach a VNF (via virtio) on
> node 0, I get the following:
>
> dpif_netdev|WARN|There's no available (non-isolated) pmd thread on numa
> node 0. Queue 0 on port 'SRC' will be assigned to the pmd on core 14 (numa
> node 1). Expect reduced performance.
>
> Any ideas?
>
> Thanks
>