[ovs-discuss] OVS DPDK performance issue with inter-NUMA data paths

Onkar Pednekar onkar3006 at gmail.com
Tue Nov 27 19:00:02 UTC 2018


Hi all,

I am able to get the expected performance using OVS DPDK on a single-socket
system, but on a system with 2 NUMA nodes the throughput is lower than
expected.

The system has 8 physical cores per socket with hyperthreading enabled, so
32 logical cores in total.

Only one physical 10G interface is in use, and after binding it to DPDK it
gets associated with socket 1.
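
For reference, this is how the NIC's NUMA association can be confirmed (the
PCI address below is only illustrative, not my exact device):

    # NUMA node the NIC is attached to
    cat /sys/bus/pci/devices/0000:81:00.0/numa_node

    # Confirm the DPDK binding
    dpdk-devbind.py --status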

OVS passes the traffic from this interface to the dpdkvhostuser interfaces
of 2 VMs; the vCPUs of each VM are pinned to physical cores on different
sockets.

So the traffic flows are as follows:
PHY <-> VM1 <-> PHY
PHY <-> VM2 <-> PHY
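
The port setup looks roughly like this (bridge/port names and the PCI
address are illustrative, not my exact configuration):

    ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk \
        options:dpdk-devargs=0000:81:00.0
    ovs-vsctl add-port br0 vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser
    ovs-vsctl add-port br0 vhost-user-2 -- set Interface vhost-user-2 type=dpdkvhostuser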

Since the only physical DPDK interface is associated with socket 1, I see
that the PMD core on socket 1 is 100% utilized, while no work is done by the
core on socket 2 where the other PMD thread is pinned. I know this is
expected, since there are no DPDK interfaces associated with socket 2. But
because I have VMs pinned to cores on socket 2, there is a cross-NUMA packet
transfer which I think is hurting the performance.
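
I am using the standard ovs-appctl commands to look at rx queue placement
and per-PMD load:

    # Which rx queues are polled by which PMD (and on which NUMA node)
    ovs-appctl dpif-netdev/pmd-rxq-show

    # Per-PMD cycle/packet statistics
    ovs-appctl dpif-netdev/pmd-stats-show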

I wanted to know if there is any configuration or parameter that can help
optimize this inter-NUMA data path.
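
For example, is tuning along the lines below the right direction, or is
there a better knob for this cross-NUMA path? (The values shown are only
illustrative, not my exact settings.)

    # Reserve hugepage memory on both NUMA nodes
    ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"

    # Pin a port's rx queue to a specific PMD core (queue:core)
    ovs-vsctl set Interface dpdk0 other_config:pmd-rxq-affinity="0:1"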

Thanks,
Onkar