[ovs-discuss] Multiple dpdk-netdev datapath with OVS 2.9

alp.arslan at xflowresearch.com alp.arslan at xflowresearch.com
Thu May 3 13:42:46 UTC 2018


Enabling/disabling EMC has no effect in this scenario. As far as I know
there is one EMC per PMD thread, so the interfaces have their own EMCs. The
bigger question is why traffic on one interface affects the performance of
the other. Are they sharing anything? The only things I can think of are the
datapath and the megaflow table, and I am looking for some way to separate
those per interface. If this doesn't work, my only other option is to have
4 VMs with pass-through interfaces and run OVS-DPDK inside the VMs.
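As a rough sketch of what I have been checking so far (core numbers below
are just examples, not my actual layout):

    # show which rx queue is polled by which PMD thread
    # (each PMD thread carries its own EMC)
    ovs-appctl dpif-netdev/pmd-rxq-show

    # per-PMD cache stats: EMC hits vs. megaflow hits vs. upcalls
    ovs-appctl dpif-netdev/pmd-stats-show

    # pin each port's queue 0 to a dedicated core so the ports
    # never share a PMD thread (hypothetical core IDs 2 and 4)
    ovs-vsctl set Interface dpdk0 other_config:pmd-rxq-affinity="0:2"
    ovs-vsctl set Interface dpdk1 other_config:pmd-rxq-affinity="0:4"

Even with the rx queues pinned to separate PMDs like this, the interfaces
still go through the same dpcls/megaflow table, which is why I am asking
about multiple datapaths.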


-----Original Message-----
From: O'Reilly, Darragh [mailto:darragh.oreilly at hpe.com] 
Sent: Thursday, May 3, 2018 5:49 PM
To: alp.arslan at xflowresearch.com; discuss at openvswitch.org
Subject: RE: [ovs-discuss] Multiple dpdk-netdev datapath with OVS 2.9

On Wed, May 02, 2018 at 10:02:04PM +0500, alp.arslan at xflowresearch.com
wrote:

> Can anyone explain this bizarre scenario of why the OVS is able to 
> forward more traffic over single interface polled by 6 vCPUs, compared 
> to 4 interfaces polled by 24 vCPUs.

Not really, but I would look at the cache stats: ovs-appctl dpif-netdev/pmd-stats-show

