[ovs-discuss] ovs-dpdk performance is not good

Traynor, Kevin kevin.traynor at intel.com
Wed Jul 15 16:39:57 UTC 2015



From: gowrishankar [mailto:gowrishankar.m at linux.vnet.ibm.com]
Sent: Wednesday, July 15, 2015 8:38 AM
To: Traynor, Kevin
Cc: Na Zhu; bugs at openvswitch.org
Subject: Re: [ovs-discuss] ovs-dpdk performance is not good

Hi Kevin,

On Tuesday 14 July 2015 05:55 PM, Traynor, Kevin wrote:

[kt] I would check your core affinitization to ensure that the vswitchd
pmd is on a separate core to the vCPUs (set with other_config:pmd-cpu-mask).

I hope you mean host CPUs. Is that right?

[kt] yes, sorry for the confusion
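
As a rough example on the host side (the core mask and thread ids below are
illustrative only - pick cores to suit your own NUMA layout):

    # put the OVS PMD threads on host cores 2-3 (mask 0xC)
    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xC
    # pin the QEMU vCPU threads to different host cores, e.g. 4 and 5
    # (find the vCPU thread pids via the QEMU monitor or virsh vcpupin)
    taskset -pc 4 <qemu-vcpu-thread-pid>
    taskset -pc 5 <qemu-vcpu-thread-pid>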



Also, this test is not using the DPDK virtio PMD in the guest, which provides
performance gains.

In the above topology, VM1-VM2 communication happens within the same host.
So, would that still require the DPDK virtio PMD?

[kt] it’s not a requirement to use it, but it should give better performance

But VM1 also has to communicate
with VM3 (on a remote host). How can we address the flow rules related to VM1 for both
cases?

[kt] you would have different rules for both flows
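
For example, something along these lines (bridge name, port numbers and MAC
addresses are made up for illustration):

    # VM1 -> VM2 stays on this host: output to VM2's vhost-user port
    ovs-ofctl add-flow br-int "in_port=1,dl_dst=52:54:00:00:00:02,actions=output:2"
    # VM1 -> VM3 is on the remote host: output to the vxlan tunnel port
    ovs-ofctl add-flow br-int "in_port=1,dl_dst=52:54:00:00:00:03,actions=output:3"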

I'm also trying to understand whether the DPDK virtio PMD can support an overlay tunnel.
Currently, in the test, I use the OVS integration bridge to connect the vhost-user netdev
and the vxlan port, and then connect the dpdk0 port through an external OVS bridge. Would
that setup not be required in the virtio PMD driver case?
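
Roughly, the bridge setup I am describing looks like this (the names and the
remote IP here are only illustrative, not my exact configuration):

    # integration bridge with the vhost-user port and the vxlan tunnel
    ovs-vsctl add-br br-int -- set bridge br-int datapath_type=netdev
    ovs-vsctl add-port br-int vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser
    ovs-vsctl add-port br-int vxlan0 -- set Interface vxlan0 type=vxlan options:remote_ip=192.168.1.2
    # external bridge carrying the physical dpdk0 port
    ovs-vsctl add-br br-ext -- set bridge br-ext datapath_type=netdev
    ovs-vsctl add-port br-ext dpdk0 -- set Interface dpdk0 type=dpdk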

[kt] The virtio PMD is just a faster transport mechanism to userspace in the
guest using virtio; in an OVS with DPDK context it can be used to connect the
guest to dpdkvhostcuse/dpdkvhostuser ports. Any overlay setup would be independent of it.
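
In the guest, using it would look roughly like this (the PCI address and EAL
options are only an example):

    # bind the guest's virtio-net device to a userspace-capable driver
    modprobe uio_pci_generic
    dpdk_nic_bind.py --bind=uio_pci_generic 0000:00:03.0
    # then run a DPDK application (e.g. testpmd) on top of the virtio PMD
    testpmd -c 0x3 -n 4 -- -i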


I appreciate your help in understanding this performance problem better.

Regards,
Gowrishankar

