[ovs-dev] How can we improve veth and tap performance in OVS DPDK?

Ilya Maximets i.maximets at samsung.com
Mon Jul 29 16:07:41 UTC 2019

> Hi, all
> We’re trying OVS DPDK in an OpenStack cloud, but a big concern makes us hesitate.
> Floating IPs and qrouter use tap interfaces attached to br-int, and SNAT takes
> a similar path, so OVS DPDK will significantly impact VM network performance.
> I believe many cloud providers have deployed OVS DPDK; my questions are:
> 1.       Do we have some known ways to improve this?

As the Red Hat OSP guide suggests, you could use an SDN controller (like OpenDaylight)
or, alternatively, use OVN as the network provider for OpenStack.
That way, all the required functionality (routing, floating IPs, SNAT) is handled by
OpenFlow rules inside OVS, without the need to send traffic over veths and taps to
the Linux kernel.
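For example, with OVN the SNAT and floating-IP behavior becomes logical-router NAT
entries that OVN compiles down to OpenFlow, so no tap or veth is involved. A sketch
(the router name, subnet, and addresses below are made up for illustration):

```shell
# Hypothetical names/addresses, for illustration only.
# SNAT all traffic from a tenant subnet behind one external IP:
ovn-nbctl lr-nat-add tenant-router snat 203.0.113.10 10.0.0.0/24

# A floating IP maps 1:1 in both directions (DNAT + SNAT):
ovn-nbctl lr-nat-add tenant-router dnat_and_snat 203.0.113.11 10.0.0.5
```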

> 2.       Is there any existing effort for this? Veth in kubernetes should
> have the same performance issue in OVS DPDK case.

It makes no sense right now to run OVS-DPDK on veth pairs in Kubernetes.
The only benefit from OVS-DPDK in K8s might be from using virtio-vhost-user
ports instead of veths for container networking. But this is not implemented.
Running DPDK apps inside K8s containers has a lot of unresolved issues right now.

One approach that could improve veth and tap performance in the future is
AF_XDP sockets, which OVS now supports. But AF_XDP doesn't yet work properly
for virtual interfaces (veths, taps) due to issues in the Linux kernel.
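For reference, once OVS is built with AF_XDP support (configured with
--enable-afxdp), attaching a veth to a userspace-datapath bridge looks like
this (bridge and interface names are just examples):

```shell
# Userspace datapath bridge; "afxdp" port type requires an AF_XDP-enabled build.
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 veth0 -- set interface veth0 type="afxdp"
```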

> I also found a very weird issue. I added two veth pairs to an OVS bridge and
> an OVS DPDK bridge; in the OVS case iperf3 works fine, but it doesn't in the
> OVS DPDK case. What's wrong?

This is exactly the same issue we already discussed previously: disable TX
offloading on the veth pairs and everything will work.
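Concretely, assuming the pair endpoints are named veth0 and veth1 (example names),
turning off TX checksum offload on both ends looks like:

```shell
# With TX offload on, the kernel hands over packets with incomplete checksums,
# which the userspace (DPDK) datapath does not fix up, so TCP traffic fails.
ethtool -K veth0 tx off
ethtool -K veth1 tx off
```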

Best regards, Ilya Maximets.
