[ovs-dev] Re: Re: How can we improve veth and tap performance in OVS DPDK?

Yi Yang (杨燚) - Cloud Service Group yangyi01 at inspur.com
Wed Jul 31 08:28:33 UTC 2019


Got it, thanks Ilya.
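
For anyone hitting the same iperf3 issue mentioned below: the fix is simply
to disable tx offloading on both ends of the veth pair, for example
(interface names here are ours, yours will differ):

    ethtool -K veth0 tx off
    ethtool -K veth1 tx off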

-----Original Message-----
From: Ilya Maximets [mailto:i.maximets at samsung.com]
Sent: 31 July 2019 15:50
To: Yi Yang (杨燚) - Cloud Service Group <yangyi01 at inspur.com>; ovs-dev at openvswitch.org
Subject: Re: Re: [ovs-dev] How can we improve veth and tap performance in OVS DPDK?

On 31.07.2019 3:44, Yi Yang (杨燚) - Cloud Service Group wrote:
> Thanks Ilya, it works after disabling tx offload, but the performance is
> indeed very poor, about one tenth of the OVS kernel datapath. This is a
> very strong warning for us; I strongly suggest the OVS documentation tell
> OVS DPDK users this truth in bold.

The truth is that DPDK is intended to bypass the kernel to achieve performance, but you're pushing all the traffic back to the kernel.  In that case you will, obviously, never get better performance than your kernel gives anyway (even with offloading support).  So it makes *no sense* to use DPDK in this kind of setup, sending packets back and forth between the kernel and userspace.  Just keep everything in the kernel.

> 
> For OVN, the information I got last year is that OVN can't support VXLAN;
> is that still true? In my mind, GENEVE is worse than VXLAN as far as
> performance is concerned.

At least, it should be much better than pushing all the traffic back to the kernel.
If you don't like OVN, use ODL or any other SDN controller.

Best regards, Ilya Maximets.

> 
> -----Original Message-----
> From: Ilya Maximets [mailto:i.maximets at samsung.com]
> Sent: 30 July 2019 0:18
> To: ovs-dev at openvswitch.org; Yi Yang (杨燚) - Cloud Service Group <yangyi01 at inspur.com>
> Subject: Re: [ovs-dev] How can we improve veth and tap performance in OVS DPDK?
> 
> 
> 
> On 29.07.2019 19:07, Ilya Maximets wrote:
>>> Hi all,
>>> We're trying OVS DPDK in an OpenStack cloud, but a big warning makes us
>>> hesitate. Floating IPs and qrouter use tap interfaces which are attached
>>> to br-int, and SNAT should work in a similar way, so OVS DPDK will
>>> impact VM network performance significantly. I believe many cloud
>>> providers have deployed OVS DPDK, so my questions are:
>>>
>>> 1. Are there any known ways to improve this?
>>
>> As the Red Hat OSP guide suggests, you could use an SDN controller (like
>> OpenDaylight) or, alternatively, you could use OVN as the network provider
>> for OpenStack. This way all the required functionality is handled by
>> OpenFlow rules inside OVS, with no need to send traffic over veths and
>> taps to the Linux kernel.
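>>
>> Purely for illustration, this is the kind of OpenFlow rule such a
>> controller would install so that a floating-IP rewrite stays inside OVS
>> (the addresses and port number here are made up):
>>
>>     ovs-ofctl add-flow br-int \
>>         "ip,nw_dst=203.0.113.10,actions=mod_nw_dst:10.0.0.5,output:2"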
>>
>>> 2. Is there any existing effort on this? veth in Kubernetes should
>>> have the same performance issue in the OVS DPDK case.
>>
>> It makes no sense right now to run OVS-DPDK on veth pairs in Kubernetes.
>> The only benefit from OVS-DPDK in K8s might be from using 
>> virtio-vhost-user
> 
> I meant virtio-user ports.
> 
>> ports instead of veths for container networking. But this is not implemented.
>> Running DPDK apps inside K8s containers has a lot of unresolved issues right now.
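>>
>> As a rough sketch of what the OVS side of that would look like (port and
>> socket names are arbitrary):
>>
>>     ovs-vsctl add-port br0 vhost-user0 \
>>         -- set Interface vhost-user0 type=dpdkvhostuser
>>     # a DPDK app in the container would then attach with a virtio-user
>>     # vdev, e.g. --vdev=virtio_user0,path=/var/run/openvswitch/vhost-user0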
>>
>> One approach that could improve the performance of veths and taps in the
>> future is using AF_XDP sockets, which are supported in OVS now. But AF_XDP
>> doesn't work properly for virtual interfaces (veths, taps) yet, due to
>> issues in the Linux kernel.
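>>
>> For reference, once that works, attaching a veth via AF_XDP is just
>> (assuming OVS was built with --enable-afxdp; names are examples):
>>
>>     ovs-vsctl add-port br0 veth0 -- set interface veth0 type="afxdp"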
>>
>>>
>>> I also found a very weird issue. I added two veth pairs, one to an OVS
>>> bridge and one to an OVS DPDK bridge; in the OVS case iperf3 works well,
>>> but it doesn't in the OVS DPDK case. What's wrong?
>>
>> This is exactly the same issue as we discussed previously.
>> Disable tx offloading on the veth pairs and everything will work.
>>
>> Best regards, Ilya Maximets.
>>
>>

