[ovs-dev] Re: Re: [PATCH v6] Use TPACKET_V3 to accelerate veth for userspace datapath

Yi Yang (杨燚)-云服务集团 yangyi01 at inspur.com
Wed Mar 18 00:58:55 UTC 2020

William, have you tried my zero-copy patch? I can send it to you to try on your platform. Following your af_xdp change, I found that dp_packet can use a pre-allocated buffer, so I took the same approach: TPACKET_V3 has already set up the rx ring, so dp_packet can directly use those rx ring buffers.

From: William Tu [mailto:u9012063 at gmail.com]
Sent: March 17, 2020 22:58
To: Yi Yang (杨燚)-云服务集团 <yangyi01 at inspur.com>
Cc: i.maximets at ovn.org; blp at ovn.org; yang_y_yi at 163.com; ovs-dev at openvswitch.org
Subject: Re: [ovs-dev] Re: [PATCH v6] Use TPACKET_V3 to accelerate veth for userspace datapath

On Tue, Mar 17, 2020 at 2:08 AM Yi Yang (杨燚)-云服务集团 <yangyi01 at inspur.com> wrote:
> Hi, William
> Finally, my high-end server is available, so I could run the performance comparison again. TPACKET_V3 clearly gives a big performance improvement; here is my data. By the way, to get stable performance numbers, use taskset to pin ovs-vswitchd to a physical core (and don't schedule any other task on its logical sibling core), and put the iperf3 client and iperf3 server on different cores. In my case, ovs-vswitchd is pinned to core 1, the iperf3 server to core 4, and the iperf3 client to core 5.
> According to my test, TPACKET_V3 gets about a 55% improvement (from 1.34 to 2.08, (2.08-1.34)/1.34 = 0.55). With my further optimization (zero copy on the receive side), it improves even more (from 1.34 to 2.21, (2.21-1.34)/1.34 = 0.65). So I still think the performance improvement is significant; please reconsider the patch.
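The quoted pinning recipe could look roughly like this; the PIDs, core numbers, and iperf3 arguments below are examples taken from the description above, not commands from the original thread:

```shell
# Pin the already-running ovs-vswitchd to physical core 1.
taskset -pc 1 "$(pidof ovs-vswitchd)"

# iperf3 server on core 4, client on core 5 (separate cores from the switch).
taskset -c 4 iperf3 -s &
taskset -c 5 iperf3 -c 127.0.0.1 -t 30
```

Keeping the sibling hyperthread of core 1 idle, as the quoted mail suggests, avoids noisy-neighbor jitter in the measurement.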

That's a great improvement.
What is your "zero copy for receive side" optimization?
Is it included in the patch?

