[ovs-dev] Re: Re: [PATCH v6] Use TPACKET_V3 to accelerate veth for userspace datapath
Yi Yang (杨燚)-云服务集团
yangyi01 at inspur.com
Wed Mar 18 02:00:11 UTC 2020
By the way, with TPACKET_V3, the zero-copy optimization, and is_pmd=true, the performance is much better: 3.77 Gbps, i.e. (3.77 - 1.34) / 1.34 = 1.81, a 181% improvement. Here is the performance data.
is_pmd = true
=============
eipadmin at eip01:~$ sudo ./run-iperf3.sh
Connecting to host 10.15.1.3, port 5201
[ 4] local 10.15.1.2 port 43210 connected to 10.15.1.3 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-10.00 sec 4.34 GBytes 3.73 Gbits/sec 0 3.03 MBytes
[ 4] 10.00-20.00 sec 4.40 GBytes 3.78 Gbits/sec 0 3.03 MBytes
[ 4] 20.00-30.00 sec 4.40 GBytes 3.78 Gbits/sec 0 3.03 MBytes
[ 4] 30.00-40.00 sec 4.40 GBytes 3.78 Gbits/sec 0 3.03 MBytes
[ 4] 40.00-50.00 sec 4.40 GBytes 3.78 Gbits/sec 0 3.03 MBytes
[ 4] 50.00-60.00 sec 4.40 GBytes 3.78 Gbits/sec 0 3.03 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-60.00 sec 26.3 GBytes 3.77 Gbits/sec 0 sender
[ 4] 0.00-60.00 sec 26.3 GBytes 3.77 Gbits/sec receiver
Server output:
Accepted connection from 10.15.1.2, port 43208
[ 5] local 10.15.1.3 port 5201 connected to 10.15.1.2 port 43210
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-10.00 sec 4.32 GBytes 3.71 Gbits/sec
[ 5] 10.00-20.00 sec 4.40 GBytes 3.78 Gbits/sec
[ 5] 20.00-30.00 sec 4.40 GBytes 3.78 Gbits/sec
[ 5] 30.00-40.00 sec 4.40 GBytes 3.78 Gbits/sec
[ 5] 40.00-50.00 sec 4.40 GBytes 3.78 Gbits/sec
[ 5] 50.00-60.00 sec 4.40 GBytes 3.78 Gbits/sec
iperf Done.
eipadmin at eip01:~$
-----Original Message-----
From: William Tu [mailto:u9012063 at gmail.com]
Sent: Tuesday, March 17, 2020 22:58
To: Yi Yang (杨燚)-云服务集团 <yangyi01 at inspur.com>
Cc: i.maximets at ovn.org; blp at ovn.org; yang_y_yi at 163.com; ovs-dev at openvswitch.org
Subject: Re: [ovs-dev] Re: [PATCH v6] Use TPACKET_V3 to accelerate veth for userspace datapath
On Tue, Mar 17, 2020 at 2:08 AM Yi Yang (杨燚)-云服务集团 <yangyi01 at inspur.com> wrote:
>
> Hi, William
>
> Finally, my high-end server is available, so I can do the performance comparison again; TPACKET_V3 obviously brings a big performance improvement, and here is my data. By the way, to get stable performance numbers, please use taskset to pin ovs-vswitchd to a physical core (and do not schedule other tasks onto its logical sibling core), and pin the iperf3 client and iperf3 server to different cores. In my case, ovs-vswitchd is pinned to core 1, the iperf3 server to core 4, and the iperf3 client to core 5.
>
> According to my test, TPACKET_V3 alone gets about a 55% improvement (from 1.34 to 2.08 Gbps, (2.08 - 1.34) / 1.34 = 0.55); with my further optimization (zero copy on the receive side), it improves even more (from 1.34 to 2.21 Gbps, (2.21 - 1.34) / 1.34 = 0.65). So I still think the performance improvement is big; please reconsider it.
>
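The pinning recipe described above can be sketched as follows; the core numbers come from the mail, but the iperf3 invocations are an assumption about what the (unshown) run-iperf3.sh does:

```shell
# Pin ovs-vswitchd to physical core 1 and leave its SMT sibling idle.
sudo taskset -pc 1 "$(pidof ovs-vswitchd)"

# On the server host (10.15.1.3), pin the iperf3 server to core 4:
taskset -c 4 iperf3 -s

# On the client host (10.15.1.2), pin the iperf3 client to core 5
# and run a 60-second test, collecting the server-side report too:
taskset -c 5 iperf3 -c 10.15.1.3 -t 60 --get-server-output
```

These commands need a live two-host setup (and root for repinning ovs-vswitchd), so they are a recipe rather than something to run as-is.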
That's a great improvement.
What is your "zero copy for receive side" optimization?
Is it included in the patch?
Regards,
William