[ovs-dev] Re: [PATCH v5] Use TPACKET_V3 to accelerate veth for userspace datapath

Yi Yang (Cloud Services Group) yangyi01 at inspur.com
Wed Feb 26 01:06:56 UTC 2020


This is the result in my VM without this patch. The Retr (retransmission)
count is extremely high, which is abnormal; a physical machine shows the
same behavior.
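run-iperf3.sh itself is not included in the thread; a hypothetical iperf3 invocation consistent with the output below (60-second run, 10-second reporting intervals, server at 10.15.1.3) would be:

```shell
# Hypothetical reconstruction; the actual script is not shown in the thread.
# -c 10.15.1.3: client mode targeting the iperf3 server
# -t 60:        run for 60 seconds
# -i 10:        print an interval report every 10 seconds
iperf3 -c 10.15.1.3 -t 60 -i 10
```

The Retr column in the client report counts TCP retransmissions, which is why it is the figure of interest here.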

vagrant at ubuntu1804:~$ sudo ./run-iperf3.sh
Connecting to host 10.15.1.3, port 5201
[  4] local 10.15.1.2 port 54566 connected to 10.15.1.3 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-10.00  sec  4.07 GBytes  3.50 Gbits/sec  37926    170 KBytes
[  4]  10.00-20.00  sec  3.62 GBytes  3.11 Gbits/sec  32138    170 KBytes
[  4]  20.00-30.00  sec  3.81 GBytes  3.27 Gbits/sec  36448    235 KBytes
[  4]  30.00-40.00  sec  4.01 GBytes  3.45 Gbits/sec  38133    153 KBytes
[  4]  40.00-50.00  sec  3.94 GBytes  3.39 Gbits/sec  39605    184 KBytes
[  4]  50.00-60.00  sec  3.74 GBytes  3.21 Gbits/sec  33365    185 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec  23.2 GBytes  3.32 Gbits/sec  217615             sender
[  4]   0.00-60.00  sec  23.2 GBytes  3.32 Gbits/sec                  receiver

Server output:
Accepted connection from 10.15.1.2, port 54564
[  5] local 10.15.1.3 port 5201 connected to 10.15.1.2 port 54566
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.00  sec  4.07 GBytes  3.49 Gbits/sec
[  5]  10.00-20.00  sec  3.62 GBytes  3.11 Gbits/sec
[  5]  20.00-30.00  sec  3.81 GBytes  3.27 Gbits/sec
[  5]  30.00-40.00  sec  4.01 GBytes  3.45 Gbits/sec
[  5]  40.00-50.00  sec  3.94 GBytes  3.39 Gbits/sec
[  5]  50.00-60.00  sec  3.74 GBytes  3.21 Gbits/sec
[  5]  60.00-60.00  sec   127 KBytes  1.25 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-60.00  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-60.00  sec  23.2 GBytes  3.32 Gbits/sec                  receiver


iperf Done.
vagrant at ubuntu1804:~$

-----Original Message-----
From: dev [mailto:ovs-dev-bounces at openvswitch.org] On Behalf Of William Tu
Sent: February 26, 2020 6:32
To: yang_y_yi at 126.com
Cc: yang_y_yi <yang_y_yi at 163.com>; ovs-dev <ovs-dev at openvswitch.org>
Subject: Re: [ovs-dev] [PATCH v5] Use TPACKET_V3 to accelerate veth for
userspace datapath

On Mon, Feb 24, 2020 at 5:01 AM <yang_y_yi at 126.com> wrote:
>
> From: Yi Yang <yangyi01 at inspur.com>
>
> We can avoid high system call overhead by using TPACKET_V3 with a
> DPDK-like poll loop to receive and send packets (note: send still needs
> to call sendto to trigger final packet transmission).
>
> TPACKET_V3 has been supported since Linux kernel 3.10, so every Linux
> kernel that current OVS supports can run TPACKET_V3 without any
> problem.
>
> I can see about a 30% performance improvement for veth with TPACKET_V3
> compared to the previous recvmmsg optimization: about 1.98 Gbps now,
> versus 1.47 Gbps before.
>
> TPACKET_V3 can also support TSO, but only if your kernel supports it;
> this has been verified on Ubuntu 18.04 (5.3.0-40-generic). If you find
> the performance is very poor, please turn off TSO for veth interfaces
> when userspace-tso-enable is set to true.

Did you test the performance with TSO enabled?

I used veth (like your run-iperf3.sh) with kernel 5.3.
Without your patch and with TSO enabled, I get around 6 Gbps, but with
this patch and TSO enabled, the performance drops to 1.9 Gbps.
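For an apples-to-apples comparison, the TSO state of the veth pair matters. Following the advice in the commit message, TSO can be toggled per interface with ethtool; the interface names here are hypothetical placeholders for the actual pair used by the test:

```shell
# Hypothetical veth names; substitute the pair created for the benchmark.
ethtool -K veth0 tso off
ethtool -K veth1 tso off
# Confirm the resulting offload state:
ethtool -k veth0 | grep tcp-segmentation-offload
```

Re-running the same iperf3 test with TSO off on both peers would show whether the regression is specific to the TSO path.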

Regards,
William
_______________________________________________
dev mailing list
dev at openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev

