[ovs-dev] Re: [PATCH v5] Use TPACKET_V3 to accelerate veth for userspace datapath

Yi Yang (杨燚)-云服务集团 yangyi01 at inspur.com
Fri Feb 28 00:46:21 UTC 2020


William, here I didn't use my patch; I just showed you that tap is OK while veth is not. From the captured packets I'm very sure the packets are truncated, and veth's packets look different from tap's: with big packets, tap's packet sizes are all about 64K, but veth does not show that pattern, with a 1514-byte packet following each big packet. So I think the code for veth is wrong. Yes, I'm verifying it and will send out a patch to fix this issue once it is verified.
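As an aside (not from the thread), one quick way to see the size pattern described above is to tally the lengths tcpdump reports: a healthy TSO/GSO path shows lengths around 64K, while the broken veth path clusters at about 1514. The sample lines below are illustrative; in practice you would pipe live `tcpdump -l -nn -i <iface> tcp` output into the same awk pipeline (the interface name is an assumption).

```shell
# Sketch: summarize packet lengths from tcpdump text output.  Two sample
# lines stand in for a live capture so the pipeline is self-contained.
printf '%s\n' \
  'IP 10.15.1.2.54572 > 10.15.1.3.5201: Flags [.], seq 1:64241, ack 1, win 512, length 64240' \
  'IP 10.15.1.2.54572 > 10.15.1.3.5201: Flags [.], seq 64241:65755, ack 1, win 512, length 1514' \
  | awk '{ for (i = 1; i <= NF; i++) if ($i == "length") print $(i+1) }' \
  | sort -n | uniq -c
```

The output is a count per distinct packet length, so a capture dominated by 1514-byte entries instead of ~64K ones points at truncation on the receive path.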

-----Original Message-----
From: William Tu [mailto:u9012063 at gmail.com]
Sent: 27 February 2020 23:30
To: Yi Yang (杨燚)-云服务集团 <yangyi01 at inspur.com>
Cc: yang_y_yi at 126.com; yang_y_yi at 163.com; ovs-dev at openvswitch.org
Subject: Re: [ovs-dev] [PATCH v5] Use TPACKET_V3 to accelerate veth for userspace datapath

On Tue, Feb 25, 2020 at 5:41 PM Yi Yang (杨燚)-云服务集团 <yangyi01 at inspur.com> wrote:
>
> In the same environment, but using tap instead of veth, the Retr count
> is 0 for the case without this patch (of course, I applied Flavio's
> tap enable patch)
>

Right, because tap does not use the tpacket_v3 mmap path, so it works fine.

> vagrant at ubuntu1804:~$ sudo ./run-iperf3.sh
> Connecting to host 10.15.1.3, port 5201
> [  4] local 10.15.1.2 port 54572 connected to 10.15.1.3 port 5201
> [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
> [  4]   0.00-10.00  sec  12.6 GBytes  10.9 Gbits/sec    0   3.14 MBytes
> [  4]  10.00-20.00  sec  12.8 GBytes  11.0 Gbits/sec    0   3.14 MBytes
> [  4]  20.00-30.00  sec  10.2 GBytes  8.76 Gbits/sec    0   3.14 MBytes
> [  4]  30.00-40.00  sec  10.0 GBytes  8.63 Gbits/sec    0   3.14 MBytes
> [  4]  40.00-50.00  sec  10.4 GBytes  8.94 Gbits/sec    0   3.14 MBytes
> [  4]  50.00-60.00  sec  10.8 GBytes  9.31 Gbits/sec    0   3.14 MBytes
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval           Transfer     Bandwidth       Retr
> [  4]   0.00-60.00  sec  67.0 GBytes  9.59 Gbits/sec    0             sender
> [  4]   0.00-60.00  sec  67.0 GBytes  9.59 Gbits/sec                  receiver
>
<snip>
> >
> > I can see about 30% performance improvement for veth compared to the
> > last recvmmsg optimization if I use TPACKET_V3: it is about 1.98
> > Gbps now, but it was 1.47 Gbps before.
> >
> > TPACKET_V3 can support TSO, but only if your kernel supports it;
> > this has been verified on Ubuntu 18.04 (5.3.0-40-generic). If you
> > find the performance is very poor, please turn off TSO for veth
> > interfaces in case userspace-tso-enable is set to true.
>
> Did you test the performance with TSO enabled?
>
> Using veth (like your run-iperf3.sh) and with kernel 5.3:
> without your patch, with TSO enabled, I can get around 6 Gbps, but
> with this patch, with TSO enabled, the performance drops to 1.9 Gbps.
>
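The workaround quoted earlier (turning TSO off on veth interfaces when userspace-tso-enable is true) can be applied with ethtool. This is a sketch; "veth0"/"veth1" are assumed interface names, not taken from the thread:

```shell
# Disable TSO on both veth endpoints.  Turning off GSO as well stops the
# kernel from handing larger-than-MTU frames to the device.
for dev in veth0 veth1; do
    sudo ethtool -K "$dev" tso off gso off
done
# Confirm the new offload settings:
sudo ethtool -k veth0 | grep -E 'tcp-segmentation-offload|generic-segmentation-offload'
```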

Are you investigating this issue?
William

