[ovs-dev] Re: iperf tcp issue on veth using afxdp

Yi Yang (杨燚)-Cloud Service Group yangyi01 at inspur.com
Tue Dec 24 01:13:25 UTC 2019


Thanks Yifeng, those are good performance numbers. I'll run it on my machine and get back to you with my results.

-----Original Message-----
From: Yifeng Sun [mailto:pkusunyifeng at gmail.com]
Sent: December 24, 2019 6:59
To: Yi Yang (杨燚)-Cloud Service Group <yangyi01 at inspur.com>
Cc: u9012063 at gmail.com; dev at openvswitch.org; i.maximets at ovn.org; echaudro at redhat.com
Subject: Re: [ovs-dev] iperf tcp issue on veth using afxdp

Hi Yi,

I don't have an OVS DPDK setup yet. I need to set it up first.

On my machine, afxdp can reach 4.6 Gbps.

[  3]  0.0- 1.0 sec   564 MBytes  4.73 Gbits/sec
[  3]  1.0- 2.0 sec   553 MBytes  4.64 Gbits/sec
[  3]  2.0- 3.0 sec   558 MBytes  4.68 Gbits/sec
[  3]  3.0- 4.0 sec   556 MBytes  4.66 Gbits/sec
[  3]  4.0- 5.0 sec   545 MBytes  4.57 Gbits/sec
[  3]  5.0- 6.0 sec   554 MBytes  4.64 Gbits/sec
[  3]  6.0- 7.0 sec   548 MBytes  4.60 Gbits/sec
[  3]  7.0- 8.0 sec   548 MBytes  4.60 Gbits/sec
[  3]  8.0- 9.0 sec   550 MBytes  4.62 Gbits/sec
[  3]  9.0-10.0 sec   548 MBytes  4.60 Gbits/sec

Thanks,
Yifeng

On Sun, Dec 22, 2019 at 4:40 PM Yi Yang (杨燚)-Cloud Service Group <yangyi01 at inspur.com> wrote:
>
> Hi, Yifeng
>
> I'll try it again. By the way, did you try af_packet for veth in OVS DPDK? On my machine it can reach 4 Gbps; do you think af_xdp can reach that number?
>
> -----Original Message-----
> From: Yifeng Sun [mailto:pkusunyifeng at gmail.com]
> Sent: December 21, 2019 9:11
> To: William Tu <u9012063 at gmail.com>
> Cc: dev at openvswitch.org; Ilya Maximets <i.maximets at ovn.org>;
> Eelco Chaudron <echaudro at redhat.com>; Yi Yang
> (杨燚)-Cloud Service Group <yangyi01 at inspur.com>
> Subject: Re: [ovs-dev] iperf tcp issue on veth using afxdp
>
> This seems to be related to netdev-afxdp's batch size being bigger than the kernel's XDP batch size.
> I created a patch to fix it.
>
> https://patchwork.ozlabs.org/patch/1214397/
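>
> For illustration only (a hedged sketch, not the actual patch): if OVS hands
> the AF_XDP transmit path more packets per call than the kernel processes in
> one XDP batch, splitting the send into smaller chunks sidesteps the mismatch.
> MAX_XDP_TX_BATCH and xsk_tx_chunk() below are hypothetical stand-ins, not
> real OVS or libbpf symbols.
>
> #include <stdio.h>
>
> #define MAX_XDP_TX_BATCH 32   /* assumed kernel-side per-call limit */
>
> /* Stand-in for the real AF_XDP TX-ring submission; just reports the chunk. */
> static void xsk_tx_chunk(int start, int count)
> {
>     printf("submit packets %d..%d to the TX ring\n", start, start + count - 1);
> }
>
> /* Split a large OVS batch so no single TX call exceeds the assumed limit. */
> static void send_batch(int n_packets)
> {
>     for (int sent = 0; sent < n_packets; sent += MAX_XDP_TX_BATCH) {
>         int chunk = n_packets - sent;
>         if (chunk > MAX_XDP_TX_BATCH) {
>             chunk = MAX_XDP_TX_BATCH;
>         }
>         xsk_tx_chunk(sent, chunk);
>     }
> }
>
> int main(void)
> {
>     send_batch(64);   /* e.g. a batch of 64 packets split into 32 + 32 */
>     return 0;
> }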
>
> Could anyone take a look at this patch?
>
> Thanks,
> Yifeng
>
> On Fri, Nov 22, 2019 at 9:52 AM William Tu <u9012063 at gmail.com> wrote:
> >
> > Hi Ilya and Eelco,
> >
> > Yi Yang reports very poor TCP performance on his setup, and I can also
> > reproduce it on my machine. I suspect this might be a kernel issue, but I
> > don't know where to start debugging. I need your suggestions on how to
> > debug this.
> >
> > The setup is like the system-traffic tests: create two namespaces and veth
> > devices and attach them to OVS. I do remember to turn off tx offload, and
> > ping, UDP, and nc (TCP mode) all work fine.
> >
> > TCP throughput measured with iperf drops to 0 Mbps after 4 seconds.
> > At server side:
> > root at osboxes:~/ovs# ip netns exec at_ns0 iperf -s
> > ------------------------------------------------------------
> > Server listening on TCP port 5001
> > TCP window size:  128 KByte (default)
> > ------------------------------------------------------------
> > [  4] local 10.1.1.1 port 5001 connected with 10.1.1.2 port 40384 
> > Waiting for server threads to complete. Interrupt again to force quit.
> >
> > At client side
> > root at osboxes:~/bpf-next# ip netns exec at_ns1 iperf -c 10.1.1.1 -i 1 -t 10
> > ------------------------------------------------------------
> > Client connecting to 10.1.1.1, TCP port 5001
> > TCP window size: 85.0 KByte (default)
> > ------------------------------------------------------------
> > [  3] local 10.1.1.2 port 40384 connected with 10.1.1.1 port 5001
> > [ ID] Interval       Transfer     Bandwidth
> > [  3]  0.0- 1.0 sec  17.0 MBytes   143 Mbits/sec
> > [  3]  1.0- 2.0 sec  9.62 MBytes  80.7 Mbits/sec
> > [  3]  2.0- 3.0 sec  6.75 MBytes  56.6 Mbits/sec
> > [  3]  3.0- 4.0 sec  11.0 MBytes  92.3 Mbits/sec
> > [  3]  5.0- 6.0 sec  0.00 Bytes  0.00 bits/sec
> > [  3]  6.0- 7.0 sec  0.00 Bytes  0.00 bits/sec
> > [  3]  7.0- 8.0 sec  0.00 Bytes  0.00 bits/sec
> > [  3]  8.0- 9.0 sec  0.00 Bytes  0.00 bits/sec
> > [  3]  9.0-10.0 sec  0.00 Bytes  0.00 bits/sec
> > [  3] 10.0-11.0 sec  0.00 Bytes  0.00 bits/sec
> >
> > (after this, even ping stops working)
> >
> > Script to reproduce
> > -------------------------
> > ovs-vsctl -- add-br br0 -- set Bridge br0 datapath_type=netdev
> >
> > ip netns add at_ns0
> > ip link add p0 type veth peer name afxdp-p0
> > ip link set p0 netns at_ns0
> > ip link set dev afxdp-p0 up
> > ovs-vsctl add-port br0 afxdp-p0
> >
> > ovs-vsctl -- set interface afxdp-p0 options:n_rxq=1 type="afxdp" options:xdp-mode=native
> >
> > ip netns exec at_ns0 sh << NS_EXEC_HEREDOC
> > ip addr add "10.1.1.1/24" dev p0
> > ip link set dev p0 up
> > NS_EXEC_HEREDOC
> >
> > ip netns add at_ns1
> > ip link add p1 type veth peer name afxdp-p1
> > ip link set p1 netns at_ns1
> > ip link set dev afxdp-p1 up
> > ovs-vsctl add-port br0 afxdp-p1 -- \
> >     set interface afxdp-p1 options:n_rxq=1 type="afxdp" options:xdp-mode=native
> >
> > ip netns exec at_ns1 sh << NS_EXEC_HEREDOC
> > ip addr add "10.1.1.2/24" dev p1
> > ip link set dev p1 up
> > NS_EXEC_HEREDOC
> >
> > ethtool -K afxdp-p0 tx off
> > ethtool -K afxdp-p1 tx off
> > ip netns exec at_ns0 ethtool -K p0 tx off
> > ip netns exec at_ns1 ethtool -K p1 tx off
> >
> > ip netns exec at_ns0 ping -c 10 -i .2 10.1.1.2
> > echo "ip netns exec at_ns1 iperf -c 10.1.1.1 -i 1 -t 10"
> > ip netns exec at_ns0 iperf -s
> >
> > Thank you
> > William
> > _______________________________________________
> > dev mailing list
> > dev at openvswitch.org
> > https://mail.openvswitch.org/mailman/listinfo/ovs-dev

