[ovs-dev] Re: Re: Why is ovs DPDK much worse than ovs in my test case?

Yi Yang (杨燚)-云服务集团 yangyi01 at inspur.com
Fri Jul 12 00:44:33 UTC 2019


Ilya, you're right. I captured 64K packets even though the MTU is 1500 when I use ovs-kernel, but the packet size is < 1500 in most cases when I use OVS-DPDK.

00:34:33.331360 IP 192.168.200.101.48968 > 192.168.230.101.5201: Flags [.], seq 17462881:17528041, ack 0, win 229, options [nop,nop,TS val 148218621 ecr 148145855], length 65160

00:34:33.332064 IP 192.168.200.101.48968 > 192.168.230.101.5201: Flags [.], seq 17528041:17588857, ack 0, win 229, options [nop,nop,TS val 148218621 ecr 148145855], length 60816
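
For reference, a capture like the one above can be taken on the relevant interface with plain tcpdump (the interface name here is only a placeholder):

  sudo tcpdump -i eth1 -nn 'tcp port 5201'

The length field then shows directly whether 64K TSO packets or MTU-sized segments are flowing.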

Thank you so much; I will use e1000 for this. It would be great if OVS-DPDK could handle this the same way the kernel does; otherwise it will hurt people's impression of OVS-DPDK. It shocked me, at least.

-----Original Message-----
From: Ilya Maximets [mailto:i.maximets at samsung.com]
Sent: July 11, 2019 15:35
To: Yi Yang (杨燚)-云服务集团 <yangyi01 at inspur.com>; ovs-dev at openvswitch.org
Subject: Re: Re: [ovs-dev] Why is ovs DPDK much worse than ovs in my test case?

On 11.07.2019 3:27, Yi Yang (杨燚)-云服务集团 wrote:
> BTW, offload features are turned on in my test client1 and server1 (the iperf server)
> 
...
> -----Original Message-----
> From: Yi Yang (杨燚)-云服务集团
> Sent: July 11, 2019 8:22
> To: i.maximets at samsung.com; ovs-dev at openvswitch.org
> Cc: Yi Yang (杨燚)-云服务集团 <yangyi01 at inspur.com>
> Subject: Re: [ovs-dev] Why is ovs DPDK much worse than ovs in my test case?
> Importance: High
> 
> Ilya, thank you so much. Using a 9K MTU for all the virtio interfaces in the transport path (including the DPDK port) does help; the data is below.

8K usually works a bit better for me than 9K, probably because of the page size.

Have you configured the MTU for the tap interfaces on the host side too, just in case the host kernel doesn't negotiate the MTU with the guest?
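
If not, something along these lines should do it (port and device names here are just placeholders for your setup):

  # request a large MTU on the OVS/DPDK ports
  ovs-vsctl set Interface dpdk0 mtu_request=9000
  # and match it on the host-side tap devices
  ip link set dev tap0 mtu 9000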

> 
> vagrant at client1:~$ iperf -t 60 -i 10 -c 192.168.230.101
> ------------------------------------------------------------
> Client connecting to 192.168.230.101, TCP port 5001
> TCP window size:  325 KByte (default)
> ------------------------------------------------------------
> [  3] local 192.168.200.101 port 53956 connected with 192.168.230.101 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-10.0 sec   315 MBytes   264 Mbits/sec
> [  3] 10.0-20.0 sec   333 MBytes   280 Mbits/sec
> [  3] 20.0-30.0 sec   300 MBytes   252 Mbits/sec
> [  3] 30.0-40.0 sec   307 MBytes   258 Mbits/sec
> [  3] 40.0-50.0 sec   322 MBytes   270 Mbits/sec
> [  3] 50.0-60.0 sec   316 MBytes   265 Mbits/sec
> [  3]  0.0-60.0 sec  1.85 GBytes   265 Mbits/sec
> vagrant at client1:~$
> 
> But it is still much worse than ovs-kernel. In my test case I used VirtualBox networking, so the transport path traverses several different VMs, and every VM has offload features turned on except the OVS-DPDK VM. My understanding is that TSO segmentation is done on the sending side, so by the time a packet leaves the sender it has already been segmented to fit the path MTU; in the ovs-kernel VM / OVS-DPDK VM the packet size therefore already matches the MTU of the OVS port / DPDK port, so no TSO work needs to be done there, right?

Not sure if I understand the question correctly, but I'll try to clarify. I assume that all your VMs are located on the same physical host.
The Linux kernel is smart and will not segment packets until it is unavoidable. If all the interfaces on a packet's path support TSO, the kernel will never segment packets, and 64K packets will travel all the way from the iperf client to the iperf server.
In the case of OVS with DPDK, its VM doesn't support TSO, so packets will be split into MTU-sized segments before being sent to that VM.
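
A quick way to verify this is to check the offload flags on each interface along the path, e.g. (interface name here is just a placeholder):

  ethtool -k eth0 | grep segmentation-offload

If tcp-segmentation-offload is "on" on every hop, the kernel can keep passing the large 64K packets; at the first hop where it isn't, the packets get segmented down to the MTU.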

The key point here is the virtio interfaces you're using for the VMs.
virtio-net is a para-virtual network interface. This means the guest knows the interface is virtual, and it knows the host is able to receive packets larger than the MTU if offloading was negotiated.
At the same time, the host knows that the guest is able to receive packets larger than the MTU too. So nothing gets segmented.

In the case of OVS with DPDK, the host knows that the guest is not able to receive packets larger than the MTU and splits packets before sending them.

You can't send packets larger than the MTU to a physical network, but you can do that on a virtual network if it was negotiated.
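
The Linux side reflects this too: the kernel tracks a GSO size limit separately from the MTU, so on a virtio interface something like (interface name again a placeholder)

  ip -d link show dev eth0

will typically report gso_max_size 65536 alongside mtu 1500.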


Best regards, Ilya Maximets.

