[ovs-dev] Re: [PATCH v7] Use TPACKET_V3 to accelerate veth for userspace datapath

Yi Yang yangyi01 at inspur.com
Wed Mar 18 13:22:22 UTC 2020


Ilya, the raw socket for interfaces whose type is "system" has been set to
non-blocking mode, so can you explain which syscall would lead to sleeping?
Yes, a pmd thread consumes CPU even when it has nothing to do, but all the
type=dpdk ports are already handled by pmd threads; here we just make system
interfaces look like DPDK interfaces. I didn't see any problem in my tests;
it would help if you could tell me what would cause a problem and how I can
reproduce it. By the way, type=tap/internal interfaces are still handled by
the ovs-vswitchd thread.
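
To be concrete about the non-blocking point, here is a minimal,
self-contained sketch (not the actual OVS code) of putting a raw AF_PACKET
socket into non-blocking mode, so recv()/sendto() return EAGAIN instead of
sleeping in the kernel:

    #include <arpa/inet.h>
    #include <fcntl.h>
    #include <linux/if_ether.h>
    #include <linux/if_packet.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Open a raw packet socket and switch it to non-blocking mode so a
     * polling thread never sleeps inside recv()/sendto(). */
    int open_nonblocking_raw_socket(void)
    {
        int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
        if (fd < 0) {
            return -1;
        }
        int flags = fcntl(fd, F_GETFL, 0);
        if (flags < 0 || fcntl(fd, F_SETFL, flags | O_NONBLOCK) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }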

In addition, it is only a one-line change: ".is_pmd = true,". If there is
any other concern, ".is_pmd = false," will keep these interfaces in the
ovs-vswitchd thread, and we can change the non-thread-safe parts to support
pmd.
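
For context, the flag in question lives in the netdev class definition;
heavily abbreviated, the one-line change amounts to something like the
following (all other initializers of the "system" class in lib/netdev-linux.c
are omitted here):

    /* Abbreviated sketch of the "system" netdev class registration; only
     * .is_pmd is the line under discussion, everything else is elided. */
    const struct netdev_class netdev_linux_class = {
        .type = "system",
        .is_pmd = true,   /* v7: let pmd threads poll "system" ports. */
        /* ... remaining callbacks and fields unchanged ... */
    };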

-----Original Message-----
From: dev [mailto:ovs-dev-bounces at openvswitch.org] On Behalf Of Ilya Maximets
Sent: March 18, 2020 19:45
To: yang_y_yi at 163.com; ovs-dev at openvswitch.org
Cc: i.maximets at ovn.org
Subject: Re: [ovs-dev] [PATCH v7] Use TPACKET_V3 to accelerate veth for
userspace datapath

On 3/18/20 10:02 AM, yang_y_yi at 163.com wrote:
> From: Yi Yang <yangyi01 at inspur.com>
> 
> We can avoid high system call overhead by using TPACKET_V3 together
> with DPDK-like polling to receive and send packets (note: send still
> needs to call sendto to trigger the final packet transmission).
> 
> TPACKET_V3 has been supported since Linux kernel 3.10, so all the
> Linux kernels that current OVS supports can run TPACKET_V3 without
> any problem.
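
For readers not familiar with the API, here is a minimal, self-contained
sketch of enabling a TPACKET_V3 RX ring on an AF_PACKET socket (block/frame
sizes and the retire timeout are illustrative, not the values the patch
uses):

    #include <linux/if_packet.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/socket.h>

    /* Configure AF_PACKET socket 'fd' to use a memory-mapped TPACKET_V3
     * RX ring; on success '*ring' points to the mapped blocks. */
    static int setup_tpacket_v3_rx(int fd, void **ring)
    {
        int ver = TPACKET_V3;
        if (setsockopt(fd, SOL_PACKET, PACKET_VERSION, &ver, sizeof ver)) {
            return -1;
        }

        struct tpacket_req3 req;
        memset(&req, 0, sizeof req);
        req.tp_block_size = 1 << 21;            /* 2 MB per block. */
        req.tp_frame_size = 1 << 11;            /* 2 KB per frame. */
        req.tp_block_nr = 16;
        req.tp_frame_nr = (req.tp_block_size / req.tp_frame_size)
                          * req.tp_block_nr;
        req.tp_retire_blk_tov = 10;             /* Retire a block after 10 ms. */

        if (setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof req)) {
            return -1;
        }

        *ring = mmap(NULL, (size_t) req.tp_block_size * req.tp_block_nr,
                     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        return *ring == MAP_FAILED ? -1 : 0;
    }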
> 
> With TPACKET_V3 I can see about a 50% performance improvement for
> veth compared to the previous recvmmsg optimization: about 2.21 Gbps
> versus 1.47 Gbps before.
> 
> After is_pmd is set to true, performance improves much more, by
> about 180%.
> 
> TPACKET_V3 can support TSO, but its TSO performance isn't good
> because of a TPACKET_V3 kernel implementation issue, so the code
> falls back to recvmmsg when userspace-tso-enable is set to true.
> When userspace-tso-enable is set to false, TPACKET_V3 performs
> better than recvmmsg, so it is used only in that case.
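
For reference, the knob mentioned above is the global other_config option,
e.g.:

    ovs-vsctl set Open_vSwitch . other_config:userspace-tso-enable=true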
> 
> Note: how much performance improves depends on your platform; some
> platforms see a huge improvement, on others it is less noticeable.
> But if is_pmd is set to true, you can see a big performance
> improvement, provided the tested veth interfaces are attached to
> different pmd threads.
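
One way to make sure the two test veth interfaces land on different pmd
threads is explicit rx-queue affinity (interface names and core IDs below
are only examples, and this assumes the usual pmd-rxq-affinity option
applies once such ports are polled by pmd threads):

    ovs-vsctl set Interface veth0 other_config:pmd-rxq-affinity="0:1"
    ovs-vsctl set Interface veth1 other_config:pmd-rxq-affinity="0:2"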
> 
> Signed-off-by: Yi Yang <yangyi01 at inspur.com>
> Co-authored-by: William Tu <u9012063 at gmail.com>
> Signed-off-by: William Tu <u9012063 at gmail.com>
> ---
>  acinclude.m4                     |  12 ++
>  configure.ac                     |   1 +
>  include/sparse/linux/if_packet.h | 111 +++++++++++
>  lib/dp-packet.c                  |  18 ++
>  lib/dp-packet.h                  |   9 +
>  lib/netdev-linux-private.h       |  26 +++
>  lib/netdev-linux.c               | 419 +++++++++++++++++++++++++++++++++++++--
>  7 files changed, 579 insertions(+), 17 deletions(-)
> 
> Changelog:
> - v6->v7
>  * is_pmd is set to true for system interfaces

This cannot be done that simply and should not be done unconditionally
anyway.  netdev-linux is not thread safe in many ways; at the least, stats
accounting will be messed up.  Second, this change will harm all the usual
DPDK-based setups, since PMD threads will start making a lot of syscalls and
sleeping inside the kernel, missing packets from the fast DPDK interfaces.
Third, this change will fire up at least one PMD thread constantly consuming
100% of a CPU even on setups where it's not needed.
So, this version is definitely not acceptable.

Best regards, Ilya Maximets.
_______________________________________________
dev mailing list
dev at openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev

