[ovs-discuss] Re: [ovs-dev] OVS performance issue: why small udp packet pps performance between VMs is highly related with number of ovs ports and number of VMs?

Yi Yang (杨燚) - Cloud Service Group yangyi01 at inspur.com
Thu Feb 13 13:45:15 UTC 2020


Flavio, this is an OpenStack environment; all the flows are added by Neutron. The NORMAL action is the flow that is present before Neutron adds anything; it is the default flow OVS itself installs.
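
For reference, before Neutron installs anything, a freshly created OVS bridge carries only this single default flow. A minimal check (assuming the Neutron integration bridge is named br-int as in a stock deployment; counters are illustrative):

    # dump the OpenFlow table of the integration bridge
    $ ovs-ofctl dump-flows br-int
    cookie=0x0, duration=..., table=0, n_packets=..., n_bytes=..., priority=0 actions=NORMAL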

-----Original Message-----
From: Flavio Leitner [mailto:fbl at sysclose.org]
Sent: February 13, 2020 19:48
To: Yi Yang (杨燚) - Cloud Service Group <yangyi01 at inspur.com>
Cc: ovs-discuss at openvswitch.org; ovs-dev at openvswitch.org; i.maximets at ovn.org
Subject: Re: [ovs-dev] OVS performance issue: why small udp packet pps performance between VMs is highly related with number of ovs ports and number of VMs?

On Thu, Feb 13, 2020 at 09:18:38AM +0000, Yi Yang (杨燚)-云服务集团 wrote:
> Hi, all
> 
> We find OVS has a serious performance issue. We launch only one VM on 
> each of two compute nodes and run an iperf small-UDP-packet pps test 
> between these two VMs; we see about 180000 pps (packets per second, -l 16). But:
> 
> 1) If we add 100 veth ports to the br-int bridge on each node, the pps performance drops to about 50000 pps.
> 2) If we launch one more VM on every compute node, but don't run any 
> workload in it, the pps performance drops to about 90000 pps. (Note: none 
> of the above veth ports are present in this test.)
> 3) If we launch two more VMs on every compute node (3 VMs per compute 
> node in total), but don't run any workload in them, the pps performance 
> drops to about 50000 pps. (Note: none of the above veth ports are present in this test.)
> 
> Can anybody help explain why this is so? Is there any known way to 
> optimize this? I really think OVS performance is bad (we can draw 
> such a conclusion from our test results, at least); I don't want to 
> defame OVS ☺
> 
> BTW, we are using the OVS kernel datapath and vhost. We can see that every port has a vhost kernel thread; it runs at 100% CPU utilization while we run iperf in the VM, but for the idle VMs the corresponding vhost threads still show about 30% CPU utilization, which I don't understand.
> 
> In addition, we find UDP performance is also very bad for small UDP packets on the physical NIC, but it can reach 260000 pps for -l 80, which is enough to cover the VXLAN header (8 bytes) + inner Ethernet header (14) + IP/UDP headers (28) + a 16-byte payload = 66 bytes. If we account for the overhead the OVS bridge introduces, pps performance between VMs should still be able to reach at least 200000 pps; the other VMs and ports should not hurt it this much because they are idle, with no workload at all.

What do you have in the flow table?  It sounds like the traffic is being broadcast to all ports. Check the FDB to see if OvS is learning the MAC addresses.
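
A quick way to check both (a sketch assuming the integration bridge is br-int; adjust the bridge name to your setup):

    # learned MAC addresses (FDB) per port/VLAN
    $ ovs-appctl fdb/show br-int

    # installed OpenFlow rules and their packet counters
    $ ovs-ofctl dump-flows br-int

    # datapath flow cache; flooded traffic shows up as flows whose
    # actions list many output ports instead of a single one
    $ ovs-appctl dpctl/dump-flows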

It's been a while since I last ran performance tests with the kernel datapath, but it should be no different from a Linux bridge when there is just the NORMAL action in the flow table.
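
For a baseline comparison, you could rebuild the same topology on a plain Linux bridge and rerun the same iperf test. A rough sketch with made-up interface names (tap-vm1/tap-vm2 stand in for the VMs' real tap devices):

    # replace br-int with a Linux bridge for the baseline run
    $ ip link add name br-test type bridge
    $ ip link set br-test up
    $ ip link set tap-vm1 master br-test
    $ ip link set tap-vm2 master br-test
    # then rerun from inside the VM, e.g.: iperf -u -c <peer VM IP> -l 16 -b 1G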

--
fbl