[ovs-dev] Conntrack performance drop in OVS 2.8

Darrell Ball dlu998 at gmail.com
Mon Jul 16 15:46:04 UTC 2018


On Fri, Jun 29, 2018 at 2:29 AM, Nitin Katiyar <nitin.katiyar at ericsson.com>
wrote:

> Hi,
> The performance of OVS 2.8 (with DPDK 17.05.02) with a conntrack
> configuration has dropped significantly (especially for the single-flow
> case) compared to OVS 2.6 (with DPDK 16.11.4).
>
> Following is the comparison between 2.6.2 and 2.8.2.
> PKT Size   # of Flows   MPPS (OVS 2.6)   MPPS (OVS 2.8)   % Drop
> ==================================================================
> 64         1            3.37             1.86             44.71
> 128        1            3.09             1.74             43.52
> 256        1            2.66             1.15             56.84
> 64         10000        1.73             1.51             13.03
> 128        10000        1.68             1.46             12.65
> 256        10000        1.55             1.34             13.60
>
>
> OVS is configured with 2 DPDK ports (10G, Intel 82599) bonded in
> bond-slb mode and 1 vhost-user (VHU) port. The VM runs testpmd, which
> echoes the UDP packets back.
>
> I used the following OF rules:
>
> ovs-ofctl add-flow br-int "table=0,priority=10,ct_state=-trk,udp,actions=ct(table=1)"
> ovs-ofctl add-flow br-int "table=1,priority=1000,ct_state=+new+trk,udp,in_port=10,actions=strip_vlan,ct(commit),output:101"
>
> ovs-ofctl add-flow br-int "table=1,priority=900,ct_state=+est+trk,in_port=10,actions=strip_vlan,output:101"
> ovs-ofctl add-flow br-int "table=1,priority=900,ct_state=+est+trk,in_port=101,actions=push_vlan:0x8100,mod_vlan_vid:4044,output:10"
>
> There are two bridges configured in OVS, connected through patch ports.
> Port 10 above is the patch port and port 101 is the VHU port.
>
> The generator (running on a different server) is sending UDP traffic,
> varying the UDP source port.
>
> Has anyone else experienced similar behavior?
>
> Regards,
> Nitin
>
>
>

Using a slightly older NIC, to roughly match what you are using:
X540 / Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz.
To isolate some of the many variables in your test, I ran some phy-phy
tests with 64-byte packets, bidirectional, with matching reverse flows.
I added the obvious baseline comparisons.

Single-flow tests are mostly useful for checking flow-scale-agnostic
effects; practical measurements start in the thousands of flows.


(xc = cross-connect, ct = conntrack; "Master" is commit 61b1c7acb9a2;
percentages are deltas versus the corresponding 2.6 column)

Flows   2.6 xc    2.6 xc+ct   2.8 xc            2.8 xc+ct          Master xc          Master xc+ct
1       7551585   3243008     6912447 (-8.5%)   2812368 (-13.3%)   5988160 (-20.8%)   2582560 (-20.4%)
1000    6069591   2002912     5430869 (-10.4%)  1931200 (-3.5%)    5181312 (-14.5%)   1936934 (-3.2%)
2000    5914288   1879921     5130196 (-13.2%)  1841352 (-2.1%)    4896552 (-17.1%)   1747552 (-7.1%)
10000   4415296   1357280     4471424 (+1.4%)   1448480 (+6.7%)    3938024 (-10.8%)   1350431 (-0.5%)
20000   3863738   1304448     4163744 (+7.7%)   1376320 (+5.5%)    3709152 (-3.9%)    1282400 (-1.7%)
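For reference, the parenthesized deltas above appear to be relative
throughput change versus the corresponding 2.6 column; a quick sketch of
that arithmetic (the numbers are taken from the first row of the table):

```python
def pct_change(baseline_pps: float, measured_pps: float) -> float:
    """Relative throughput change versus a baseline, in percent.

    Negative means a drop relative to the baseline.
    """
    return (measured_pps - baseline_pps) / baseline_pps * 100.0

# First row above: 1 flow, cross-connect, no conntrack.
# OVS 2.8 versus the 2.6 baseline:
delta = pct_change(7551585, 6912447)
print(f"{delta:+.1f}%")  # -> -8.5%
```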



> I have a simple configuration of the SUT, with one VM running testpmd
> echoing traffic on top of OVS.
> _______________________________________________
> dev mailing list
> dev at openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
>

