[ovs-discuss] Packet drops during non-exhaustive flood with OVS and 1.8.0

Andrey Korolyov andrey at xdel.ru
Thu Jan 22 17:11:56 UTC 2015


On Wed, Jan 21, 2015 at 8:02 PM, Andrey Korolyov <andrey at xdel.ru> wrote:
> Hello,
>
> I observed that the latest OVS with dpdk-1.8.0 and igb_uio starts to
> drop packets at a lower packet rate than a regular Linux ixgbe 10G
> interface does; the setup follows:
>
> receiver/forwarder:
> - 8-core/2-socket system with E5-2603 v2, cores 1-3 are given to OVS exclusively
> - n-dpdk-rxqs=6, rx scattering is not enabled
> - x520 da
> - 3.10/3.18 host kernel
> - during 'legacy mode' testing, queue interrupts are spread across all cores (a configuration sketch follows below)
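>
> For completeness, a minimal sketch of how such a layout can be
> configured; the interface name eth2, the CPU masks and the exact OVS
> option names are assumptions and may differ between builds:
>
> # DPDK case: 6 rx queues, PMD work pinned to cores 1-3 (mask 0xe)
> ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=6
> ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xe
> # 'legacy mode' case: spread the ixgbe queue IRQs over all 8 cores
> for irq in $(awk -F: '/eth2/ {gsub(/ /, "", $1); print $1}' /proc/interrupts); do
>     echo ff > /proc/irq/$irq/smp_affinity
> done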
>
> sender:
> - 16-core E5-2630, netmap framework used for packet generation
> - pkt-gen -f tx -i eth2 -s 10.6.9.0-10.6.9.255 -d
> 10.6.10.0-10.6.10.255 -S 90:e2:ba:84:19:a0 -D 90:e2:ba:85:06:07 -R
> 11000000, which results in an 11 Mpps flood of 60-byte packets; the
> rate stays constant during the test.
>
> OVS contains only a single drop rule at the moment:
> ovs-ofctl add-flow br0 in_port=1,actions=DROP
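>
> The rule and its packet counter can be checked the usual way, e.g.:
> ovs-ofctl dump-flows br0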
>
> The packet generator was run for tens of seconds against both the
> Linux stack and the OVS+DPDK case. The first shows a zero drop/error
> count on the interface, and the pktgen and host interface counters
> match (meaning that none of the generated packets are unaccounted
> for).
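>
> On the Linux stack side the drop/error counters can be read with
> generic tools; eth2 here stands in for the actual receiving port:
> ip -s link show eth2     # packets/dropped/errors as seen by the stack
> ethtool -S eth2          # per-queue and NIC-level error counters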
>
> I selected a rate of about 11 Mpps because OVS starts to drop packets
> around this value; after the same short test the interface statistics
> show the following:
>
> statistics          : {collisions=0, rx_bytes=22003928768,
> rx_crc_err=0, rx_dropped=0, rx_errors=10694693, rx_frame_err=0,
> rx_over_err=0, rx_packets=343811387, tx_bytes=0, tx_dropped=0,
> tx_errors=0, tx_packets=0}
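>
> The block above is the statistics column of the OVS Interface table;
> it can be read with something like the following, where dpdk0 is an
> assumed name for the DPDK port:
> ovs-vsctl get Interface dpdk0 statistics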
>
> pktgen side:
> Sent 354506080 packets, 60 bytes each, in 32.23 seconds.
> Speed: 11.00 Mpps Bandwidth: 5.28 Gbps (raw 7.39 Gbps)
>
> If the rate is increased to 13-14 Mpps, the ratio of errors to total
> packets rises to about one third. Other than that, OVS on DPDK shows
> perfect results and I do not want to reject this solution because of
> exhaustive behavior like the one described, so I'm open to any
> suggestions to improve the situation (except using the 1.7 branch :) ).

At a glance it looks like there is a problem with the PMD threads, as
they start to consume about five thousandths of sys% on their dedicated
cores during the flood, though in theory they should not. Any ideas for
debugging/improving this situation are very welcome!
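
The per-thread CPU usage of the PMD cores can be watched with generic
Linux tools, nothing OVS-specific, for example:

top -H -p $(pidof ovs-vswitchd)        # per-thread view of vswitchd
pidstat -t -p $(pidof ovs-vswitchd) 1  # per-thread %usr/%sys at 1s intervals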


