[ovs-dev] OVS DPDK performance drop with multiple flows

Bodireddy, Bhanuprakash bhanuprakash.bodireddy at intel.com
Thu Aug 18 15:59:09 UTC 2016


Hello All,

I found a significant performance drop using OVS DPDK when testing with multiple IXIA streams and matching flow rules.
Example: for a packet stream with src ip 2.2.2.1 and dst ip 3.3.3.1, the corresponding flow rule is set up as below.
    $ ovs-ofctl add-flow br0 dl_type=0x0800,nw_src=2.2.2.1,actions=output:2

From the implementation, I see that after emc_lookup() the packets matching a flow are batched together and then
processed in 'batches' with packet_batch_execute().
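
For readers unfamiliar with that path, here is a much-simplified sketch of the fast path in lib/dpif-netdev.c
as I understand it. It assumes OVS's internal headers; packet_batch_append() is a hypothetical stand-in for
the real batching helpers, and key hashing, EMC misses and the upcall path are all elided.

    /* Much-simplified sketch; not the actual OVS code. */
    static void
    emc_processing_sketch(struct dp_netdev_pmd_thread *pmd,
                          struct dp_packet **packets, int cnt,
                          struct packet_batch *batches, size_t *n_batches)
    {
        for (int i = 0; i < cnt; i++) {
            struct netdev_flow_key key;

            miniflow_extract(packets[i], &key.mf);   /* Build the EMC key. */
            struct dp_netdev_flow *flow = emc_lookup(&pmd->flow_cache, &key);

            /* Packets hitting the same flow are appended to that flow's
             * batch (hypothetical helper; misses go to dpcls/upcall). */
            packet_batch_append(batches, n_batches, flow, packets[i]);
        }

        /* Each per-flow batch is executed in one call; an output action
         * ends in netdev_send() for the whole batch. */
        for (size_t i = 0; i < *n_batches; i++) {
            packet_batch_execute(&batches[i], pmd);
        }
    }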

In OVS 2.6, during testing I observed that netdev_send() gets called with only a few packets in a batch; it
internally invokes rte_eth_tx_burst(), which incurs an expensive MMIO write. I was told that OVS 2.5 has an
intermediate queue feature that queues and bursts as many packets as it can to amortize the cost of the MMIO
write. When tested on OVS 2.5, the performance drop is still noticeable in spite of the intermediate queue
implementation, for the reason below.
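
For clarity, here is a minimal, self-contained sketch of the amortization idea (illustrative only, not the
OVS 2.5 code itself): a software queue accumulates packets across many small batches and issues a single
rte_eth_tx_burst(), and therefore a single doorbell MMIO write, only when it fills. The tx_queue struct and
INTERMEDIATE_QUEUE_LEN are hypothetical names.

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define INTERMEDIATE_QUEUE_LEN 256          /* Hypothetical depth. */

    struct tx_queue {                           /* Hypothetical software txq. */
        struct rte_mbuf *pkts[INTERMEDIATE_QUEUE_LEN];
        int count;
    };

    /* One rte_eth_tx_burst() call (one doorbell write) per flush, no matter
     * how many small batches were queued before it. */
    static void
    txq_flush(struct tx_queue *q, uint16_t port_id, uint16_t queue_id)
    {
        int sent = 0;

        while (sent < q->count) {
            sent += rte_eth_tx_burst(port_id, queue_id,
                                     q->pkts + sent, q->count - sent);
            /* A real implementation would bound retries and drop on error. */
        }
        q->count = 0;
    }

    /* Queue one batch; flush only when the queue is full, amortizing the
     * MMIO cost over many batches. */
    static void
    txq_queue_pkts(struct tx_queue *q, uint16_t port_id, uint16_t queue_id,
                   struct rte_mbuf **batch, int cnt)
    {
        for (int i = 0; i < cnt; i++) {
            if (q->count == INTERMEDIATE_QUEUE_LEN) {
                txq_flush(q, port_id, queue_id);
            }
            q->pkts[q->count++] = batch[i];
        }
    }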

With a single queue in use, txq_needs_locking is 'false' and flush_tx is always 'true'. With flush_tx always
'true', the intermediate queue flushes the packets of each batch using dpdk_queue_flush__() instead of
queueing them, so the behavior is the same as in OVS 2.6. This is probably not the intent behind the original
intermediate queue logic in dpdk_queue_pkts(); see the abbreviated enqueue path below.
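
To make this concrete, below is an abbreviated version of the OVS 2.5 enqueue path as I read it in
lib/netdev-dpdk.c (the drain-timeout logic is omitted). Note how flush_tx short-circuits any accumulation:

    /* Abbreviated from OVS 2.5 lib/netdev-dpdk.c; timeout handling omitted. */
    static void
    dpdk_queue_pkts(struct netdev_dpdk *dev, int qid,
                    struct rte_mbuf **pkts, int cnt)
    {
        struct dpdk_tx_queue *txq = &dev->tx_q[qid];
        int i = 0;

        while (i < cnt) {
            int freeslots = MAX_TX_QUEUE_LEN - txq->count;
            int tocopy = MIN(freeslots, cnt - i);

            memcpy(&txq->burst_pkts[txq->count], &pkts[i],
                   tocopy * sizeof(struct rte_mbuf *));
            txq->count += tocopy;
            i += tocopy;

            /* With flush_tx true the queue is drained after every batch, so
             * nothing accumulates and the MMIO write happens per batch,
             * exactly as in OVS 2.6. */
            if (txq->count == MAX_TX_QUEUE_LEN || txq->flush_tx) {
                dpdk_queue_flush__(dev, qid);
            }
        }
    }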

Appreciate your comments on this.

Regards,
Bhanu Prakash.



