[ovs-dev] [PATCH v9 0/7] OVS-DPDK flow offload with rte_flow

Flavio Leitner fbl at sysclose.org
Fri May 18 22:46:56 UTC 2018



Hello,

I looked at the patchset (v9) and found no obvious problems, but I am
missing some instrumentation to understand what is going on, for
example how many flows are offloaded, how many per second, and so on.
We can definitely work on that as a follow-up.
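
As a rough illustration of the kind of counter I have in mind (the
counter name and the exact insertion point in the offload path below
are only assumptions on my part, not something in the patchset), the
existing coverage infrastructure would probably be enough:

   /* Hypothetical sketch: count successful rte_flow offloads with the
    * coverage counters from lib/coverage.h; the values then show up
    * in "ovs-appctl coverage/show". */
   #include "coverage.h"

   COVERAGE_DEFINE(dpdk_hw_offload_add);

   /* ... called from the path that successfully installs a rule ... */
   static void
   note_hw_offload_added(void)
   {
       COVERAGE_INC(dpdk_hw_offload_add);
   }

That would cover the "how many flows" part; per-second rates could be
derived from the same counter.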

I have an MLX5 (16.20.1010) connected to a traffic generator.
The results are unexpected and I don't know why yet; I will continue
looking into it next week.

The flow is pretty simple; it just echoes the packet back out the
ingress port:

   ovs-ofctl add-flow ovsbr0 in_port=10,action=output:in_port
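
For completeness, the "HW offloading enabled/disabled" results below
refer to the usual other_config knob (changing it typically requires
restarting ovs-vswitchd):

   ovs-vsctl set Open_vSwitch . other_config:hw-offload=true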

This is the result without HW offloading enabled:
Partial:  14619675.00 pps  7485273570.00 bps
Partial:  14652351.00 pps  7502003940.00 bps
Partial:  14655548.00 pps  7503640570.00 bps
Partial:  14679556.00 pps  7515932630.00 bps
Partial:  14681188.00 pps  7516768670.00 bps
Partial:  14597427.00 pps  7473882390.00 bps
Partial:  14712617.00 pps  7532860090.00 bps

pmd thread numa_id 0 core_id 2:
        packets received: 53859055
        packet recirculations: 0
        avg. datapath passes per packet: 1.00
        emc hits: 53859055
        megaflow hits: 0
        avg. subtable lookups per megaflow hit: 0.00
        miss with success upcall: 0
        miss with failed upcall: 0
        avg. packets per output batch: 28.20
        idle cycles: 0 (0.00%)
        processing cycles: 12499399115 (100.00%)
        avg cycles per packet: 232.08 (12499399115/53859055)
        avg processing cycles per packet: 232.08 (12499399115/53859055)


Based on the stats, it seems 14.7 Mpps is the maximum a single core
can do.
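
(As a quick sanity check, assuming the PMD core runs at roughly
3.4 GHz: 232 cycles/packet * 14.7 Mpps is about 3.4e9 cycles/s, so the
core is fully busy, which matches the 0% idle cycles above.)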

This is the result with HW offloading enabled:

Partial:  10713500.00 pps  5485312330.00 bps
Partial:  10672185.00 pps  5464158240.00 bps
Partial:  10747756.00 pps  5502850960.00 bps
Partial:  10713404.00 pps  5485267400.00 bps


pmd thread numa_id 0 core_id 2:
        packets received: 25902718
        packet recirculations: 0
        avg. datapath passes per packet: 1.00
        emc hits: 25902697
        megaflow hits: 0
        avg. subtable lookups per megaflow hit: 0.00
        miss with success upcall: 0
        miss with failed upcall: 0
        avg. packets per output batch: 28.11
        idle cycles: 0 (0.00%)
        processing cycles: 12138284463 (100.00%)
        avg cycles per packet: 468.61 (12138284463/25902718)
        avg processing cycles per packet: 468.61 (12138284463/25902718)

2018-05-18T22:34:57.865Z|00001|dpif_netdev(dp_netdev_flow_8)|WARN|Mark id for ufid caaf720e-5dfe-4879-adb9-155bd92f9b40 was not found

2018-05-18T22:35:02.920Z|00002|dpif_netdev(dp_netdev_flow_8)|WARN|Mark id for ufid c75ae6c5-1d14-40ce-b4c7-6d5001a4584c was not found

2018-05-18T22:35:05.160Z|00105|memory|INFO|109700 kB peak resident set size after 10.4 seconds

2018-05-18T22:35:05.160Z|00106|memory|INFO|handlers:1 ports:3 revalidators:1 rules:5 udpif keys:2

2018-05-18T22:35:21.910Z|00003|dpif_netdev(dp_netdev_flow_8)|WARN|Mark id for ufid 73bdddc9-b12f-4007-9f12-b66b4bc1893e was not found

2018-05-18T22:35:21.924Z|00004|netdev_dpdk(dp_netdev_flow_8)|ERR|rte flow creat error: 2 : message : flow rule creation failure


It looks like offloading didn't work, and now the throughput is lower,
which is not expected either.

I plan to keep digging into this, and I would appreciate any ideas
about what is going on.

Thanks,
fbl


