[ovs-discuss] OVS DPDK performance with SELECT group

Gregory Rose gvrose8192 at gmail.com
Tue Nov 26 23:36:43 UTC 2019


On 11/26/2019 7:41 AM, Rami Neiman wrote:
>
> Hello,
>
> I am using OVS DPDK 2.9.2 with the TRex traffic generator to simply 
> forward the received traffic back to the traffic generator (i.e. 
> ingress0->egress0, egress0->ingress0) over a 2-port 10G NIC.
>
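(The baseline flows aren't quoted here; a minimal sketch that would
implement this port-to-port forwarding, assuming the bridge is named
br0, is:

  ovs-ofctl add-flow br0 in_port=ingress0,actions=output:egress0
  ovs-ofctl add-flow br0 in_port=egress0,actions=output:ingress0

The actual rules evidently live in table 5 and match on metadata, per
the flows quoted further down.)
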
> With this setup, the OVS throughput matches the traffic generator 
> (all packets sent by the TG are received), and around 2.5 Mpps is 
> forwarded without problems (we can probably go even higher, so that 
> is not the limit).
>
> Our next goal is to also mirror the TG traffic over two additional 
> 10G ports to a monitoring device, using a SELECT group to load 
> balance the mirrored traffic. We add the group as follows:
>

Putting all that on a single NIC might be overwhelming the PCIe 
bandwidth.  Something to check.
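
One quick way to check is to read the negotiated link width/speed out
of lspci (the 04:00.0 address here is just a placeholder for the NIC's
PCI address):

  lspci -vv -s 04:00.0 | grep -E 'LnkCap|LnkSta'

Back of the envelope: with both directions forwarded *and* mirrored,
the host transmits up to 40 Gb/s across those four ports at line rate.
A Gen3 x8 slot (~63 Gb/s usable per direction) has headroom; a Gen2 x8
slot (~32 Gb/s) does not.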

- Greg

> ovs-ofctl -O OpenFlow13 add-group br0 
> group_id=5,type=select,bucket=output:mirror0,bucket=output:mirror1
>
> ovs-ofctl -O OpenFlow13 add-flow br0 "table=5, 
> metadata=0,in_port=egress0,actions=group:5,output:ingress0"
>
> ovs-ofctl -O OpenFlow13 add-flow br0 "table=5, 
> metadata=0,in_port=ingress0,actions=group:5,output:egress0"
>
> mirror0 and mirror1 are our mirror ports. The mirroring works as 
> expected; however, the OVS throughput drops to less than 500 Kpps 
> (as reported by the traffic generator).
>
> ingress0 and egress0 (i.e. the ports that receive traffic) show 
> packets being dropped in large numbers. Adding more PMD cores and 
> distributing Rx queues among them has no effect. Changing the hash 
> fields of the SELECT group has no effect either.
>
> My question is: is there a way to give it more cores/memory, or 
> otherwise influence the hash calculation and SELECT group action, to 
> make it more performant? Less than 500 Kpps seems like a very low 
> number.
>
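One knob worth trying here: OVS select groups take a selection_method
property.  The default method hashes in the slow path during flow
translation, which forces the resulting megaflows to exact-match the
hashed fields -- with many concurrent flows that tends to show up as
upcall storms, and the very high "miss with failed upcall" counters
below would be consistent with that.  selection_method=dp_hash instead
reuses the RSS hash the NIC already computed (newer OVS releases made
dp_hash the default for select groups).  A sketch, assuming the group
is re-added with OpenFlow 1.5, which the property requires:

  ovs-ofctl -O OpenFlow15 del-groups br0 group_id=5
  ovs-ofctl -O OpenFlow15 add-group br0 \
      'group_id=5,type=select,selection_method=dp_hash,bucket=output:mirror0,bucket=output:mirror1'
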
> Just in case, here’s the output of the most important statistics commands:
>
> ovs-vsctl --column statistics list interface egress0
>
> statistics : {flow_director_filter_add_errors=0, 
> flow_director_filter_remove_errors=0, mac_local_errors=17, 
> mac_remote_errors=1, "rx_128_to_255_packets"=3936120, 
> "rx_1_to_64_packets"=14561687, "rx_256_to_511_packets"=1624884, 
> "rx_512_to_1023_packets"=2180436, "rx_65_to_127_packets"=21519189, 
> rx_broadcast_packets=17, rx_bytes=23487692367, rx_crc_errors=0, 
> rx_dropped=23759559, rx_errors=0, rx_fcoe_crc_errors=0, 
> rx_fcoe_dropped=0, rx_fcoe_mbuf_allocation_errors=0, 
> rx_fragment_errors=0, rx_illegal_byte_errors=0, rx_jabber_errors=0, 
> rx_length_errors=0, rx_mac_short_packet_dropped=0, 
> rx_management_dropped=0, rx_management_packets=0, 
> rx_mbuf_allocation_errors=0, rx_missed_errors=23759559, 
> rx_oversize_errors=0, rx_packets=39363905, 
> "rx_priority0_dropped"=23759559, 
> "rx_priority0_mbuf_allocation_errors"=0, "rx_priority1_dropped"=0, 
> "rx_priority1_mbuf_allocation_errors"=0, "rx_priority2_dropped"=0, 
> "rx_priority2_mbuf_allocation_errors"=0, "rx_priority3_dropped"=0, 
> "rx_priority3_mbuf_allocation_errors"=0, "rx_priority4_dropped"=0, 
> "rx_priority4_mbuf_allocation_errors"=0, "rx_priority5_dropped"=0, 
> "rx_priority5_mbuf_allocation_errors"=0, "rx_priority6_dropped"=0, 
> "rx_priority6_mbuf_allocation_errors"=0, "rx_priority7_dropped"=0, 
> "rx_priority7_mbuf_allocation_errors"=0, rx_undersize_errors=0, 
> "tx_128_to_255_packets"=1549647, "tx_1_to_64_packets"=10995089, 
> "tx_256_to_511_packets"=7309468, "tx_512_to_1023_packets"=739062, 
> "tx_65_to_127_packets"=7837579, tx_broadcast_packets=6, 
> tx_bytes=28481732482, tx_dropped=0, tx_errors=0, 
> tx_management_packets=0, tx_multicast_packets=0, tx_packets=43936201}
>
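A rough read of the counters above: rx_missed_errors=23759559 against
rx_packets=39363905 means roughly 23.8M / (39.4M + 23.8M) ~ 38% of the
offered packets were dropped by the NIC itself because its Rx rings
were not drained fast enough.  ingress0 below shows the same pattern
at roughly 34%.
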
> ovs-vsctl --column statistics list interface ingress0
>
> statistics : {flow_director_filter_add_errors=0, 
> flow_director_filter_remove_errors=0, mac_local_errors=37, 
> mac_remote_errors=1, "rx_128_to_255_packets"=2778420, 
> "rx_1_to_64_packets"=18198197, "rx_256_to_511_packets"=13168041, 
> "rx_512_to_1023_packets"=886524, "rx_65_to_127_packets"=14853438, 
> rx_broadcast_packets=17, rx_bytes=28481734408, rx_crc_errors=0, 
> rx_dropped=22718779, rx_errors=0, rx_fcoe_crc_errors=0, 
> rx_fcoe_dropped=0, rx_fcoe_mbuf_allocation_errors=0, 
> rx_fragment_errors=0, rx_illegal_byte_errors=0, rx_jabber_errors=0, 
> rx_length_errors=0, rx_mac_short_packet_dropped=0, 
> rx_management_dropped=0, rx_management_packets=0, 
> rx_mbuf_allocation_errors=0, rx_missed_errors=22718779, 
> rx_oversize_errors=0, rx_packets=43936225, 
> "rx_priority0_dropped"=22718779, 
> "rx_priority0_mbuf_allocation_errors"=0, "rx_priority1_dropped"=0, 
> "rx_priority1_mbuf_allocation_errors"=0, "rx_priority2_dropped"=0, 
> "rx_priority2_mbuf_allocation_errors"=0, "rx_priority3_dropped"=0, 
> "rx_priority3_mbuf_allocation_errors"=0, "rx_priority4_dropped"=0, 
> "rx_priority4_mbuf_allocation_errors"=0, "rx_priority5_dropped"=0, 
> "rx_priority5_mbuf_allocation_errors"=0, "rx_priority6_dropped"=0, 
> "rx_priority6_mbuf_allocation_errors"=0, "rx_priority7_dropped"=0, 
> "rx_priority7_mbuf_allocation_errors"=0, rx_undersize_errors=0, 
> "tx_128_to_255_packets"=1793095, "tx_1_to_64_packets"=7027091, 
> "tx_256_to_511_packets"=783763, "tx_512_to_1023_packets"=1133960, 
> "tx_65_to_127_packets"=14219400, tx_broadcast_packets=6, 
> tx_bytes=23487691707, tx_dropped=0, tx_errors=0, 
> tx_management_packets=0, tx_multicast_packets=0, tx_packets=39363894}
>
> ovs-appctl dpif-netdev/pmd-rxq-show
>
> pmd thread numa_id 0 core_id 2:
>         isolated : true
>         port: egress0   queue-id: 0  pmd usage: 0 %
>         port: ingress0  queue-id: 0  pmd usage: 0 %
>         port: mirror0   queue-id: 0  pmd usage: 0 %
> pmd thread numa_id 0 core_id 3:
>         isolated : true
>         port: egress0   queue-id: 1  pmd usage: 0 %
>         port: ingress0  queue-id: 1  pmd usage: 0 %
>         port: mirror1   queue-id: 0  pmd usage: 0 %
> pmd thread numa_id 0 core_id 4:
>         isolated : true
>         port: egress0   queue-id: 2  pmd usage: 0 %
>         port: ingress0  queue-id: 2  pmd usage: 0 %
> pmd thread numa_id 0 core_id 5:
>         isolated : true
>         port: egress0   queue-id: 3  pmd usage: 0 %
>         port: ingress0  queue-id: 3  pmd usage: 0 %
>
> ovs-appctl dpif-netdev/pmd-stats-show
>
> pmd thread numa_id 0 core_id 2:
>         packets received: 21323462
>         packet recirculations: 0
>         avg. datapath passes per packet: 1.00
>         emc hits: 5119195
>         megaflow hits: 8461953
>         avg. subtable lookups per megaflow hit: 1.01
>         miss with success upcall: 2286723
>         miss with failed upcall: 5455591
>         avg. packets per output batch: 2.81
>         idle cycles: 18540475978691 (98.75%)
>         processing cycles: 235616197026 (1.25%)
>         avg cycles per packet: 880536.76 (18776092175717/21323462)
>         avg processing cycles per packet: 11049.62 (235616197026/21323462)
> pmd thread numa_id 0 core_id 3:
>         packets received: 20654639
>         packet recirculations: 0
>         avg. datapath passes per packet: 1.00
>         emc hits: 4449782
>         megaflow hits: 7736708
>         avg. subtable lookups per megaflow hit: 1.00
>         miss with success upcall: 2567728
>         miss with failed upcall: 5900421
>         avg. packets per output batch: 3.00
>         idle cycles: 18531334403507 (98.69%)
>         processing cycles: 245349593515 (1.31%)
>         avg cycles per packet: 909078.29 (18776683997022/20654639)
>         avg processing cycles per packet: 11878.67 (245349593515/20654639)
> pmd thread numa_id 0 core_id 4:
>         packets received: 20430879
>         packet recirculations: 0
>         avg. datapath passes per packet: 1.00
>         emc hits: 4361365
>         megaflow hits: 8641516
>         avg. subtable lookups per megaflow hit: 1.00
>         miss with success upcall: 2175208
>         miss with failed upcall: 5252790
>         avg. packets per output batch: 2.86
>         idle cycles: 18547971632247 (98.79%)
>         processing cycles: 228120403283 (1.21%)
>         avg cycles per packet: 919005.59 (18776092035530/20430879)
>         avg processing cycles per packet: 11165.47 (228120403283/20430879)
> pmd thread numa_id 0 core_id 5:
>         packets received: 20891150
>         packet recirculations: 0
>         avg. datapath passes per packet: 1.00
>         emc hits: 4679834
>         megaflow hits: 8247466
>         avg. subtable lookups per megaflow hit: 1.00
>         miss with success upcall: 2350916
>         miss with failed upcall: 5612934
>         avg. packets per output batch: 2.92
>         idle cycles: 18540271879537 (98.74%)
>         processing cycles: 235820084225 (1.26%)
>         avg cycles per packet: 898758.18 (18776091963762/20891150)
>         avg processing cycles per packet: 11288.04 (235820084225/20891150)
> main thread:
>         packets received: 0
>         packet recirculations: 0
>         avg. datapath passes per packet: 0.00
>         emc hits: 0
>         megaflow hits: 0
>         avg. subtable lookups per megaflow hit: 0.00
>         miss with success upcall: 0
>         miss with failed upcall: 0
>         avg. packets per output batch: 0.00
>
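Two things stand out in the PMD stats.  First, the cycle counters
cover the PMDs' whole lifetime, which is why they read ~98.7% idle
despite the drops; clearing them and re-sampling over a short window
under load gives a truer picture.  Second, "miss with failed upcall"
alone is roughly a quarter of the packets received on each PMD, which
points at datapath flow churn rather than raw CPU starvation.  One way
to re-measure and inspect the installed megaflows (a sketch, not
verified against 2.9.2):

  ovs-appctl dpif-netdev/pmd-stats-clear
  ... run traffic for a fixed interval ...
  ovs-appctl dpif-netdev/pmd-stats-show
  ovs-appctl dpctl/dump-flows -m

If the dump shows one narrow megaflow per L4 flow, the select group's
hashing is the likely culprit.
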
> Thank you in advance for your time
>
> Rami Neiman
>
>
