[ovs-dev] [PATCH per-port ingress scheduling 0/2]

Billy O'Mahony billy.o.mahony at intel.com
Tue Aug 28 13:58:21 UTC 2018


Hi All,

I've updated the patch to account for two sets of comments on the RFCv2; see
the history below.

This patch set implements the 'preferential read' part of the feature of
ingress scheduling described at OvS 2017 Fall Conference
https://www.slideshare.net/LF_OpenvSwitch/lfovs17ingress-scheduling-82280320.

It allows an ingress priority to be configured for an entire interface. This
protects traffic on higher-priority interfaces from loss and latency as PMDs
become overloaded.
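Since the priority lives in the Interface's other_config column, it would be set with ovs-vsctl along these lines. Note the key name 'ingress-priority' below is an assumption for illustration; check the patch's vswitch.xml for the actual key:

```shell
# Hypothetical key name; consult vswitch.xml in the patch for the real one.
ovs-vsctl set Interface dpdk_3 other_config:ingress-priority=3
ovs-vsctl set Interface dpdk_0 other_config:ingress-priority=0
```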

Results for physical interfaces are excellent - higher priority ports suffer
much less loss:

Phy i/f:       |dpdk_0 dpdk_1 dpdk_2 dpdk_3
% Total Load   |   25%    25%    25%    25%
Priority (3=Hi)|     0      1      2      3
---------------+---------------------------
Total Offered  |
Load (kpps)    |         Pkt Loss (kpps)
-------------------------------------------
2100           |     0      0      0      0
2300           |    23      0      0      0
2500           |   308      0      0      0
2900           |   628     24      0      0
3400           |   811    370      8      0
3500           |   821    391     52      0
4000           |   964    565    238     20

This largely holds true even when the highest-priority port carries most of
the traffic:

Phy i/f:       |dpdk_0 dpdk_1 dpdk_2 dpdk_3
% Total Load   |   10%    20%    30%    40%
Priority (3=Hi)|     0      1      2      3
---------------+---------------------------
Total Offered  |
Load (kpps)    |         Pkt Loss (kpps)
-------------------------------------------
2300           |     8      0      0      0
2500           |   181      0      0      0
2550           |   213     13      0      0
2620           |   223     63      0      9
2700           |   230     82     10     52
3000           |   262    143    101    172
3500           |   310    242    249    370
4000           |   361    341    398    569

For vhostuser ports, VMs running iperf3 (TCP) benefit appreciably from being
on a 'priority' port, without a drop in overall throughput.

Scenario: 3 VM-pairs running iperf3 (baseline)
---------------------------------------------
VM pair      | 1,2    3,4    5,6
priority     |   0      0      0
Tput (Gbit/s)| 3.3    3.3    3.3

Scenario: 3 VM-pairs running iperf3 (one pair prioritized)
----------------------------------------------------------
VM pair      | 1,2    3,4    5,6
priority     |   0      0      1
Tput (Gbit/s)| 2.7    2.7    4.6

History:

v1:
* The configuration is only in dpif-netdev and will work with any polled
  netdevs, not just dpdk netdevs.
* Re-configuration of the priorities at run-time is supported.
* Configuration is kept in the Interface's other_config column.
* Applies cleanly on 9b4f08c

RFCv2:
* Keep ingress prio config in netdev base rather than in each netdev type.
* Account for differing rxq lengths
* Applies cleanly to 4299145

RFCv1:
Initial version.


Billy O'Mahony (2):
  ingress scheduling: documentation
  ingress scheduling: Provide per interface ingress priority

 Documentation/howto/dpdk.rst    |  15 ++++
 include/openvswitch/ofp-parse.h |   3 +
 lib/dpif-netdev.c               | 188 +++++++++++++++++++++++++++++++++-------
 lib/netdev-dpdk.c               |  10 +++
 vswitchd/vswitch.xml            |  15 ++++
 5 files changed, 200 insertions(+), 31 deletions(-)

-- 
2.7.4


