[ovs-dev] [PATCH v2 2/4] netdev-dpdk.c: Support multi-queue QoS rate limitation function for dpdk datapath

Stokes, Ian ian.stokes at intel.com
Tue Oct 17 08:52:57 UTC 2017


> This patch modifies function `egress_policer_run`, so that it can find
> which queue policy does the packet belongs to, then caculate whether the
> packet will be dropped, if not be dropped, need to caculate whether the
> packet will be dropped by port policy.

A few typos in the commit message above that should be cleaned up.

Will read better as follows:

This patch modifies function `egress_policer_run`, so that it finds which
queue policy a packet belongs to. It then calculates whether the packet
should be dropped. If the packet is not dropped at the queue level, it is
then checked against the port policy to determine whether it should be
dropped.

> 
> Rate limitation logicality as below:
> 1. Find which queue policy does the packet belongs to by `skb_priority` of
>    `struct dp_packet`, if found, goto 2 for queue policy QoS, else goto 3
>    for port policy QoS.
> 2. Use srtcm algorithm to limit rate according to queue policy, if return
>    color green, goto 3 for port policy QoS, else return false to drop
> packet.
> 3. Use srtcm algorithm to limit rate according to port policy, if return
> color
>    gree, return true to let packet go, else return false to drop packet.

A few typos above also.

A general observation, but I think this behavior is better detailed in the vswitch.xml documentation rather than in the commit message. The current egress policer behavior is already described there, so I think that is the correct place for it.

> 
> Finally, we can set `skb_priority` of `struct dp_packet` by openflow
> action set_queue.
> 
> $ ovs-ofctl add-flow br0 in_port=5,actions=set_queue:123,normal
> 
> Signed-off-by: zhaozhanxu <zhaozhanxu at 163.com>
> ---
>  lib/netdev-dpdk.c | 37 +++++++++++++++++++++++++++++++++++--
>  1 file changed, 35 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
> index 089ad64..85f077e 100644
> --- a/lib/netdev-dpdk.c
> +++ b/lib/netdev-dpdk.c
> @@ -3173,14 +3173,47 @@ egress_policer_qos_is_equal(const struct qos_conf *conf,
>      return !memcmp(&params, &policer->app_srtcm_params, sizeof params);
> }
> 
> +static inline bool
> +egress_policer_pkt_handle(struct egress_policer *policer,
> +                          struct rte_mbuf *pkt, uint64_t time)
> +{
> +    struct egress_queue_policer *queue_policer;
> +    uint32_t pkt_len = rte_pktmbuf_pkt_len(pkt) - sizeof(struct ether_hdr);
> +
> +    queue_policer = egress_policer_qos_find_queue(policer,
> +                                ((struct dp_packet *)pkt)->md.skb_priority);
> +    if (queue_policer) {
> +        if (rte_meter_srtcm_color_blind_check(&queue_policer->egress_meter,
> +                                        time, pkt_len) != e_RTE_METER_GREEN) {
> +            return false;
> +        }
> +    }
> +
> +    return rte_meter_srtcm_color_blind_check(&policer->egress_meter,
> +                                        time, pkt_len) == e_RTE_METER_GREEN;
> +}
> +
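
Not directly related to this hunk, but for anyone reviewing 2/4 in
isolation: egress_policer_qos_find_queue() is introduced elsewhere in the
series, so it isn't visible above. Roughly speaking (this is only my own
sketch of the lookup, with made-up member names, not the actual
implementation), I'd expect it to map the queue id carried in skb_priority
to a configured per-queue policer, e.g.:

static struct egress_queue_policer *
egress_policer_qos_find_queue(struct egress_policer *policer,
                              uint32_t queue_id)
{
    int i;

    /* 'n_queues', 'queues' and 'queue_id' here are placeholders for
     * whatever the series actually defines. */
    for (i = 0; i < policer->n_queues; i++) {
        if (policer->queues[i].queue_id == queue_id) {
            return &policer->queues[i];
        }
    }

    return NULL;    /* No queue policy configured for this queue id. */
}
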
>  static int
>  egress_policer_run(struct qos_conf *conf, struct rte_mbuf **pkts, int pkt_cnt)
>  {
> -    int cnt = 0;
> +    int i = 0, cnt = 0;
> +    struct rte_mbuf *pkt = NULL;
> +    uint64_t current_time = rte_rdtsc();
>      struct egress_policer *policer =
>          CONTAINER_OF(conf, struct egress_policer, qos_conf);
> 
> -    cnt = netdev_dpdk_policer_run(&policer->egress_meter, pkts, pkt_cnt);
> +    for (i = 0; i < pkt_cnt; i++) {
> +        pkt = pkts[i];
> +        /* Handle current packet */
> +        if (egress_policer_pkt_handle(policer, pkt, current_time)) {
> +            if (cnt != i) {
> +                pkts[cnt] = pkt;
> +            }
> +            cnt++;
> +        } else {
> +            rte_pktmbuf_free(pkt);
> +        }
> +    }

Although I haven't tested it yet, I suspect this will hurt performance for the simple egress policer case. For example, if only a port policy is configured there is no need for a per-packet check, as the skb_priority won't be relevant; you could still just pass the array of packets rather than looking at a per-queue policy first and then the port policy. Do you have any perf figures for before and after this patch for egress policing, in terms of the max rate limit attained?
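
As a rough illustration of what I mean, something along these lines would
keep the existing fast path when only a port policy is configured (a sketch
only -- 'has_queue_policies' is a made-up flag meaning "at least one
per-queue policy is configured", it is not something in this patch):

static int
egress_policer_run(struct qos_conf *conf, struct rte_mbuf **pkts, int pkt_cnt)
{
    struct egress_policer *policer =
        CONTAINER_OF(conf, struct egress_policer, qos_conf);
    uint64_t current_time;
    int i, cnt = 0;

    if (!policer->has_queue_policies) {
        /* Port policy only: keep the existing batch path and avoid the
         * per-packet skb_priority lookup completely. */
        return netdev_dpdk_policer_run(&policer->egress_meter, pkts, pkt_cnt);
    }

    current_time = rte_rdtsc();
    for (i = 0; i < pkt_cnt; i++) {
        struct rte_mbuf *pkt = pkts[i];

        if (egress_policer_pkt_handle(policer, pkt, current_time)) {
            if (cnt != i) {
                pkts[cnt] = pkt;
            }
            cnt++;
        } else {
            rte_pktmbuf_free(pkt);
        }
    }

    return cnt;
}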

> 
>      return cnt;
>  }
> --
> 2.7.4
> 
> 
> _______________________________________________
> dev mailing list
> dev at openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-dev

