[ovs-dev] [PATCH v4] Upcall/Slowpath rate-limiter for OVS
aconole at redhat.com
Tue Jun 12 19:31:05 UTC 2018
Jan Scheurich <jan.scheurich at ericsson.com> writes:
>> Have you considered making this token bucket per-port instead of
>> per-pmd? As I read it, a greedy port can exhaust all the tokens from a
>> particular PMD, possibly leading to an unfair performance for that PMD
>> thread. Am I just being overly paranoid?
>> [manu] Yes, this is possible. But it can happen for both the fast
>> path and the slow path today, as PMDs sequentially iterate through
>> ports. To keep it simple, it's done per-PMD. It can be extended to
>> per-port if needed.
> The purpose of the upcall rate limiter for the netdev datapath is to
> protect a PMD from becoming bogged down by having to process an
> excessive number of upcalls. It is not to police the number of upcalls
> per port to some rate, especially not across multiple PMDs (in the
> case of RSS).
Okay. I guess you would first create this to police the global upcall
pool, and then add per-port policing later if it turns out to be needed
(something like you indicate at the end of this mail)?
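To make the mechanism under discussion concrete, here is a minimal
sketch of a per-PMD token bucket for admitting upcalls. All names
(`upcall_bucket`, `upcall_bucket_refill`, etc.) are illustrative
assumptions, not the actual patch's API; OVS's real implementation may
differ in refill granularity and accounting.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-PMD bucket; names are illustrative only. */
struct upcall_bucket {
    uint32_t rate;    /* tokens added per refill interval */
    uint32_t burst;   /* maximum tokens the bucket can hold */
    uint32_t tokens;  /* tokens currently available */
};

static void
upcall_bucket_init(struct upcall_bucket *b, uint32_t rate, uint32_t burst)
{
    b->rate = rate;
    b->burst = burst;
    b->tokens = burst;   /* start full so initial bursts are admitted */
}

/* Called periodically from the PMD loop to add tokens, capped at burst. */
static void
upcall_bucket_refill(struct upcall_bucket *b)
{
    uint64_t t = (uint64_t) b->tokens + b->rate;
    b->tokens = t > b->burst ? b->burst : (uint32_t) t;
}

/* One upcall costs one token; returns false when the PMD should not
 * send the packet to the slow path. */
static bool
upcall_bucket_try_withdraw(struct upcall_bucket *b)
{
    if (b->tokens > 0) {
        b->tokens--;
        return true;
    }
    return false;
}
```

Because there is a single bucket per PMD, any one port's miss storm can
drain it for every port that PMD polls, which is exactly the fairness
concern raised above.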
> I think what you are after, Aaron, is some kind of fairness scheme
> that provides each rx queue with a minimum rate of upcalls even if the
> global PMD rate limit is reached? I don't believe simply partitioning
> the global PMD rate limit into a number of smaller rx queue buckets
> would be a good solution. But I don't have a better alternative.
Yes, that's what I'm thinking about. For the kernel datapath, I'm
concerned about the scalability of the number of upcall fds required.
I guess that if I'm going to try and propose a solution, I would want
it to match any existing userspace datapath solution (at least
semantically) if it could.
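One hedged sketch of the fairness semantics described above (a minimum
upcall rate per rx queue even when the PMD-wide limit is hit): give each
queue a small reserved allotment and let overflow compete for a shared
per-PMD pool. The structs and `upcall_admit` below are hypothetical,
not from any posted patch.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical two-level scheme: each rx queue owns a small guaranteed
 * allotment; overflow upcalls compete for a shared per-PMD pool. */
struct rxq_limiter {
    uint32_t guaranteed;  /* tokens reserved for this queue */
};

struct pmd_limiter {
    uint32_t shared;      /* tokens shared by all queues on the PMD */
};

/* Admit an upcall: spend the queue's reserved token first so that a
 * greedy neighbour draining the shared pool cannot starve this queue
 * below its minimum rate. */
static bool
upcall_admit(struct rxq_limiter *q, struct pmd_limiter *p)
{
    if (q->guaranteed > 0) {
        q->guaranteed--;
        return true;
    }
    if (p->shared > 0) {
        p->shared--;
        return true;
    }
    return false;
}
```

Note this is not the same as partitioning the PMD limit into small
per-queue buckets, which Jan argues against above: the shared pool keeps
burst capacity available to any queue, while the reserved tokens only
guarantee the floor.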
> I agree with Manu that it should not stop us implementing the
> PMD-level protection. We can add a fairness scheme later, if needed.
Okay - I'm interested in that fairness for other reasons. Maybe I'll
cook up the patches and see what comes from it.
> BR, Jan