[ovs-dev] [PATCH 5/5] dpif-linux: Prevent a single port from monopolizing upcalls.

Jesse Gross jesse at nicira.com
Thu Sep 22 00:52:19 UTC 2011


On Tue, Sep 20, 2011 at 4:00 PM, Pravin Shelar <pshelar at nicira.com> wrote:
> On Mon, Sep 19, 2011 at 3:00 PM, Jesse Gross <jesse at nicira.com> wrote:
>> Currently it is possible for a client on a single port to generate
>> a huge number of packets that miss in the kernel flow table and
>> monopolize the userspace/kernel communication path.  This
>> effectively DoS's the machine because no new flow setups can take
>> place.  This adds some additional fairness by separating each upcall
>> type for each object in the datapath onto a separate socket, each
>> with its own queue.  Userspace then reads round-robin from each
>> socket so other flow setups can still succeed.
>>
>> Since the number of objects can potentially be large, we don't always
>> have a unique socket for each.  Instead, we create 16 sockets and
>> spread the load around them in a round-robin fashion.  It's theoretically
>> possible to do better than this with some kind of active load balancing
>> scheme but this seems like a good place to start.
>
> I am not sure why you are using different ports for flow-related
> upcalls.  With this round-robin assignment of upcall sockets to
> vports, it ends up looking more like random socket assignment.
>
> The DoS happens because of missed packets, and OVS has no control
> over those miss upcalls, so it makes sense to have a separate queue
> for each vport.
> But upcalls related to flows can be (somewhat) managed by OVS: for
> sFlow upcalls it is the sampling rate, and for userspace upcalls it
> is the controller action itself.
>
> So we could have one socket per DP for flow-related upcalls.
> If you think a controller action can generate a similar DoS, then
> maybe we can use a given vport's upcall socket to send the
> flow-related upcalls from that vport as well.  I think that would
> perform better under a DoS using flow-related upcalls, since traffic
> from one vport (VM) across all its flows would go to one socket
> rather than spreading across most of them and again monopolizing
> upcalls.
>
> With this approach we would have much better control over how we
> assign the (limited) upcall sockets to vport upcall traffic, which
> could help with load balancing in the future.  It would also
> simplify the upcall code and the flow parameters.

From an overall architecture perspective, I want to give userspace the
power to make policy decisions about where upcalls are directed.
Fixing it to one socket per datapath for flow upcalls, or making them
share the miss upcall sockets, assumes a particular problem to solve
and takes a degree of control away from userspace.  In
fact, it's actually possible to implement both of those policies
currently with just userspace changes.  Some other things that you
might want to implement are giving higher priority to explicit
userspace upcalls over misses or partitioning of userspace into
multiple processes/threads.  The current implementation can be
extended to do that as well, so I like its flexibility.
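
To make that concrete, both of your suggestions (and the current
patch) reduce to different socket-assignment functions in userspace.
Here is a rough sketch with invented names (not the actual dpif-linux
interfaces), just to show the shape of it:

#include <stdint.h>

#define N_SOCKS 16              /* Shared upcall sockets, as in the patch. */

enum upcall_type { UPCALL_MISS, UPCALL_ACTION };  /* Simplified. */

extern int shared_socks[N_SOCKS];   /* Hypothetical socket fds. */
extern int dp_flow_sock;            /* Hypothetical per-datapath fd. */

/* Current patch: each object takes the next shared socket in
 * round-robin order when it is created, regardless of upcall type. */
static int
policy_round_robin(uint32_t port_no, enum upcall_type type)
{
    static unsigned int next;
    (void) port_no;
    (void) type;
    return shared_socks[next++ % N_SOCKS];
}

/* Your first suggestion: one socket per datapath for flow-related
 * upcalls; misses keep their per-vport assignment. */
static int
policy_per_dp_flows(uint32_t port_no, enum upcall_type type)
{
    return type == UPCALL_MISS ? shared_socks[port_no % N_SOCKS]
                               : dp_flow_sock;
}

/* Your second suggestion: reuse a vport's miss socket for its
 * flow-related upcalls too, so one busy VM only fills its own queue. */
static int
policy_per_vport(uint32_t port_no, enum upcall_type type)
{
    (void) type;
    return shared_socks[port_no % N_SOCKS];
}

None of these require kernel changes; the kernel just delivers to
whatever socket userspace registered for each object.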

As far as actually implementing one of the policies that you mention,
they're both reasonable.  I basically picked the simplest and most
generic algorithm as a starting point.  It worked well in my testing
and I figure that we can always do something more complicated if we
have a specific problematic use case.
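
For reference, the reading side of the simple scheme looks roughly
like this (again a sketch with invented names, not the literal
dpif-linux code): userspace scans the sockets round-robin, so a flood
of misses queued on one socket can delay, but never starve, the
upcalls queued on the others.

#include <errno.h>
#include <stddef.h>
#include <sys/socket.h>
#include <sys/types.h>

#define N_SOCKS 16

extern int upcall_fds[N_SOCKS];     /* Hypothetical: one kernel queue each. */

/* Return the next pending upcall.  The scan starts one past the socket
 * that delivered the previous packet, so every queue gets a turn. */
static ssize_t
recv_upcall(void *buf, size_t size)
{
    static unsigned int start;
    unsigned int i;

    for (i = 0; i < N_SOCKS; i++) {
        unsigned int idx = (start + i) % N_SOCKS;
        ssize_t n = recv(upcall_fds[idx], buf, size, MSG_DONTWAIT);
        if (n >= 0) {
            start = (idx + 1) % N_SOCKS;
            return n;
        }
    }
    return -EAGAIN;                 /* Nothing pending on any socket. */
}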


