[ovs-discuss] Open vSwitch performance with UDP traffic

McGarvey, Kevin kmcGarvey at verisign.com
Wed Jan 22 20:17:05 UTC 2014



On 1/22/14 12:44 PM, "Ben Pfaff" <blp at nicira.com> wrote:

>On Wed, Jan 22, 2014 at 09:39:14AM -0800, Ben Pfaff wrote:
>> On Wed, Jan 22, 2014 at 05:35:40PM +0000, McGarvey, Kevin wrote:
>> > 
>> > 
>> > On 1/21/14 6:17 PM, "Ben Pfaff" <blp at nicira.com> wrote:
>> > >I'd expect a dramatic drop in CPU consumption in that case.  There
>> > >are a few special cases where the upgrade wouldn't help.  One is if
>> > >in-band control is in use, another is if NetFlow is turned on, a
>> > >third is if LACP bonds with L4 port based hashing are turned on, and
>> > >there are probably a few others that don't come to mind immediately.
>> > 
>> > I plan to rerun the test to rule out some mistake on my part.
>> > 
>> > Could you provide more information about the nature of the change
>> > made in 1.11 that improves performance for this type of traffic?  Is
>> > the kernel module able to forward UDP DNS packets without sending
>> > them to userspace, or was it an optimization of the userspace
>> > processing?  What roughly is the level of performance I should see?
>> 
>> In 1.11 and later, for simple OpenFlow tables (I don't think you
>> mentioned whether you are using a controller or which one), Open
>> vSwitch can set up only a single kernel flow that covers many possible
>> flows, for example all possible UDP destination ports, rather than
>> setting up an individual kernel flow for each UDP packet.  When that
>> works, it eliminates most of the kernel/userspace traffic, improving
>> performance.  Version 2.0 is better at analyzing OpenFlow flow tables
>> to see when this is possible, so it can better take advantage of the
>> ability.
>
>I see that I didn't answer your question about performance.
>
>When this optimization kicks in fully, I guess that the performance
>should be about the same as for traffic with long flows (like the
>netperf TCP_STREAM test, for example) in terms of packets per second.

Thanks.  This is encouraging.  The only question is why the optimization
isn't kicking in.
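
If I understand the change correctly, one way to check whether the
wildcarding is happening is to dump the kernel flow table while the test
is running (output format varies by version, and megaflow support is
needed for wildcarded fields to show up at all):

ovs-dpctl dump-flows

With the optimization active I'd expect a small number of wide,
wildcarded flows; without it, thousands of exact-match entries, one per
DNS client address/port.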


I repeated the test, and under a load of 10K DNS requests/responses per
second, ovs-vswitchd is using 82% of a core.
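
For what it's worth, the datapath statistics seem like a useful hint as
to whether packets are being punted to userspace:

ovs-dpctl show

As I understand it, the 'lookups: hit:... missed:... lost:...' line
counts kernel flow table hits and misses, and a miss rate close to the
packet rate would mean nearly every packet is going up to ovs-vswitchd.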

I wasn't sure whether in-band control was on or off by default, so I
disabled it with the command below and restarted the openvswitch
service, but the CPU consumption didn't change:

ovs-vsctl set bridge <bridge> other-config:disable-in-band=true
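
To double-check that the option was actually recorded, I believe the
bridge's other-config column can be listed like this (bridge name
elided as above):

ovs-vsctl --columns=other-config list bridge <bridge>

If the setting took, disable-in-band=true should appear in the output.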

I did not set up the configuration, but as far as I can tell NetFlow is
not turned on.  The output of 'ovsdb-tool show-log | grep -i netflow' is
empty.
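
As a second check, my understanding is that listing the NetFlow table
directly should print one record per configured NetFlow target, and
nothing at all if none is configured:

ovs-vsctl list NetFlow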

There are no bonded interfaces.  The 2 NICs used for DNS traffic are
associated with separate bridges.

We are not using a controller.

In your response you mentioned that for simple OpenFlow tables Open
vSwitch can set up a single kernel flow that covers many possible flows.
I think this is exactly what I need.  Do I need to add a flow using
ovs-ofctl?  If so, what should the flow contain?  Both the source IP and
port change with every packet, so covering all possible ports won't be
sufficient.  I've been perusing the source code, so if there is a block of
code that analyzes the OpenFlow flow tables to determine whether Open
vSwitch can set up a single kernel flow, I'd be happy to look at it.
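
In case a concrete example helps, my understanding is that a bridge with
no controller defaults to a single NORMAL flow, i.e. the equivalent of:

ovs-ofctl add-flow <bridge> actions=normal

and that a table this simple should be exactly the case where the single
wide kernel flow is possible.  The current table can be dumped with:

ovs-ofctl dump-flows <bridge>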