[ovs-discuss] On TCP_CRR test setup in "Accelerating Open vSwitch to “Ludicrous Speed”"

Andy Zhou azhou at nicira.com
Sun Jul 19 05:00:35 UTC 2015


Hi, Yousong,

The 120K and 680K tps numbers were not collected from the same test setup.
They were collected a few months apart and on different machines.

The 680K tps number was collected using three machines: one running
netperf in client mode, a middle one running OVS, and a third running
netperf in server mode.
That result is limited mostly by the 20Gbps link speed, as there was
still CPU headroom on the OVS machine.

The cache optimization data was collected on a different setup, which is
more limited in its ability to generate traffic (possibly due to the NIC
type) compared to the first setup.

It would have been great if we could have collected both sets of numbers
on the exact same setup, but it proved hard to get exclusive use of
multiple powerful servers.
In the meantime, comparing absolute TPS is not that useful anyway. In
both sets of numbers OVS is not the bottleneck; the absolute number is
limited by the traffic-generating capability of each setup.

Since we reported CPU load for both sets of data, comparing "% cpu" per
transaction may be more useful.

For OVS:

680K tps, with 161% cpu usage,  %cpu/ktps = 0.23%
120K tps, with   20% cpu usage, %cpu/ktps = 0.16%

The second number is better because of the further classifier
optimizations documented in the paper.

On the other hand,  with Linux bridging:
680K tps with 48% cpu usage, %cpu/ktps = 0.07%.

This number should be considered a lower bound on CPU cost per
transaction for OVS, since OVS has to provide more features than the
Linux bridge.
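
[Just to make the arithmetic explicit, here is a small illustrative
Python snippet that computes the same %cpu-per-ktps figures from the
numbers above; the printed values differ from the ones quoted only by
rounding.]

    # Illustrative only: normalize the reported CPU usage by throughput
    # so the two setups can be compared.  Figures are the ones quoted
    # above in this message.
    results = [
        ("OVS, 3-machine setup",          680.0, 161.0),  # ktps, % cpu
        ("OVS, cache-optimization setup", 120.0,  20.0),
        ("Linux bridge, 3-machine setup", 680.0,  48.0),
    ]

    for name, ktps, cpu_pct in results:
        print(f"{name}: {cpu_pct / ktps:.3f} %cpu per ktps")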

Make sense?

Andy

On Tue, Jul 14, 2015 at 8:09 PM, Yousong Zhou <yszhou4tech at gmail.com> wrote:
> Hi
>
> On 15 July 2015 at 06:12, Ben Pfaff <blp at nicira.com> wrote:
>> On Fri, Jul 10, 2015 at 08:54:03PM +0800, Yousong Zhou wrote:
>>> On 8 July 2015 at 21:39, Yousong Zhou <yszhou4tech at gmail.com> wrote:
>>> > Hello, list
>>> >
>>> > I am doing some performance tests for the preparation of upgrading
>>> > Open vSwitch from 1.11.0 to 2.3.2.  However, with TCP_CRR, I can only
>>> > achieve about 130k tps (last time I got only 40k because of a .debug
>>> > type kernel), not even close to the reported 680k from the blog post
>>> > [0].  I also found other available reports [1, 2] but those results
>>> > were even worse and not consistent with each other.
>>> >
>>>
>>> Hi, I just found the 680k tps TCP_CRR test result in the nsdi2015
>>> paper "The design and implementation of Open vSwitch" [1].  Hmm, the
>>> 120k tps in section "Cache layer performance" is similar to what I
>>> have got.  But how were they boosted to 688k for both Linux bridge and
>>> Open vSwitch in section "Comparison to in-kernel switch"?
>>
>> I think that the configuration we used is described in that paper under
>> "Cache layer performance":
>>
>>     In all following tests, Open vSwitch ran on a Linux server with two
>>     8-core, 2.0 GHz Xeon processors and two Intel 10-Gb NICs. To generate
>>     many connections, we used Netperf’s TCP CRR test [25], which repeatedly
>>     establishes a TCP connection, sends and receives one byte of traffic,
>>     and disconnects.  The results are reported in transactions per second
>>     (tps).  Netperf only makes one connection attempt at a time, so we ran
>>     400 Netperf sessions in parallel and reported the sum.
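
[A rough, illustrative sketch of driving parallel TCP_CRR sessions and
summing the rates is below.  It is not the harness used for the paper;
the netserver address, session count, and the netperf omni output
selector ("-- -o throughput") are assumptions.]

    # Rough sketch (not the paper's harness): launch N parallel netperf
    # TCP_CRR sessions against a netserver host and sum the per-session
    # transaction rates.  Assumes a netperf build with omni output
    # selectors ("-- -o throughput"); for TCP_CRR that value is the
    # transaction rate in transactions/sec.
    import subprocess

    HOST = "192.0.2.1"   # hypothetical address of the netserver machine
    SESSIONS = 400       # the paper reports 400 parallel sessions
    DURATION = 30        # seconds per run

    cmd = ["netperf", "-H", HOST, "-t", "TCP_CRR", "-l", str(DURATION),
           "-P", "0", "--", "-o", "throughput"]
    procs = [subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
             for _ in range(SESSIONS)]

    # Each netperf prints its result when it finishes; take the last
    # line of output in case a CSV header line is also emitted.
    rates = [float(p.communicate()[0].strip().splitlines()[-1])
             for p in procs]
    print(f"aggregate TCP_CRR rate: {sum(rates):.0f} tps")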
>
> I already read that part.  The hardware configuration seems to be
> comparable [1].  In our tests 32 netperf instances were more or less
> enough to get the 130k tps and we increased the number of netperf
> pairs to 127 with no obvious improvement.
>
> But when we read that the performance can be as high as 680k with both
> Linux bridge and Open vSwitch, we thought there must be something we
> had overlooked, e.g. system parameter tuning or kernel configuration.
>
> I noticed that in section "Cache layer performance" the best result
> was about 120k tps with all optimisations on.  But the result was more
> than 680k tps in section "Comparison to in-kernel switch".  How was
> this boost achieved?
>
> Thanks for the reply.
>
>  [1] https://github.com/yousong/brtest/blob/master/out/yousong-X540-AT2.md
>
>
> Regards
>
>                 yousong
> _______________________________________________
> discuss mailing list
> discuss at openvswitch.org
> http://openvswitch.org/mailman/listinfo/discuss


