[ovs-discuss] How to set the best setting to test packet processing performance?

Hao Wu wuhao.thu at gmail.com
Sat Jul 18 23:35:02 UTC 2015


Hi Ben,

   I tried netperf. I ran 40 netperf instances on 40 different hosts
connected to one OVS. For every two hosts, I set one as the server and
the other as the client that sends packets, giving 20 client/server
pairs. In this scenario I get 1.27Mpps. I also tried keeping only one
server and letting the other 39 hosts send packets to it in parallel,
but the throughput is lower than in the former case. Which setup did
you use in your experiments? I even added more hosts, say 100, but
Mininet crashes when OVS adds the 100 ports connecting to the 100
hosts.
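
For reference, my pairwise setup roughly follows a Mininet script like
the sketch below (host counts, netperf flags, durations, and log paths
here are illustrative, not the exact values I used):

    #!/usr/bin/env python
    # Rough sketch: 40 hosts on one OVS switch, paired into 20
    # netperf client/server flows that all run at the same time.
    import time
    from mininet.net import Mininet
    from mininet.topo import SingleSwitchTopo

    N_PAIRS = 20                       # 40 hosts -> 20 C/S pairs
    DURATION = 30                      # seconds per netperf run

    net = Mininet(topo=SingleSwitchTopo(k=2 * N_PAIRS))
    net.start()

    hosts = net.hosts
    for i in range(N_PAIRS):
        server, client = hosts[2 * i], hosts[2 * i + 1]
        server.cmd('netserver &')      # netperf server daemon
        # 64-byte UDP stream, backgrounded so all pairs overlap
        client.cmd('netperf -H %s -t UDP_STREAM -l %d -- -m 64 '
                   '> /tmp/netperf_%d.log &' % (server.IP(), DURATION, i))

    time.sleep(DURATION + 5)           # let the runs finish
    net.stop()                         # results are in /tmp/netperf_*.log

Summing the rates reported in the per-pair logs gives the aggregate
number quoted above.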

Best,
Hao

On Thu, Jul 16, 2015 at 1:14 PM, Ben Pfaff <blp at nicira.com> wrote:

> On Thu, Jul 16, 2015 at 01:12:49PM -0700, Joe Stringer wrote:
> > On 16 July 2015 at 13:08, Ben Pfaff <blp at nicira.com> wrote:
> > > On Thu, Jul 16, 2015 at 11:22:29AM -0700, Hao Wu wrote:
> > >>    Yes, you are right. I find the bottleneck is tcpreplay, which
> > >> only generates packets at 750Kpps. But even when I add more hosts to
> > >> send packets in parallel, I can't get a higher generation rate. E.g.,
> > >> if I use only one host, tcpreplay sends packets at 300Kpps, while if
> > >> I use 6 hosts, each tcpreplay sends packets at around 130Kpps and the
> > >> total rate is still 750Kpps. How do you get a higher generation rate
> > >> in your experiment? Thanks.
> > >
> > > I think we used netperf.  I've never used tcpreplay so I don't have
> > > any hints.
> >
> > Even with netperf, you need to run several threads, and depending on
> > the test you may exhaust all the CPU running netperf to generate the
> > traffic before you start hitting OVS performance limits.
>
> Right, the OVS paper mentions that we ran 400 netperf instances in
> parallel.
>