[ovs-discuss] Some more scaling test results...

Andy Zhou azhou at ovn.org
Thu Feb 11 22:12:06 UTC 2016


On Thu, Feb 11, 2016 at 7:28 AM, Ryan Moats <rmoats at us.ibm.com> wrote:

> I had reason to recreate this experiment and here's what I found after it
> completed...
>
> NB db:
>
> [root at oc-syd01-prod-compute-110 ~]# ovs-appctl -t
> /usr/local/var/run/openvswitch/ovsdb-server.53752.ctl memory/show
> cells:534698 monitors:1 sessions:17
>
> SB db:
>
> [root at oc-syd01-prod-compute-110 ~]# ovs-appctl -t
> /usr/local/var/run/openvswitch/ovsdb-server.53754.ctl memory/show
> backlog:563735228 cells:4140112 monitors:2 sessions:6
>
> Ryan
>
Thanks, this is very useful information. It shows the SB db being the
bottleneck, even at a relatively low session count.
Liran's "monitor_cond" patch may help, though I'm not sure how much. It may
be worth retesting once it has been merged and is in use.
Longer term, multithreading ovsdb-server will allow more CPU cores to be
used for processing the backlog.
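
For reference, one way to keep an eye on this while a test is running is to
poll memory/show on both databases and watch the SB "backlog" figure (which,
as I understand it, reflects update data queued for clients that have not yet
consumed it). A minimal sketch, reusing the ctl socket paths from the output
above (adjust them to match your own ovsdb-server instances):

    # Print NB and SB memory/show stats every 10 seconds.
    NB_CTL=/usr/local/var/run/openvswitch/ovsdb-server.53752.ctl
    SB_CTL=/usr/local/var/run/openvswitch/ovsdb-server.53754.ctl
    while sleep 10; do
        echo "$(date +%T) NB: $(ovs-appctl -t "$NB_CTL" memory/show)"
        echo "$(date +%T) SB: $(ovs-appctl -t "$SB_CTL" memory/show)"
    done

A backlog that keeps growing across samples points at clients falling behind,
rather than a momentary burst.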

> "discuss" <discuss-bounces at openvswitch.org> wrote on 02/08/2016 03:19:58
> PM:
>
> > From: Ryan Moats/Omaha/IBM at IBMUS
> > To: Andy Zhou <azhou at ovn.org>
> > Cc: discuss at openvswitch.org
> > Date: 02/08/2016 03:20 PM
> > Subject: Re: [ovs-discuss] Some more scaling test results...
> > Sent by: "discuss" <discuss-bounces at openvswitch.org>
> >
> > Andy Zhou <azhou at ovn.org> wrote on 02/08/2016 01:54:06 PM:
> >
> > > From: Andy Zhou <azhou at ovn.org>
> > > To: Ryan Moats/Omaha/IBM at IBMUS
> > > Cc: discuss at openvswitch.org
> > > Date: 02/08/2016 01:54 PM
> > > Subject: Re: [ovs-discuss] Some more scaling test results...
> > >
> > > On Fri, Feb 5, 2016 at 5:26 PM, Ryan Moats <rmoats at us.ibm.com> wrote:
> > > Today, I stood up a five node openstack cloud on machines with 56
> > > cores and 256GB of memory and ran a scaling test to see if I could
> > > stamp out 8000 copies of the following pattern in a single project
> > > (tenant): n1 --- r1 --- n2 (in other words, create 8000 routers,
> > > 16000 networks, 16000 subnets, and 32000 ports). Since both n1 and
> > > n2 had subnets that were configured to use DHCP, the controller has
> > > 16000 namespaces and dnsmasq processes. The controller was set up to
> > > use separate processes to handle the OVN NB DB, OVN SB DB, and
> > > Openvswitch DBs
> > >
> > > So, what happened?
> > >
> > > The neutron log (q-svc.log) showed zero OVS DB timeouts, which means
> > > that the ovsdb server process handling the NB OVN db could keep up
> > > with the scale test. Looking at the server at the end of the
> > > experiment, it was using about 70GB of memory, with the top twenty
> > > occupancies being:
> > >
> > > ovsdb-server process handling the OVN SB db at 25G
> > > ovsdb-server process handling the vswitch DB at 2.7G
> > > ovn-controller process at 879M
> > > each of the 17 neutron-server processes at around 825M
> > > (this totals up to slightly more than 42.5G)
> > >
> > > For those interested, the OVSDB file sizes on disk are 138M for
> > > ovnsb.db, 14.9M for ovnnb.db and 18.4M for conf.db
> > >
> > > I admit that this test didn't include the stress that putting a bunch
> > > of ports onto a single network would create, but I'm still of the
> > > belief that if one uses separate ovsdb-server processes, the long
> > > poles in the tent become the SB OVN database and the processes that
> > > are driven by it.
> > >
> > > Have a great weekend,
> > > Ryan
> > >
> > > Thanks for sharing.
> > >
> > > May I ask how many connections the SB ovsdb-server hosts?  On a live
> > > system, you can find out by typing:  "ovs-appctl -t ovsdb-server
> > > memory/show"
> >
> > Unfortunately, the experiment has been torn down to allow others to
> > run, so I can no longer provide that information...
> >
> >