[ovs-discuss] Some more scaling test results...

Ryan Moats rmoats at us.ibm.com
Sat Feb 6 01:26:14 UTC 2016



Today, I stood up a five node openstack cloud on machines with 56 cores and
256GB of memory and ran a scaling test to see if I could
stamp out 8000 copies of the following pattern in a single project
(tenant):  n1 --- r1 --- n2 (in other words, create 8000 routers, 16000
networks, 16000 subnets, and 32000 ports).  Since both n1 and n2 had
subnets configured to use DHCP, the controller ended up with 16000
namespaces and 16000 dnsmasq processes.  The controller was set up to run
separate ovsdb-server processes for the OVN NB DB, the OVN SB DB, and the
Open vSwitch DB.
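For concreteness, here's a rough sketch of what stamping out that pattern
looks like (this is not the actual test harness; command names are from the
2016-era neutron CLI, and the names/CIDRs are illustrative -- Neutron allows
the CIDRs to repeat across copies because each copy sits behind its own
router namespace).  The script below only builds the command list:

```python
# Hypothetical sketch: generate the CLI calls needed to stamp out
# N copies of the n1 --- r1 --- n2 pattern.  Each router-interface-add
# implicitly creates a router port, and each DHCP-enabled subnet gets
# a DHCP port, which is where the 32000 ports come from.
def pattern_commands(i):
    return [
        f"neutron router-create r{i}",
        f"neutron net-create n{i}-a",
        f"neutron net-create n{i}-b",
        # DHCP is enabled by default on newly created subnets
        f"neutron subnet-create n{i}-a 10.0.0.0/24 --name s{i}-a",
        f"neutron subnet-create n{i}-b 10.0.1.0/24 --name s{i}-b",
        f"neutron router-interface-add r{i} s{i}-a",
        f"neutron router-interface-add r{i} s{i}-b",
    ]

commands = [c for i in range(1, 8001) for c in pattern_commands(i)]
print(len(commands))  # 56000 CLI calls for 8000 copies
```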

So, what happened?

The neutron log (q-svc.log) showed zero OVSDB timeouts, which means that
the ovsdb-server process handling the OVN NB DB could keep up with the
scale test.   Looking at the controller machine at the end of the
experiment, it was using about 70GB of memory, with the top twenty
occupancies being:

ovsdb-server process handling the OVN SB db at 25G
ovsdb-server process handling the vswitch DB at 2.7G
ovn-controller process at 879M
each of the 17 neutron-server processes at around 825M
(this totals up to slightly more than 42.5G)
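If you want to sanity-check that total, the arithmetic on the twenty
occupancies above works out like this:

```python
# The twenty occupancies reported above, in GB: one SB ovsdb-server,
# one vswitch-DB ovsdb-server, one ovn-controller, and 17 neutron-server
# workers at roughly 825M each.
occupancies_gb = [25.0, 2.7, 0.879] + [0.825] * 17
total = sum(occupancies_gb)
print(f"{len(occupancies_gb)} processes, {total:.1f} GB")  # 20 processes, 42.6 GB
```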

For those interested, the OVSDB file sizes on disk are 138M for ovnsb.db,
14.9M for ovnnb.db, and 18.4M for conf.db.

I admit that this test didn't include the stress that putting a bunch of
ports onto a single network would create, but I'm still of the belief that
if one uses separate ovsdb-server processes, the long poles in the tent
become the OVN SB database and the processes that are driven by it.

Have a great weekend,
Ryan