[ovs-discuss] OVN Database sizes - Auto compact feature

Daniel Alvarez Sanchez dalvarez at redhat.com
Wed Mar 7 13:48:52 UTC 2018

BTW, I didn't spot any of these messages in the log:


I'll add a few traces to figure out why the auto compact is not triggering.

Also, I could see the trace when I ran it manually:
2018-03-07T13:32:21.672Z|00021|ovsdb_server|INFO|compacting OVN_Southbound
database by user request
compacting database online (1519124364.908 seconds old, 951 transactions)

On Wed, Mar 7, 2018 at 2:40 PM, Daniel Alvarez Sanchez <dalvarez at redhat.com> wrote:

> Hi folks,
> During the performance tests I've been doing lately I noticed
> that the size of the Southbound database was around 2.5GB
> in one of my setups. I couldn't dig further at the time, but now I
> have decided to explore a bit more, and these are the results in
> my all-in-one OpenStack setup using OVN as a backend:
> * Created 800 ports on the same network (logical switch).
> * Deleted those 800 ports.
> * I logged the DB sizes for both NB and SB databases every second.
> See attached image for the results.
> At around x=2000, the creation task finished and deletion starts.
> As you can see, automatic compaction happens in the
> NB database throughout the whole test. However, while I was deleting
> ports, the SB database stopped shrinking and kept growing.
> After the test finished, the DB sizes remained the same,
> leaving the SB database at around 34MB. It was not until I
> manually compacted it that it finally shrank:
> [stack at ovn ovs]$ ls -alh ovnsb_db.db
> -rw-r--r--. 1 stack stack 34M Mar  7 12:04 ovnsb_db.db
> [stack at ovn ovs]$ sudo ovs-appctl -t /usr/local/var/run/openvswitch/ovnsb_db.ctl
> ovsdb-server/compact
> [stack at ovn ovs]$ ls -alh ovnsb_db.db
> -rw-r--r--. 1 stack stack 207K Mar  7 13:32 ovnsb_db.db
> I'll try to investigate further in the code.
> Thanks,
> Daniel
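The per-second size logging and manual compaction described above can be sketched as a small shell loop. This is only an illustration, not part of the original test harness: the DB path, control socket path, threshold, and interval are all assumptions (the paths mirror the ones shown in the thread).

```shell
#!/bin/sh
# Hedged sketch: log the SB database size periodically and trigger a
# manual compaction once it crosses an (arbitrary) threshold.
# DB_FILE, CTL_SOCK, and THRESHOLD are assumptions, not values from
# the thread; adjust them for your installation.
DB_FILE=${DB_FILE:-ovnsb_db.db}
CTL_SOCK=${CTL_SOCK:-/usr/local/var/run/openvswitch/ovnsb_db.ctl}
THRESHOLD=$((10 * 1024 * 1024))   # 10 MB, arbitrary

# Size of a file in bytes (stat -c %s is GNU coreutils;
# use `stat -f %z` on BSD/macOS).
db_size() {
    stat -c %s "$1"
}

while true; do
    size=$(db_size "$DB_FILE")
    echo "$(date +%s) $size"
    if [ "$size" -gt "$THRESHOLD" ]; then
        echo "compacting $DB_FILE ($size bytes)"
        # ovsdb-server/compact is the same appctl command used
        # manually in the thread.
        ovs-appctl -t "$CTL_SOCK" ovsdb-server/compact
    fi
    sleep 1
done
```

Running something like this alongside the port creation/deletion test would reproduce the size curve from the attached image while also working around the missing SB auto-compaction.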
