[ovs-discuss] ovs-vswitchd memory consumption behavior

Fernando Casas Schössow casasfernando at outlook.com
Sun Mar 3 22:53:17 UTC 2019


After doing some reading, I'm wondering whether, instead of providing a core dump (I already collected one when the process reached around 700MB), it would be better to run OVS under Valgrind and share the log file.

What do you think? Any specific OVS or valgrind flags I should use?
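For reference, a leak hunt on a long-running daemon usually wants full leak checking and a per-process log file. The sketch below only prints a candidate invocation; the ovs-vswitchd arguments and socket path are assumptions, not a tested command line, so adjust them to however your distro starts the daemon (and stop the normally-started instance first). Some packagings can also wrap the daemon via ovs-ctl's wrapper option, if your version provides it.

```shell
# Candidate Valgrind flags for a long-running daemon leak hunt (sketch only;
# the ovs-vswitchd arguments below are assumptions, not a tested invocation):
#   --leak-check=full    report each leak with a backtrace at exit
#   --track-origins=yes  attribute uninitialised-value errors to their source
#   --log-file=...%p     one log file per process; %p expands to the PID
cmd="valgrind --leak-check=full --show-leak-kinds=definite,indirect \
--track-origins=yes --log-file=/tmp/ovs-vswitchd.valgrind.%p \
ovs-vswitchd --pidfile unix:/run/openvswitch/db.sock"
echo "$cmd"
```

Note that the leak summary is only written when the process exits, so the daemon has to be stopped cleanly after it has grown for a while.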


________________________________
From: Fernando Casas Schössow <casasfernando at outlook.com>
Sent: Saturday, March 2, 2019 10:59 AM
To: Ben Pfaff
Cc: ovs-discuss at openvswitch.org; ovs-dev at openvswitch.org
Subject: Re: [ovs-discuss] ovs-vswitchd memory consumption behavior


Sorry for the dup but the mail client didn't add ovs-dev, so resending.

On Sat, Mar 2, 2019 at 10:53 AM, Fernando Casas Schössow <casasfernando at outlook.com> wrote:
So I rebooted the server on Monday night; after around 5 days, memory usage keeps growing continuously and is already over 500MB. You can clearly see the behavior in the attached Munin graphs (RSS size + meminfo for the OVS process). Memory usage grows almost linearly over time.

The OVS log also shows memory usage increasing over time:

2019-02-25T21:48:37+01:00 vmsvr01 ovs-vswitchd: ovs|00035|memory|INFO|12164 kB peak resident set size after 10.0 seconds
2019-02-25T21:48:37+01:00 vmsvr01 ovs-vswitchd: ovs|00036|memory|INFO|handlers:5 ports:7 revalidators:3 rules:261 udpif keys:36
2019-02-25T22:52:21+01:00 vmsvr01 ovs-vswitchd: ovs|00087|memory|INFO|peak resident set size grew 52% in last 3823.5 seconds, from 12164 kB to 18496 kB
2019-02-25T22:52:21+01:00 vmsvr01 ovs-vswitchd: ovs|00088|memory|INFO|handlers:5 ports:20 revalidators:3 rules:517 udpif keys:68
2019-02-26T00:38:57+01:00 vmsvr01 ovs-vswitchd: ovs|00128|memory|INFO|peak resident set size grew 51% in last 6395.8 seconds, from 18496 kB to 28000 kB
2019-02-26T00:38:57+01:00 vmsvr01 ovs-vswitchd: ovs|00129|memory|INFO|handlers:5 ports:20 revalidators:3 rules:517 udpif keys:28
2019-02-26T04:22:36+01:00 vmsvr01 ovs-vswitchd: ovs|00188|memory|INFO|peak resident set size grew 51% in last 13419.8 seconds, from 28000 kB to 42256 kB
2019-02-26T04:22:36+01:00 vmsvr01 ovs-vswitchd: ovs|00189|memory|INFO|handlers:5 ports:20 revalidators:3 rules:517 udpif keys:137
2019-02-26T09:19:01+01:00 vmsvr01 ovs-vswitchd: ovs|00344|memory|INFO|peak resident set size grew 51% in last 17784.3 seconds, from 42256 kB to 63640 kB
2019-02-26T09:19:01+01:00 vmsvr01 ovs-vswitchd: ovs|00345|memory|INFO|handlers:5 ports:21 revalidators:3 rules:517 udpif keys:123
2019-02-26T16:19:24+01:00 vmsvr01 ovs-vswitchd: ovs|01027|memory|INFO|peak resident set size grew 50% in last 25223.3 seconds, from 63640 kB to 95584 kB
2019-02-26T16:19:24+01:00 vmsvr01 ovs-vswitchd: ovs|01028|memory|INFO|handlers:5 ports:21 revalidators:3 rules:517 udpif keys:75
2019-02-27T01:08:23+01:00 vmsvr01 ovs-vswitchd: ovs|01096|memory|INFO|peak resident set size grew 50% in last 31739.4 seconds, from 95584 kB to 143632 kB
2019-02-27T01:08:23+01:00 vmsvr01 ovs-vswitchd: ovs|01097|memory|INFO|handlers:5 ports:20 revalidators:3 rules:517 udpif keys:44
2019-02-27T18:49:17+01:00 vmsvr01 ovs-vswitchd: ovs|01933|memory|INFO|peak resident set size grew 50% in last 63654.0 seconds, from 143632 kB to 215704 kB
2019-02-27T18:49:17+01:00 vmsvr01 ovs-vswitchd: ovs|01934|memory|INFO|handlers:5 ports:20 revalidators:3 rules:517 udpif keys:21
2019-02-28T17:26:30+01:00 vmsvr01 ovs-vswitchd: ovs|03182|memory|INFO|peak resident set size grew 50% in last 81432.6 seconds, from 215704 kB to 323680 kB
2019-02-28T17:26:30+01:00 vmsvr01 ovs-vswitchd: ovs|03183|memory|INFO|handlers:5 ports:20 revalidators:3 rules:517 udpif keys:62
2019-03-02T04:33:27+01:00 vmsvr01 ovs-vswitchd: ovs|04201|memory|INFO|peak resident set size grew 50% in last 126417.0 seconds, from 323680 kB to 485776 kB
2019-03-02T04:33:27+01:00 vmsvr01 ovs-vswitchd: ovs|04202|memory|INFO|handlers:5 ports:20 revalidators:3 rules:517 udpif keys:28
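To turn the log lines above into a growth rate, a small standalone helper (my own sketch, not an OVS tool) can extract the before/after sizes and the interval from each "peak resident set size grew" line:

```python
import re

# Matches the size-growth part of an ovs-vswitchd memory INFO line, e.g.
# "...grew 50% in last 126417.0 seconds, from 323680 kB to 485776 kB"
PAT = re.compile(r"grew \d+% in last ([\d.]+) seconds, from (\d+) kB to (\d+) kB")

def growth_rate_kb_per_hour(line):
    """Return the implied RSS growth rate in kB/hour, or None if no match."""
    m = PAT.search(line)
    if not m:
        return None
    secs, before, after = float(m.group(1)), int(m.group(2)), int(m.group(3))
    return (after - before) / secs * 3600.0

line = ("2019-03-02T04:33:27+01:00 vmsvr01 ovs-vswitchd: ovs|04201|memory|INFO|"
        "peak resident set size grew 50% in last 126417.0 seconds, "
        "from 323680 kB to 485776 kB")
print(round(growth_rate_kb_per_hour(line), 1))
```

For the last interval in the log above, this works out to roughly 4.6 MB per hour, i.e. on the order of 100 MB per day, which is consistent with reaching ~500MB about five days after the reboot.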

I can collect a dump of the process (using gcore) and then try to run your scripts to see if they help extract some information from the dump, but as I mentioned, my debugging skills are very limited, and from what I read in the other thread, the scripts may not help without modifications.
For this reason I'm adding the OVS dev mailing list to the thread, in case any of the devs want to have a look at the dump and debug this issue.
If anyone wants access to the dump, reply to me and I will upload it to a server so it can be downloaded.

BTW, should I open an issue on GitHub about this?

Thanks!

On Sun, Feb 17, 2019 at 2:28 AM, Fernando Casas Schössow <casasfernando at outlook.com> wrote:
OK, let's do this.
I'm familiar with the part about getting a core dump of the process (gcore PID).
The analysis part of the dump I will have to learn, but I hope your scripts will help, at least in part.
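While learning the dump-analysis side, one OVS-agnostic first step is to see which memory mappings account for the resident set, straight from /proc (this complements the core dump rather than replacing it). Below is a sketch run against a made-up smaps excerpt; for the real process you would read /proc/<pid>/smaps of the running ovs-vswitchd instead.

```python
from collections import defaultdict

def rss_by_mapping(smaps_text):
    """Sum the Rss (kB) reported in /proc/<pid>/smaps-style text per mapping."""
    totals = defaultdict(int)
    current = "[anon]"
    for line in smaps_text.splitlines():
        # Mapping header lines start with a hex address range, e.g.
        # "00400000-00800000 r-xp 00000000 08:01 123 /usr/sbin/ovs-vswitchd";
        # field lines ("Rss:", "Size:", ...) start with an uppercase letter.
        if line[:1].isdigit() or line[:1] in "abcdef":
            parts = line.split()
            current = parts[5] if len(parts) > 5 else "[anon]"
        elif line.startswith("Rss:"):
            totals[current] += int(line.split()[1])  # value is in kB
    return dict(totals)

# Sample excerpt with made-up values, purely for illustration:
sample = """\
00400000-00800000 r-xp 00000000 08:01 123 /usr/sbin/ovs-vswitchd
Rss: 2048 kB
7f0000000000-7f0000400000 rw-p 00000000 00:00 0
Rss: 4096 kB
"""
print(rss_by_mapping(sample))
```

If most of the growth turns out to be in anonymous mappings (the heap), that points toward a leak or fragmentation inside the daemon rather than, say, a growing file mapping.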

I restarted OVS last night so it will take a couple of weeks to get to 1.5GB or so, then I will collect the dump and start the analysis.

Thanks in advance for helping with this Ben.


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.openvswitch.org/pipermail/ovs-discuss/attachments/20190303/6fb842cd/attachment.html>

