[ovs-discuss] ovs-vswitchd is sucking every cycle (100% CPU usage).

John Chludzinski john.chludzinski at vivaldi.net
Thu Sep 15 22:06:00 UTC 2016


I created a bond-port (and enabled LACP):

~# ovs-vsctl add-br b1
~# ovs-vsctl add-bond b1 bd1 enp8s0 enp0s26u1u3u1
~# ovs-vsctl set port bd1 lacp=active
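
For reference, the bond and LACP negotiation state can be checked with something like the following (standard ovs-appctl/ovs-vsctl calls; bd1 is the bond port created above):

~# ovs-appctl bond/show bd1
~# ovs-appctl lacp/show bd1
~# ovs-vsctl list port bd1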

On a Cisco router I connected interfaces enp8s0 & enp0s26u1u3u1 to 
ports 2 & 3.

On the Cisco router I enabled LACP on ports 2 & 3.

Then I ran 'top' and ovs-vswitchd was sucking every cycle (100% CPU 
usage).

I ran:

~# tail -f /var/log/openvswitch/ovs-vswitchd.log

...

2016-09-16T01:54:10.812Z|00408|poll_loop(revalidator12)|INFO|Dropped 24296 log messages in last 6 seconds (most recently, 0 seconds ago) due to excessive rate
2016-09-16T01:54:10.812Z|00409|poll_loop(revalidator12)|INFO|wakeup due to [POLLIN] on fd 43 (FIFO pipe:[18987]) at ofproto/ofproto-dpif-upcall.c:899 (66% CPU usage)
2016-09-16T01:54:16.890Z|00410|poll_loop(revalidator12)|INFO|Dropped 20991 log messages in last 6 seconds (most recently, 1 seconds ago) due to excessive rate
2016-09-16T01:54:16.890Z|00411|poll_loop(revalidator12)|INFO|wakeup due to 500-ms timeout at ofproto/ofproto-dpif-upcall.c:898 (55% CPU usage)
2016-09-16T01:54:22.812Z|00412|poll_loop(revalidator12)|INFO|Dropped 24745 log messages in last 6 seconds (most recently, 0 seconds ago) due to excessive rate
2016-09-16T01:54:22.812Z|00413|poll_loop(revalidator12)|INFO|wakeup due to [POLLIN] on fd 43 (FIFO pipe:[18987]) at lib/ovs-thread.c:304 (79% CPU usage)
2016-09-16T01:54:28.812Z|00414|poll_loop(revalidator12)|INFO|Dropped 24821 log messages in last 6 seconds (most recently, 0 seconds ago) due to excessive rate
2016-09-16T01:54:28.812Z|00415|poll_loop(revalidator12)|INFO|wakeup due to [POLLIN] on fd 43 (FIFO pipe:[18987]) at lib/ovs-thread.c:304 (66% CPU usage)
2016-09-16T01:54:34.812Z|00043|poll_loop(revalidator13)|INFO|Dropped 26469 log messages in last 6 seconds (most recently, 0 seconds ago) due to excessive rate

...

These messages repeated again, and again, and ...
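
In case it helps with diagnosis, the following should show how busy the upcall/revalidator threads are and how many datapath flows are installed (these are standard ovs-appctl commands on recent OVS releases, as far as I know):

~# ovs-appctl upcall/show
~# ovs-appctl dpctl/dump-flows | wc -l
~# ovs-appctl coverage/show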


---John


