[ovs-discuss] ovs-vswitchd 2.0 has high cpu usage

Chengyuan Li chengyuanli at gmail.com
Sat Nov 23 07:24:17 UTC 2013


Hi Ben,

Do you have a suggested maximum number of threads?

For the upcall dispatcher, a hash function is currently used to pick a
miss-handler, which amounts to load-balancing across all the threads.
Would it be possible to always select the first miss-handler, and only
when that handler's load/CPU usage exceeds a threshold, move on to the
second one, then the third, and so on?
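To make the question concrete, here is a minimal sketch (not OVS code) of the two selection policies. The handler count, the load metric, and the threshold value are all hypothetical, purely for illustration:

```python
# Illustrative sketch: two ways a dispatcher could pick a miss-handler
# thread for an upcall. All names and values here are hypothetical.

N_HANDLERS = 4
LOAD_THRESHOLD = 0.8  # hypothetical per-handler CPU-usage threshold (0.0..1.0)

def pick_by_hash(flow_hash, n_handlers=N_HANDLERS):
    """Current scheme: spread upcalls across all handlers by flow hash."""
    return flow_hash % n_handlers

def pick_fill_first(loads, threshold=LOAD_THRESHOLD):
    """Proposed scheme: always use the first handler whose load is below
    the threshold; fall back to the least-loaded handler if all are busy."""
    for i, load in enumerate(loads):
        if load < threshold:
            return i
    return min(range(len(loads)), key=lambda i: loads[i])

# Hash dispatch wakes every handler even at low traffic rates ...
print(sorted({pick_by_hash(h) for h in range(100)}))   # [0, 1, 2, 3]
# ... while fill-first keeps the work on handler 0 until it saturates.
print(pick_fill_first([0.2, 0.0, 0.0, 0.0]))           # 0
print(pick_fill_first([0.9, 0.3, 0.0, 0.0]))           # 1
```

The point of the fill-first policy would be that at low traffic only one thread (and its locks) is ever touched, rather than contending across all 28.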

Regards,
CY.


On Fri, Nov 22, 2013 at 11:40 PM, Ben Pfaff <blp at nicira.com> wrote:

> On Fri, Nov 22, 2013 at 03:30:54PM +0800, Chengyuan Li wrote:
> > I'm testing OVS 2.0 in bridged mode, configured with 28 and 4
> > miss-handler threads respectively. Sending the same traffic pattern
> > (short-lived connections) and the same amount of traffic to the VM
> > running on this vswitch, the 28-thread configuration consumes much
> > more CPU than the 4-thread one, even though the traffic volume should
> > be far below ovs-vswitchd's maximum capacity.
> >
> > - 28-thread
> > ovs-vswitchd cpu usage: 2741%
> > kernel missed packets: 130646/sec
> > host throughput total pps 139679/sec
> >
> > - 4-thread
> > ovs-vswitchd cpu usage: 510%
> > kernel missed packets: 130726/sec
> > host throughput total pps 135715/sec
> >
> > perf shows that 70% of cycles are spent in __ticket_spin_lock() in
> > the 28-thread case. A further perf lock run shows very heavy
> > contention on the futex_queues lock, meaning that
> > pthread_mutex_lock() calls in ovs-vswitchd trigger the high
> > spin_lock CPU usage in the kernel.
> >
> > Is this a known issue with the kernel futex implementation, or is it
> > something ovs-vswitchd can improve?
>
> ovs-vswitchd can definitely improve and we are in the midst of that
> work.  For now I'd suggest using only a small number of threads.
>

