[ovs-discuss] High CPU usage for ovs-vswitchd with flows 3069 and lost: 267425491

kevin parker kevin.parker092 at gmail.com
Tue May 28 06:31:58 UTC 2013


Thank you Justin,
                        Could you please tell me how I can generate this many
flows for testing, so that I can configure a threshold based on that? When I
tried iperf with 1B and UDP, the flow count didn't increase and there was no
packet loss. Does ovs-controller generate flows based on packet type, src,
and dst? In my case I am able to see only a single flow for my UDP traffic.
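
Would varying the destination port per stream give one datapath flow per
port? This is a rough sketch of what I was thinking of trying (10.0.0.2 is
just a placeholder for a receiver running "iperf -s -u"):

    # one UDP stream per destination port, which I assume would show up as
    # one datapath flow each
    for port in $(seq 5001 5100); do
        iperf -c 10.0.0.2 -u -p $port -b 1M -t 30 &
    done
    wait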

Regards,
kevin


On Mon, May 27, 2013 at 10:17 PM, Justin Pettit <jpettit at nicira.com> wrote:

> I'm guessing the traffic is bursty or ovs-vswitchd was busy doing other
> work and the queues overflowed.
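>
> If you want to confirm that, one thing you could try (just a rough check)
> is to watch the datapath counters alongside CPU usage and see whether the
> lost counter only jumps during bursts, for example:
>
>     while true; do
>         date
>         ovs-dpctl show | grep -A1 lookups   # hit/missed/lost plus flow count
>         sleep 5
>     done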
>
> --Justin
>
>
> On May 27, 2013, at 9:34 AM, kevin parker <kevin.parker092 at gmail.com>
> wrote:
>
> > Justin,
> >          Even at 85-90% CPU usage I am seeing the lost count increasing;
> > initially I faced a high lost count when ovs-vswitchd was at 100%. So what
> > can be the reason for dropped packets even when 10% of the CPU is free?
> >
> > regards,
> > kevin
> >
> >
> > On Mon, May 27, 2013 at 9:40 PM, Justin Pettit <jpettit at nicira.com>
> wrote:
> > We've made a lot of improvements in flow setup rate since version 1.4, so
> > upgrading to a more current version (we're on 1.10 now) will likely help.
> > We're currently working on multithreading the OVS userspace and adding
> > support for wildcarded flows in the kernel, which should substantially
> > improve flow setup.
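> >
> > If it helps, a quick way to double-check which version you are actually
> > running before upgrading:
> >
> >     ovs-vsctl --version
> >     ovs-vswitchd --version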
> >
> > --Justin
> >
> >
> > On May 27, 2013, at 12:59 AM, kevin parker <kevin.parker092 at gmail.com>
> wrote:
> >
> > > Hi,
> > >
> > >      Running OVS 1.4 on XenServer 6.0.2, but it sometimes uses very high
> > > CPU, around 100%.
> > >
> > > ovs-dpctl show
> > >
> > > system@xenbr5:
> > >       lookups: hit:2560723 missed:3742809 lost:0
> > >       flows: 5
> > >       port 0: xenbr5 (internal)
> > >       port 1: eth5
> > > system@xapi2:
> > >       lookups: hit:1660559495 missed:1241428 lost:0
> > >       flows: 11
> > >       port 0: xapi2 (internal)
> > >       port 1: eth7
> > >       port 2: eth6
> > > system@xenbr4:
> > >       lookups: hit:2539909 missed:3729876 lost:0
> > >       flows: 5
> > >       port 0: xenbr4 (internal)
> > >       port 1: eth4
> > > system@xapi3:
> > >       lookups: hit:20443295213 missed:26596588140 lost:267425491
> > >       flows: 3069
> > >       port 0: xapi3 (internal)
> > >       port 1: eth1
> > >       port 2: eth0
> > >       port 4: xapi4 (internal)
> > >       port 15: vif12.0
> > >       port 18: vif14.0
> > > system@xenbr2:
> > >       lookups: hit:1634980795 missed:166104910 lost:0
> > >       flows: 127
> > >       port 0: xenbr2 (internal)
> > >       port 1: eth2
> > > system@xenbr3:
> > >       lookups: hit:2450949145 missed:81360495 lost:0
> > >       flows: 118
> > >       port 0: xenbr3 (internal)
> > >       port 1: eth3
> > >       port 2: xapi6 (internal)
> > >       port 6: vif12.1
> > >       port 8: vif14.1
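> > >
> > > Dividing the counters above for xapi3, the lost count is roughly 1% of
> > > the miss count (267425491 / 26596588140 ~= 0.01), so about one in a
> > > hundred packets that had to go up to userspace appears to have been
> > > dropped, if I am reading these counters correctly:
> > >
> > >     echo "267425491 / 26596588140" | bc -l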
> > >
> > > Network usage:
> > >
> > > dstat -n
> > >
> > > -net/total-
> > >  recv  send
> > > 6475k 5736k
> > > 6575k 5646k
> > > 6767k 6347k
> > >
> > > Can someone please tell me how this can be fixed?
> > >
> > > Regards,
> > > Kevin
> > >
> > >
> > >
> > >
> > > _______________________________________________
> > > discuss mailing list
> > > discuss at openvswitch.org
> > > http://openvswitch.org/mailman/listinfo/discuss
> >
> >
>
>