[ovs-discuss] High CPU Usage by ovs-vswitchd and resulting packet loss

Kaushal Shubhank kshubhank at gmail.com
Tue Jun 5 05:40:39 UTC 2012


Hello,

We have a simple setup in which a server running a transparent proxy needs
to intercept HTTP traffic on port 80. We have installed Open vSwitch (1.4.1)
on the same server (running Ubuntu natty, 2.6.38-12-server, 64-bit) to feed
the proxy the matching packets while bridging all other traffic. The
functionality works properly, but the CPU usage is quite high (~30% for
20 Mbps of traffic). The total load we need to deploy under is around
350 Mbps, and as soon as we plug in, the CPU usage shoots up to 100% (on a
quad-core Intel(R) Xeon(R) CPU E5420 @ 2.50GHz), even when br0 is simply
bridging all packets through. Packet loss also starts to occur.
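For context, the bridge itself was created in the usual way, roughly as
follows (exact commands reconstructed; eth3/eth4 match the datapath output
further below):

$ ovs-vsctl add-br br0
$ ovs-vsctl add-port br0 eth3
$ ovs-vsctl add-port br0 eth4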

After reading similar discussions in previous threads, I enabled STP on the
bridge (*stp-enabled*) and increased the *flow-eviction-threshold* to
1000000, roughly as shown below.
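The settings were applied via ovs-vsctl along these lines (exact invocations
from memory; the flow-eviction-threshold lives in the bridge's other-config
column):

$ ovs-vsctl set Bridge br0 stp_enable=true
$ ovs-vsctl set Bridge br0 other-config:flow-eviction-threshold=1000000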
Still, the CPU load is high due to misses in the kernel flow table. I have
defined only the following flows:

$ ovs-ofctl dump-flows br0

NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=80105.621s, table=0, n_packets=61978784, n_bytes=7438892513, priority=100,tcp,in_port=1,tp_dst=80 actions=mod_dl_dst:00:e0:ed:15:24:4a,LOCAL
 cookie=0x0, duration=80105.501s, table=0, n_packets=49343241, n_bytes=113922939324, priority=100,tcp,dl_src=00:e0:ed:15:24:4a,tp_src=80 actions=output:1
 cookie=0x0, duration=518332.577s, table=0, n_packets=3052099665, n_bytes=2041603012562, priority=0 actions=NORMAL
 cookie=0x0, duration=80105.586s, table=0, n_packets=46209782, n_bytes=109671221356, priority=100,tcp,in_port=2,tp_src=80 actions=mod_dl_dst:00:e0:ed:15:24:4a,LOCAL
 cookie=0x0, duration=80105.601s, table=0, n_packets=40389137, n_bytes=5660094662, priority=100,tcp,dl_src=00:e0:ed:15:24:4a,tp_dst=80 actions=output:2

(where 00:e0:ed:15:24:4a is br0's MAC address)
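For completeness, the flows were added with ovs-ofctl along these lines
(reconstructed from the dump above; in_port 1 and 2 are eth3 and eth4 per
the datapath listing below):

$ ovs-ofctl add-flow br0 "priority=100,tcp,in_port=1,tp_dst=80,actions=mod_dl_dst:00:e0:ed:15:24:4a,LOCAL"
$ ovs-ofctl add-flow br0 "priority=100,tcp,dl_src=00:e0:ed:15:24:4a,tp_src=80,actions=output:1"
$ ovs-ofctl add-flow br0 "priority=100,tcp,in_port=2,tp_src=80,actions=mod_dl_dst:00:e0:ed:15:24:4a,LOCAL"
$ ovs-ofctl add-flow br0 "priority=100,tcp,dl_src=00:e0:ed:15:24:4a,tp_dst=80,actions=output:2"
$ ovs-ofctl add-flow br0 "priority=0,actions=NORMAL"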
$ ovs-dpctl show

system@br0:
 lookups: hit:3105457869 missed:792488043 lost:903955
 (note: these lost packets appeared under the 350 Mbps load; the count does not change at 20 Mbps)
 flows: 12251
 port 0: br0 (internal)
 port 1: eth3
 port 2: eth4

As far as we can tell, each miss here sends the packet up to ovs-vswitchd
in userspace for flow setup, and that userspace processing is what drives
up the CPU usage. Let me know if any other details about the setup are
required.

Is there anything else we can do to reduce CPU usage?
Can the flows above be improved in some way?
Is there any other configuration for production deployment that we have
missed?

Regards,
Kaushal