[ovs-dev] [PATCH] ofproto: Remove per-flow miss hash table from upcall handler.

Ryan Wilson wryan at vmware.com
Wed May 21 03:34:59 UTC 2014

I recently reproduced this today on servers109/110, using master branch with HEAD = 5a87054c2d832d0e10b30a1f223707acb8efbeb7. This commit is from yesterday, so it includes your fix (73a3c4757e596ff156d40f41496a0264373e5bc4).


From: Joe Stringer <joestringer at nicira.com>
Date: Tuesday, May 20, 2014 7:06 PM
To: Ryan Wilson <wryan at vmware.com>
Cc: Alex Wang <alexw at nicira.com>, "dev at openvswitch.org" <dev at openvswitch.org>, Ryan Wilson <wryan at nicira.com>
Subject: Re: [ovs-dev] [PATCH] ofproto: Remove per-flow miss hash table from upcall handler.

On 20 May 2014 17:25, Ryan Wilson <wryan at vmware.com> wrote:
OK, it turns out my OpenFlow rules weren't entirely correct (they were flooding all ports like a hub instead of forwarding properly). After adjusting them, I achieved equivalent performance with and without my upcall patch (both achieved 161-162 trans/second). I'll submit my other version of the patch.
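For reference, the hub-like behavior described above and its fix can be expressed with ovs-ofctl. This is a sketch, not the actual rules from the thread; the bridge name `br0` is an assumption:

```
# Hub behavior: a catch-all rule that floods every packet out all
# ports, which is roughly what the mis-written rules were doing.
ovs-ofctl add-flow br0 "priority=0,actions=flood"

# Proper forwarding: clear the table and fall back to the NORMAL
# action, which does standard MAC learning and per-port forwarding.
ovs-ofctl del-flows br0
ovs-ofctl add-flow br0 "priority=0,actions=normal"
```

With the flood rule, every packet is replicated to all ports and throughput suffers accordingly; the NORMAL action (or explicit per-port rules) sends each packet only where it needs to go.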

I also took a closer look at ovs-vswitchd.log and occasionally saw this error when running with the upcall patch:

2014-05-19T21:21:23.240Z|00014|dpif(revalidator97)|WARN|system at ovs-system: failed to flow_del (No such file or directory) dp_hash(0),recirc_id(0),skb_priority(0),in_port(4),skb_mark(0),eth(src=a0:36:9f:33:3a:c0,dst=a2:2e:02:45:b6:14),eth_type(0x0800),ipv4(src=,dst=,proto=6,tos=0,ttl=64,frag=no),tcp(src=54622,dst=41606),tcp_flags(0x010)

Hmm, that's odd. What version of userspace and what version of kernelspace were you running?

I pushed some patches a couple of days ago that should prevent this.
