[ovs-discuss] Megaflow Inspection
matan129 at gmail.com
Tue Jan 7 22:52:52 UTC 2020
Running ofproto/trace unfortunately does not explain why OVS chose to look
at these fields.
Using the same setup, for example:
# ovs-appctl ofproto/trace br0 in_port=4
0. in_port=4, priority 32768
Final flow: unchanged
Datapath actions: 3
It seems that the OpenFlow rule (not to be confused with the megaflow
entry) was correctly identified, and no other actions take place.
Since the relevant OpenFlow rule has nothing to do with the IP layer, I
don't understand why the megaflow is aware of it.
I'll try to look at the classifier/megaflow code (?) tomorrow, but I'd like
to know if there's a high-level way to avoid such trouble.
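For reference, the per-mask lookup cost suspected in point (2) of the quoted message can be sketched as a tuple-space search, which is (in simplified form) how the kernel megaflow table works: each distinct mask requires its own hash-table probe, so lookup cost grows with the number of masks. A toy model (plain Python, invented data structures, not OVS code):

```python
# Toy tuple-space search: the megaflow table keeps one hash table per
# mask, so a packet that misses the early masks pays one probe per mask.

def megaflow_lookup(key, tables):
    """tables: list of (mask, {masked_key: actions}).

    Returns (actions, probes), where probes counts hash lookups."""
    probes = 0
    for mask, table in tables:
        probes += 1
        masked = frozenset((f, v) for f, v in key.items() if f in mask)
        if masked in table:
            return table[masked], probes
    return None, probes

# One mask: every packet is resolved in a single probe.
one_mask = [({"in_port"}, {frozenset({("in_port", 2)}): "3"})]
# Nine extra masks with no matching entries: ten probes for the same packet.
many_masks = [({"eth_type"}, {}) for _ in range(9)] + one_mask

pkt = {"in_port": 2, "eth_type": 0x0800}
assert megaflow_lookup(pkt, one_mask) == ("3", 1)
assert megaflow_lookup(pkt, many_masks) == ("3", 10)
```

So even if every megaflow individually is correct, a proliferation of masks multiplies the per-packet work, which would fit the symptoms described below.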
On Wed, 8 Jan 2020 at 00:39, Ben Pfaff <blp at ovn.org> wrote:
> On Tue, Jan 07, 2020 at 10:44:57PM +0200, Matan Rosenberg wrote:
> > Actually, I do think I have a megaflow (or other caching) issue.
> > We use OVS for L2 packet forwarding; that is, given a packet, we don't want
> > OVS to look at protocols beyond the Ethernet layer.
> > Additionally, we use VXLAN to establish L2 overlay networks across
> > OVS servers.
> > Just to make things clear, these are some typical flow rules that you'd
> > see on a bridge:
> > - in_port=1,actions=2,3
> > - in_port=42,actions=FLOOD
> > - actions=NORMAL
> > No IP matching, conntrack, etc.
> > We're experiencing severe performance issues with OVS - in this use case,
> > it cannot handle more than a couple thousand packets/s.
> > After some exploring, I've noticed that the installed megaflows try to
> > match on fields that are not present in the rules, apparently for no reason.
> > Here's a complete example to reproduce, using OVS 2.12.0:
> > # ip link add dev a-blue type veth peer name a-red
> > # ip link add dev b-blue type veth peer name b-red
> > # ovs-vsctl add-br br0
> > # ovs-vsctl add-port br0 a-blue
> > # ovs-vsctl add-port br0 b-blue
> > # ovs-ofctl del-flows br0
> > # ovs-ofctl add-flow br0 in_port=a-blue,actions=b-blue
> > # ovs-ofctl add-flow br0 in_port=b-blue,actions=a-blue
> > After injecting ~100 random packets (IP, IPv6, TCP, UDP, ARP with random
> > addresses) into one of the red interfaces,
> > these are the installed flows:
> > # ovs-dpctl dump-flows
> > recirc_id(0),in_port(2),eth(),eth_type(0x0806), packets:54, bytes:2268,
> > used:1.337s, actions:3
> > recirc_id(0),in_port(2),eth(),eth_type(0x86dd),ipv6(frag=no), packets:28,
> > bytes:1684, used:1.430s, flags:S, actions:3
> > recirc_id(0),in_port(2),eth(),eth_type(0x0800),ipv4(frag=no), packets:15,
> > bytes:610, used:1.270s, flags:S, actions:3
> > As you can see, for some reason, OVS split the single relevant OpenFlow
> > rule into three separate megaflows, one for each eth_type (and even on
> > other fields - IP fragmentation?).
> > In my production scenario, the packets are even more diversified, and we
> > see OVS installing flows which match on even more fields, including
> > specific Ethernet and IP addresses.
> > This leads to a large number of flows that have extremely low hit rate -
> > each flow handles no more than ~100 packets (!) during its entire lifetime.
> > We suspect that this causes the performance penalty; either
> > 1) The EMC/megaflow table is full, so vswitchd upcalls are all over the
> > place, or
> > 2) The huge number of inefficient megaflows leads to terrible lookups
> > in the in-kernel megaflow table itself (due to the large number of masks).
> > In short: how can I just make OVS oblivious to these fields? Why does it
> > try to match on irrelevant fields?
> I can see how this would be distressing.
> You can use ofproto/trace with a few examples to help figure out why OVS
> is matching on more fields than you expect.