[ovs-dev] [PATCH for comment only] Allow datapath to pass packets back to the kernel for non-OVS handling

Chris Luke chrisy at flirble.org
Tue Dec 24 04:46:52 UTC 2013


Jesse Gross wrote (on Tue 24 Dec, 2013 at 03:05 GMT):
>On Mon, Dec 23, 2013 at 12:13 AM, Chris Luke <chrisy at flirble.org> wrote:
>> Open vSwitch handles the OFPP_NORMAL action by passing packets
>> into a simple layer 2 learning switch. This commit adds the option to have
>> packets passed back to the kernel as though Open vSwitch never touched
>> them. This allows, for instance, bridge member ports to have IP addresses
>> and for the host to run routing protocols on those ports.
>
>Can you give a full example of how you would use this in a hybrid setting?

It's pretty basic. Add a bunch of ports that already have configuration
on them (IP addresses) to a bridge, put the bridge in the normal-means-
send-it-back-to-the-kernel mode, and install flows that match specific
things: either the address space of the ports, or targeted protocols
such as ARP/OSPF/ISIS/LLDP/LDP/etc. I don't want any default learning-
switch behaviour between these ports; the bridge is then just a patch
panel driven only by explicit flows.
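
As a concrete sketch (bridge, port names and the address space are
invented, and however the pass-back mode ends up being selected is
patch-specific), the flow side looks something like:

    ovs-vsctl add-br br0
    ovs-vsctl add-port br0 eth1
    ovs-vsctl add-port br0 eth2

    # Remove the default learning-switch flow, then punch through only
    # what the host should see; with this patch, 'normal' would mean
    # "hand the packet back to the kernel" rather than L2 learning.
    ovs-ofctl del-flows br0
    ovs-ofctl add-flow br0 "priority=100,arp,actions=normal"
    ovs-ofctl add-flow br0 "priority=100,dl_type=0x88cc,actions=normal"  # LLDP
    ovs-ofctl add-flow br0 "priority=100,ip,nw_proto=89,actions=normal"  # OSPF
    ovs-ofctl add-flow br0 "priority=100,ip,nw_dst=192.0.2.0/24,actions=normal"
    ovs-ofctl add-flow br0 "priority=0,actions=drop"

Everything else is then left to explicit, controller-driven flows.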

What I am primarily trying to do is model layer 3 behaviours in a
controller, and more specifically take an incremental approach to
introducing OpenFlow to an existing wide-area IP network. This
hypothetical network does not run OVS (it's built with Cisco and
Juniper and so on), but to model it at scale I need lots of nodes,
hence the interest in OVS.

I do, however, also see value in the same methodology for more
server-focused uses, such as the LLDP case you mentioned. The other
area is the rookie mistake of adding your only port to a bridge whilst
ssh'ed in, then losing access to the machine and wondering why.

>I have some high level thoughts. Other people will probably have
>additional comments.
> * This type of concept has come up before and it's usually in the
>context of allowing an existing daemon like lldpad to be run on an OVS
>port. At a minimum, we would need to make sure that whatever we do
>here is compatible with that and ideally we would be able to
>essentially solve both problems at the same time. If you are planning
>on running routing protocol daemons then maybe it is already pretty
>similar.

Exactly. I have lldpd running on my test setup and it appears to work
fine, but I'll dig into it to be sure.

> * The OpenFlow "normal" concept is already pretty overloaded
>so I think it would be better if we didn't add more things to it. An
>explicit action would probably make it more programmable as well.

So, I originally played with a new action for the datapath to do this,
but it felt like I was touching a lot of code to make it work, which
is why I used a 'special' port number instead.
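
For reference, the idea amounts to reserving another well-known
datapath port number alongside OVSP_LOCAL. A hypothetical sketch (the
name and value of OVSP_NORMAL are mine, not necessarily the patch's):

    /* include/linux/openvswitch.h (sketch) */
    #define OVSP_LOCAL   ((__u32)0)    /* existing: the bridge's internal port */
    #define OVSP_NORMAL  ((__u32)-2)   /* hypothetical: "return to the kernel" */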

If you are suggesting a new OpenFlow action, I think that's a broader
conversation. 'Normal' in the OF protocol is imprecise, but it seems
to suggest that the device do whatever is 'normal' for it. OVS took
the approach of being a switch, whereas I want it to behave the way a
Linux machine normally would. I saw these as discrete modes of
operation, hence the approach of switching which behaviour is desired.

> * I'm not sure why we would need to remove any actions already
>applied to packets when we hit this action. It seems different from
>the OpenFlow action model.

I don't think I drop any actions with my code; if I indicated
otherwise, that was not intentional. In userspace it changes the
behaviour of the OFPP_NORMAL action to generate a datapath flow with
an output action to the special 'normal' port. All other actions
proceed as usual.
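
In ofproto-dpif-xlate terms the change is small; roughly like this,
reusing the hypothetical OVSP_NORMAL from above (the per-bridge flag
name is invented, and this is a sketch, not the patch itself):

    /* ofproto/ofproto-dpif-xlate.c, in xlate_output_action() (sketch) */
    case OFPP_NORMAL:
        if (ctx->xbridge->pass_normal_to_kernel) {  /* hypothetical knob */
            /* Emit a datapath output to the reserved port instead of
             * running the L2 learning-switch logic. */
            nl_msg_put_u32(&ctx->xout->odp_actions,
                           OVS_ACTION_ATTR_OUTPUT, OVSP_NORMAL);
        } else {
            xlate_normal(ctx);
        }
        break;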

The datapath merely sets a flag when the special port is encountered,
which instructs the hook to return the right value. The snag I came
across was that this means always cloning the skb we're given. Having
since looked at skb_clone(), it doesn't seem *that* expensive.
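
The hook's return value is the whole trick here. A minimal sketch of
the shape of it (ovs_vport_receive_flagged() is a hypothetical variant
that reports whether the special port was output to; the real patch
plumbs this differently):

    /* datapath/vport-netdev.c (sketch, not the actual patch) */
    static rx_handler_result_t netdev_frame_hook(struct sk_buff **pskb)
    {
        struct sk_buff *skb = *pskb;
        struct vport *vport = ovs_netdev_get_vport(skb->dev);
        bool pass_up = false;

        /* Hand OVS a clone so the original survives if the flow's
         * actions say "give it back to the kernel". */
        struct sk_buff *clone = skb_clone(skb, GFP_ATOMIC);
        if (likely(clone))
            ovs_vport_receive_flagged(vport, clone, &pass_up);

        if (pass_up)
            return RX_HANDLER_PASS;    /* normal kernel processing resumes */

        consume_skb(skb);              /* OVS took the clone */
        return RX_HANDLER_CONSUMED;
    }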

> * I agree that upcalls shouldn't work with this model - this seems
>like it could be an issue in making it work seamlessly.

What surprised me is that I expected the first packet in a flow to
fail. The kernel datapath hands it to userspace and forgets about it;
userspace does its thing, then programs a flow into the DP and tries
to send the packet. I expected this packet to get lost in the
bit bucket, but instead I see the host responding as though 'normal'
Linux processing *did* happen.

I had anticipated needing to do something to proactively program
flows into the datapath for this 'normal' action so that it catches
the first packet. It may be that I am seeing something else and this
may yet be required.

When I get a chance I'll do some profiling to properly identify what
is happening in this first-packet case.

Cheers,
Chris
-- 
== chrisy at flirble.org



