[ovs-dev] MPLS and VLAN QinQ patch

ravi kerur rkerur at gmail.com
Fri Jul 6 04:45:47 UTC 2012


On Thu, Jul 5, 2012 at 7:37 PM, Jesse Gross <jesse at nicira.com> wrote:

> On Thu, Jul 5, 2012 at 7:48 AM, ravi kerur <rkerur at gmail.com> wrote:
> > I understand code review is time-consuming, and please note it's a bit
> > time-consuming for me as well, since I have to retest/rebase every
> > time, and I am not sure whether the amount of time I am spending on
> > this is justifiable.
>
> Reviewing this code is frustrating because you don't really take into
> account the comments that I make.  There are changes along the lines
> that I ask but they generally don't address the heart of the matter.
> Until that changes, this isn't really a good use of either of our
> time.
>
> I would focus less on writing and sending out new code and spend more
> time on really understanding the problems at hand.  This isn't just in
> relation to the comments that I've made - your upstream patch for
> emulation of offloading doesn't even follow the pattern of existing
> code (for example, how do you know when to emulate?).
>

<rk> The upstream kernel already has offload code present; there is no
MPLS- or QinQ-specific handling that needs to be added. The only thing
that needs to be addressed is extracting the underlying L3/L4 protocol
information so that the correct GSO handler is called and offload is
handled correctly. I don't send random code for review; I always send
tested code, and the upstream code has been tested against kernel 3.5. I
don't make changes just because I am asked to make changes; I make them
if that's the right approach.
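To illustrate the parsing problem being described: MPLS has no
next-protocol field, so the inner L3 protocol is commonly inferred from
the first nibble of the payload after the bottom-of-stack label. A
minimal sketch of that walk (illustrative Python, not the actual kernel
code; the function name is hypothetical):

```python
# Illustrative sketch: walk an MPLS label stack and infer the inner
# L3 protocol so the right GSO/offload handler could be chosen.
# MPLS has no "next protocol" field; the bottom-of-stack (S) bit marks
# the last label, and the first nibble of the payload distinguishes
# IPv4 (4) from IPv6 (6).

import struct

ETH_P_IP = 0x0800
ETH_P_IPV6 = 0x86DD

def inner_l3_proto(mpls_payload: bytes):
    """Return the EtherType of the packet beneath an MPLS label stack,
    or None if it cannot be determined."""
    offset = 0
    while offset + 4 <= len(mpls_payload):
        (lse,) = struct.unpack_from("!I", mpls_payload, offset)
        offset += 4
        bottom_of_stack = (lse >> 8) & 0x1   # S bit
        if bottom_of_stack:
            if offset >= len(mpls_payload):
                return None
            nibble = mpls_payload[offset] >> 4
            if nibble == 4:
                return ETH_P_IP
            if nibble == 6:
                return ETH_P_IPV6
            return None
    return None

# Example: one label (S=1, TTL=64) followed by an IPv4 header byte 0x45.
lse = (100 << 12) | (1 << 8) | 64
pkt = struct.pack("!I", lse) + b"\x45" + b"\x00" * 19
```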

>
> > MPLS actions are implemented as per section 4.8 of the OF 1.1
> > specification; excerpts below:
> >
> > "The Apply-Actions instruction and the Packet-out message include an
> > action list. The semantic of the action list is identical to the
> > OpenFlow 1.0 specification. The actions of an action list are
> > executed in the order specified by the list, and are applied
> > immediately to the packet."
>
> As I mentioned before, we're talking about the kernel API, not the
> OpenFlow one.
>

<rk> I am explaining how MPLS actions are designed and why set
operations don't fit/work.
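The distinction being argued can be shown with a toy model (hypothetical
names, Python for brevity): under action-list semantics each action is
applied to the packet immediately, so a TTL operation placed between a
push and another push acts on whichever label is on top of the stack at
that point in the list, which set-style "write then apply once"
semantics cannot express.

```python
# Toy model of OpenFlow 1.1 action-list semantics: actions are applied
# in order, immediately, so a TTL operation between two pushes acts on
# the label that exists at that point in the list.

def apply_actions(stack, actions):
    """stack: list of {'label', 'ttl'} dicts, top of stack first."""
    for op, *args in actions:
        if op == "push":
            label, ttl = args
            stack.insert(0, {"label": label, "ttl": ttl})
        elif op == "pop":
            stack.pop(0)
        elif op == "dec_ttl":
            stack[0]["ttl"] -= 1
    return stack

stack = apply_actions([], [
    ("push", 100, 64),
    ("dec_ttl",),        # applies to label 100 immediately
    ("push", 200, 64),
    ("dec_ttl",),        # applies to label 200, not 100
])
```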

>
> > I looked at implementing TTL actions as set operations and didn't
> > find that suitable, as every TTL action is applied immediately, and
> > it won't integrate well with the other MPLS actions (push/pop and
> > set label), which are also applied immediately. Please note there can
> > be more than two MPLS push and pop actions configured.
>
> Supporting additional levels of labels is a perfect use case for
> recirculation.  I hope you realize by now that I'm not going to accept
> a patch even if it implements the immediate use case and is expedient.
>  It needs to be well designed with a reasonable effort at thinking
> about how it might be used in the future.  If the proper way to
> implement something is through recirculation then you must implement
> recirculation.  Since the API you are defining is permanent you can't
> do something else now and hope that someone else fixes it in the
> future.  That being said, I'm fine with not implementing certain
> portions now because that leaves us free to come back to it later once
> all of the pieces are there.  That's why I suggested that you
> implement these actions purely in userspace.  Partial MPLS or slow
> MPLS is better than no MPLS as long as we can improve it in the
> future.
>
<rk> The patch doesn't just address some immediate need; it addresses
future use cases while taking performance and optimization into account.
I have given reasons why the second approach (using sets) doesn't work
for MPLS TTL operations, and the third approach, using recirculation, is
overkill, unless your definition of packet recirculation is different
from the one I have worked with. The kernel/userspace datapath code in
the latest patch is the simplest way to handle TTL operations without a
performance impact. Perhaps you should post pseudo-code for the
over-simplified datapath TTL operations and let's discuss that.
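To make the comparison concrete, this is roughly what an in-line
(non-recirculating) TTL decrement on the top MPLS label stack entry
looks like; an illustrative sketch, not the patch's code. An LSE packs
label(20) / TC(3) / S(1) / TTL(8) into 32 bits, per RFC 3032:

```python
# Illustrative in-place dec_mpls_ttl on a 32-bit label stack entry
# (label:20 | TC:3 | S:1 | TTL:8).  No recirculation is needed: the
# TTL byte is rewritten directly, and an expired TTL is reported so
# the packet can be dropped or sent to the controller.

def dec_mpls_ttl(lse: int):
    """Return (new_lse, expired)."""
    ttl = lse & 0xFF
    if ttl <= 1:
        return lse, True          # expired: don't forward
    return (lse & ~0xFF) | (ttl - 1), False

lse = (100 << 12) | (1 << 8) | 64  # label 100, S=1, TTL=64
new_lse, expired = dec_mpls_ttl(lse)
```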


> > With respect to packet recirculation, I thought it was for handling
> > layer 3 after executing MPLS actions, or it could be general-purpose
> > as well, but focused on inner-header parsing or on further actions
> > executed based on the inner header after applying actions to the
> > outer header. Packet recirculation for TTL operations looks like
> > overkill to me, as you may be aware that packet recirculation has a
> > performance impact.
>
> Do you actually know what the performance impact is or are you just
> speculating?
>

<rk> It's not speculation; the packets/sec rate gets reduced. Are you
referring to multiple-table support as recirculation? It is not; that is
a way to handle ACL/FIB/QoS.
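For reference, recirculation in this discussion usually means: after an
action such as popping the outermost header, the packet is resubmitted
to the flow lookup so a fresh flow key (and a new flow entry) handles
the inner headers. A hypothetical sketch of that loop (all names
invented), which also shows where the extra per-pass cost comes from:

```python
# Hypothetical sketch of datapath recirculation: when the executed
# actions end with a "recirculate" result, re-extract the flow key
# from the (now modified) packet and do another flow-table lookup.
# Each pass costs an extra key extraction and lookup, which is the
# performance concern raised in this thread.

MAX_RECIRC = 4  # guard against lookup loops

def process_packet(packet, flow_table, extract_key, execute_actions):
    for _ in range(MAX_RECIRC):
        key = extract_key(packet)          # re-parse headers each pass
        actions = flow_table.get(key)
        if actions is None:
            return "upcall"                # miss: send to userspace
        done = execute_actions(packet, actions)
        if done != "recirculate":
            return done
    return "drop"                          # recirculation limit hit

# Tiny demo: a packet modeled as an outer-to-inner list of MPLS labels.
table = {100: "pop_and_recirc", 200: "deliver"}

def extract_key(pkt):
    return pkt[0] if pkt else None

def execute(pkt, action):
    if action == "pop_and_recirc":
        pkt.pop(0)
        return "recirculate"
    return action
```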

As mentioned earlier, in the best interest of time (both mine and
yours), I will have to re-evaluate whether to continue working on both
MPLS and QinQ and will get back to you. It's just not working out: I
have done everything that has been asked, and where something doesn't
make technical sense to me, I have always resorted to discussion and
given the reasons why it is implemented that way.
