[ovs-discuss] Is there an option to disable metadata propagation on patch ports?

Ben Pfaff blp at ovn.org
Fri Apr 6 15:43:48 UTC 2018


Yes.

On Fri, Apr 06, 2018 at 12:49:21PM +0200, Alan Kayahan wrote:
> Thanks Ben. So then the flow struct propagates along with all of its
> members except metadata, tunnel, and regs, correct?
> 
> 
> 2018-04-05 21:30 GMT+02:00 Ben Pfaff <blp at ovn.org>:
> 
> > compose_output_action__(), in ofproto/ofproto-dpif-xlate.c, clears
> > metadata for patch port traversals in the block that begins
> > "if (xport->peer)", with this code:
> >
> >         flow->metadata = htonll(0);
> >         memset(&flow->tunnel, 0, sizeof flow->tunnel);
> >         memset(flow->regs, 0, sizeof flow->regs);
> >
> >
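A minimal standalone model of the clearing step quoted above, built around an
invented toy struct (this is not OVS code; only the member names metadata,
tunnel, and regs mirror the real struct flow members that get cleared):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy stand-in for the parts of struct flow discussed in this thread.
     * The layout is invented; only metadata, tunnel and regs correspond to
     * the members cleared in compose_output_action__(). */
    struct toy_tunnel { uint64_t tun_id; uint32_t ip_dst; };
    struct toy_flow {
        uint64_t metadata;          /* OpenFlow metadata register */
        struct toy_tunnel tunnel;   /* tunnel context */
        uint32_t regs[8];           /* NXM registers reg0..reg7 */
        uint32_t custom_field;      /* e.g. a new 32-bit field added to flow.h */
    };

    /* Mimics what the patch-port branch does before translation continues in
     * the peer bridge: metadata, tunnel and regs are wiped; everything else,
     * including a newly added field, is carried across unchanged. */
    static void clear_patch_port_metadata(struct toy_flow *flow)
    {
        flow->metadata = 0;
        memset(&flow->tunnel, 0, sizeof flow->tunnel);
        memset(flow->regs, 0, sizeof flow->regs);
    }

    int main(void)
    {
        struct toy_flow flow = { .metadata = 42, .regs = { 7 },
                                 .custom_field = 3 };

        clear_patch_port_metadata(&flow);
        printf("metadata=%u reg0=%u custom_field=%u\n",
               (unsigned) flow.metadata, flow.regs[0], flow.custom_field);
        /* Prints: metadata=0 reg0=0 custom_field=3 */
        return 0;
    }

Run on its own, this prints metadata=0 reg0=0 custom_field=3, i.e. the three
cleared members are zeroed while any other member crosses the patch port
unchanged, which is what the "Yes" at the top of this thread confirms.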
> > On Thu, Apr 05, 2018 at 08:52:31PM +0200, Alan Kayahan wrote:
> > > That... is my question and theory :) How can I see that? Can you point me
> > > to the code where that decision is made?
> > >
> > > 2018-04-05 20:39 GMT+02:00 Ben Pfaff <blp at ovn.org>:
> > >
> > > > What metadata is propagating?
> > > >
> > > > On Thu, Apr 05, 2018 at 08:14:51PM +0200, Alan Kayahan wrote:
> > > > > I introduced a new 32-bit field in flow.h to match on. A push_header
> > > > > action appends a header containing TLVs to the packet and sets this
> > > > > field for the first time. An increment action sets the field to a
> > > > > value that resides in the first TLV; if increment is called again,
> > > > > the field is set to the value in the second TLV, and so on.
> > > > >
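A rough standalone sketch of the push_header/increment design described above.
Every name and structure here is hypothetical (none of this is OVS code); it
only illustrates the per-packet semantics, where the n-th call to increment
copies the value of the n-th TLV into the 32-bit field:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical TLV layout for the pushed header (not an OVS structure). */
    struct tlv { uint8_t type; uint8_t len; uint32_t value; };

    struct pushed_header {
        unsigned tlv_count;
        struct tlv tlvs[4];
    };

    /* Per-packet model of the increment action: walk the TLVs and copy the
     * value of the n-th one into the 32-bit field. "calls_so_far" stands in
     * for whatever tracks how many times increment has already run on this
     * packet's path through the bridges. */
    static void increment(const struct pushed_header *hdr,
                          unsigned calls_so_far, uint32_t *field)
    {
        for (unsigned i = 0; i < hdr->tlv_count; i++) {
            if (i == calls_so_far) {
                *field = hdr->tlvs[i].value;
                return;
            }
        }
    }

    int main(void)
    {
        struct pushed_header hdr = {
            .tlv_count = 3,
            .tlvs = { { 1, 4, 100 }, { 1, 4, 200 }, { 1, 4, 300 } },
        };
        uint32_t field = 0;

        increment(&hdr, 0, &field);   /* increment in br1: field = first TLV  */
        printf("after increment in br1: field=%u\n", field);
        increment(&hdr, 1, &field);   /* next increment: field = second TLV   */
        printf("after second increment: field=%u\n", field);
        return 0;
    }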
> > > > > I have a chain of bridges br0-br1-br2 connected via patch ports. In
> > > > > br0, the flow rule calls push_header and outputs to br1. In br1, the
> > > > > flow rule matches on the port connected to br0 plus the 32-bit field;
> > > > > the action for this rule is increment, then output to br2. Just like
> > > > > br1, br2 matches on the port from br1 plus the 32-bit field for the
> > > > > value in the TLV.
> > > > >
> > > > > Everything works well until br2, but there the match on the field
> > > > > for the value in the TLV doesn't work. If I remove the match on the
> > > > > field at br2, just use in_port, and redirect the traffic to a packet
> > > > > hex printer, I do see that the value in the TLV has been set
> > > > > correctly at the offset of this 32-bit field.
> > > > >
> > > > > The increment action depends on iterating the TLVs in the header for
> > > > > each packet, so it runs in the datapath and cannot update the flow
> > > > > context. So if the flow metadata is propagating (which seems to be
> > > > > the only explanation), the new field is set to the old value, which
> > > > > is the culprit. Perhaps a better approach is to create its own
> > > > > context, just like NAT: iterate the TLVs and populate the context
> > > > > with all available values when push_header is first called, then
> > > > > redesign the increment action so that it just pops the next value
> > > > > from the context and emits the data-plane action. That eliminates the
> > > > > per-packet TLV iteration in the datapath.
> > > > >
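A standalone sketch of that proposed redesign, with hypothetical names (again
not OVS code): the TLV values are cached in a dedicated context once, at
push_header time, and each increment just pops the next cached value instead
of walking the TLVs for every packet.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical per-flow context, populated once when push_header runs. */
    struct tlv_ctx {
        uint32_t values[8];
        unsigned count;
        unsigned next;      /* index of the value the next increment will use */
    };

    /* Called once at push_header time: iterate the TLV values that will be
     * written into the header and cache them in the context. */
    static void tlv_ctx_init(struct tlv_ctx *ctx, const uint32_t *tlv_values,
                             unsigned n)
    {
        ctx->count = n < 8 ? n : 8;
        ctx->next = 0;
        for (unsigned i = 0; i < ctx->count; i++) {
            ctx->values[i] = tlv_values[i];
        }
    }

    /* The redesigned increment: no per-packet TLV walk, just pop the next
     * cached value for the 32-bit field (and for whatever datapath action
     * gets emitted). */
    static int tlv_ctx_pop(struct tlv_ctx *ctx, uint32_t *field)
    {
        if (ctx->next >= ctx->count) {
            return -1;              /* no TLV values left */
        }
        *field = ctx->values[ctx->next++];
        return 0;
    }

    int main(void)
    {
        const uint32_t tlvs[] = { 100, 200, 300 };
        struct tlv_ctx ctx;
        uint32_t field = 0;

        tlv_ctx_init(&ctx, tlvs, 3);    /* at push_header time              */
        tlv_ctx_pop(&ctx, &field);      /* increment in br1: field = 100    */
        tlv_ctx_pop(&ctx, &field);      /* next increment:   field = 200    */
        printf("field=%u values left=%u\n", field, ctx.count - ctx.next);
        return 0;
    }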
> > > > > But just to save the day, do you have a trick to stop this
> > > > > propagation? Or do you think the problem might be something else?
> > > > >
> > > > > Thanks!
> > > > >
> > > > > 2018-04-05 19:41 GMT+02:00 Ben Pfaff <blp at ovn.org>:
> > > > >
> > > > > > On Thu, Apr 05, 2018 at 07:31:35PM +0200, Alan Kayahan wrote:
> > > > > > > OVS patch ports allow the propagation of metadata (e.g. the flow
> > > > > > > context) across the connected switches.
> > > > > > >
> > > > > > > Is there an option to disable the metadata propagation feature? I
> > > > > > > need this for my research to benchmark certain behavior. Or do
> > > > > > > patch ports become nothing but veth pairs when this feature is
> > > > > > > disabled?
> > > > > >
> > > > > > Patch ports zero out most metadata.  What additional metadata do
> > > > > > you want them to clear?
> > > > > >
> > > >
> >

