[ovs-discuss] custom field in OVS flow rules

Eryk Schiller schiller at inf.unibe.ch
Fri Feb 24 10:45:52 UTC 2017


Dear Ben,

Dear all,

I have implemented the kernel/userspace logic, and it seems to be 
partially working. I followed the implementation of tcp_flags, ct_mark, 
etc. in previous commits to OVS. The gtp_teid field matches the TEID 
(Tunnel Endpoint Identifier) of the GPRS Tunneling Protocol (GTP), which 
is carried as UDP payload.
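
For reference, the userspace side roughly follows the ct_mark pattern; a 
minimal sketch (the field type and helper below are illustrative and may 
not match my patch exactly):

/* lib/flow.h: the new field added to struct flow next to the other L4
 * metadata (the real patch also touches meta-flow, nx-match, and the
 * FLOW_WC_SEQ bookkeeping). */
ovs_be32 gtp_teid;              /* GTP tunnel endpoint identifier. */

/* lib/match.c: exact-match helper, analogous to match_set_ct_mark(). */
void
match_set_gtp_teid(struct match *match, ovs_be32 teid)
{
    match->wc.masks.gtp_teid = OVS_BE32_MAX;
    match->flow.gtp_teid = teid;
}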

I can install my rules:

ovs-ofctl add-flow ovs-br 
udp,in_port=4,tp_src=2152,tp_dst=2152,gtp_teid=0x1,action=NORMAL

ovs-ofctl add-flow ovs-br 
udp,in_port=4,tp_src=2152,tp_dst=2152,gtp_teid=0xcccccccc,action=NORMAL

and when I sent two simultaneous flows (10000 packets each) whose 
gtp_teids matched the installed rules (i.e., 0x00000001 and 0xcccccccc), 
they were matched correctly by the switch (10000 packets on the 
0x00000001 rule and 10000 on the 0xcccccccc rule):

ovs-ofctl dump-flows ovs-br
NXST_FLOW reply (xid=0x4):
  cookie=0x0, duration=190.486s, table=0, n_packets=10000, 
n_bytes=1340000, idle_age=5, 
udp,in_port=4,tp_src=2152,tp_dst=2152,gtp_teid=1 actions=NORMAL
  cookie=0x0, duration=185.518s, table=0, n_packets=10000, 
n_bytes=1340000, idle_age=5, 
udp,in_port=4,tp_src=2152,tp_dst=2152,gtp_teid=3435973836 actions=NORMAL
  cookie=0x0, duration=36390.336s, table=0, n_packets=81, n_bytes=10934, 
idle_age=0, priority=0 actions=NORMAL

I repeated this experiment with another pair of flows (i.e., 0x00000001 
and 0xaaaaaaaa), and in this case the behavior is strange:

ovs-ofctl dump-flows ovs-br
NXST_FLOW reply (xid=0x4):
  cookie=0x0, duration=34.679s, table=0, n_packets=353, n_bytes=47302, 
idle_age=2, udp,in_port=4,tp_src=2152,tp_dst=2152,gtp_teid=1 actions=NORMAL
  cookie=0x0, duration=31.775s, table=0, n_packets=0, n_bytes=0, 
idle_age=31, udp,in_port=4,tp_src=2152,tp_dst=2152,gtp_teid=3435973836 
actions=NORMAL
  cookie=0x0, duration=42.545s, table=0, n_packets=19653, 
n_bytes=2633670, idle_age=2, priority=0 actions=NORMAL

A few packets matching the 0x00000001 rule went through; eventually, 
however, the default rule (i.e., the one without a specific match) 
started matching my packets with gtp_teid 0x00000001, even though they 
should have been matched by the first rule. The rule with 0xcccccccc has 
0 matches, as there are no packets with that gtp_teid (the 0xaaaaaaaa 
packets are matched by the default rule from the beginning). Do you know 
what might be the reason for this behavior?


Best regards,

Eryk Schiller

On 02/09/2017 07:38 PM, Ben Pfaff wrote:
> To add support for a new field to the datapath, I'd look for another
> commit that does that and use it as a template.
>
> To make every packet go to userspace, the most general way is to make
> odp_flow_key_to_flow() return ODP_FIT_TOO_LITTLE for flows that should
> have the field.  For example, if your new field is present in every
> packet, then return ODP_FIT_TOO_LITTLE for every packet; if your new
> field is present in every TCP packet, then it's better to just return
> ODP_FIT_TOO_LITTLE for TCP packets.
>
> As an alternative you can add one of the SLOW_* bits to ctx->xout->slow
> during flow translation, look around ofproto-dpif-xlate.c for examples.
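
A hedged sketch of the first suggestion (not actual OVS code): degrade 
the fitness reported by odp_flow_key_to_flow() in lib/odp-util.c for 
packets that should carry the field, here identified by the GTP-U UDP 
port purely as an assumption:

/* Force such packets back to userspace by reporting the kernel's flow
 * key as insufficient for them. */
if (flow->nw_proto == IPPROTO_UDP && flow->tp_dst == htons(2152)) {
    return ODP_FIT_TOO_LITTLE;
}

/* The alternative mentioned above: during translation in
 * ofproto-dpif-xlate.c, slow-path the flow instead, e.g.
 *     ctx->xout->slow |= SLOW_ACTION;
 */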
>
> On Thu, Feb 09, 2017 at 06:12:01PM +0100, Eryk Schiller wrote:
>> Dear Ben,
>>
>> Yes, done. Thx.
>>
>> Another question: it seems that only the first packet of the flow is
>> appropriately matched through my user-space rule, while the rest are pretty
>> much ignored. Is there any manual about extending the kernel datapath to
>> appropriately cache my field?
>>
>> I think that for the moment, I could also live without the cache. Is there
>> any way to switch it off so that all packets (for example for a certain
>> flow) always go through ovs-vswitchd?
>>
>> Best regards,
>>
>> Eryk Schiller
>>
>>
>> Quoting Ben Pfaff <blp at ovn.org>:
>>
>>> On Wed, Feb 08, 2017 at 10:29:29PM +0100, Eryk Schiller wrote:
>>>> Dear all,
>>>>
>>>> I am writing this post, because I saw a discussion from the beginning of
>>>> 2016 about the implementation of an additional matching field of UDP in OVS.
>>>> Maybe you can help with a similar implementation.
>>>>
>>>> The discussion is here,
>>>>
>>>> https://mail.openvswitch.org/pipermail/ovs-discuss/2016-April/040894.html
>>>>
>>>> and I found another similar patch implementing some IGMP functionality here.
>>>>
>>>> https://patchwork.ozlabs.org/patch/555337/
>>>>
>>>> I went through the FAQ, discussion, and the aforementioned patch, and
>>>> implemented a new custom user-space matching rule.
>>>>
>>>> However, when I add my field to flow rules, i.e.,
>>>>
>>>> ovs-ofctl --verbose -O OpenFlow15 add-flow ovs-br
>>>> in_port=4,ip,udp,my_field=0x6,action=normal
>>>> 2017-02-08T20:42:13Z|00001|hmap|DBG|lib/shash.c:112: 7 nodes in bucket (64
>>>> nodes, 32 buckets)
>>>> 2017-02-08T20:42:13Z|00002|hmap|DBG|lib/shash.c:112: 6 nodes in bucket (128
>>>> nodes, 64 buckets)
>>>> 2017-02-08T20:42:13Z|00003|hmap|DBG|lib/shash.c:112: 7 nodes in bucket (128
>>>> nodes, 64 buckets)
>>>> 2017-02-08T20:42:13Z|00004|hmap|DBG|lib/shash.c:112: 7 nodes in bucket (128
>>>> nodes, 64 buckets)
>>>> 2017-02-08T20:42:13Z|00005|stream_unix|DBG|/var/run/openvswitch/ovs-br:
>>>> connection failed (No such file or directory)
>>>> 2017-02-08T20:42:13Z|00006|ofctl|DBG|connecting to
>>>> unix:/var/run/openvswitch/ovs-br.mgmt
>>>> 2017-02-08T20:42:13Z|00007|hmap|DBG|lib/ofp-msgs.c:1143: 6 nodes in bucket
>>>> (128 nodes, 64 buckets)
>>>> 2017-02-08T20:42:13Z|00008|hmap|DBG|lib/ofp-msgs.c:1143: 6 nodes in bucket
>>>> (256 nodes, 128 buckets)
>>>> 2017-02-08T20:42:13Z|00009|hmap|DBG|lib/ofp-msgs.c:1143: 7 nodes in bucket
>>>> (512 nodes, 256 buckets)
>>>> 2017-02-08T20:42:13Z|00010|hmap|DBG|lib/ofp-msgs.c:1143: 8 nodes in bucket
>>>> (512 nodes, 256 buckets)
>>>> 2017-02-08T20:42:13Z|00011|hmap|DBG|lib/ofp-msgs.c:1143: 6 nodes in bucket
>>>> (512 nodes, 256 buckets)
>>>> 2017-02-08T20:42:13Z|00012|hmap|DBG|lib/ofp-msgs.c:1143: 7 nodes in bucket
>>>> (512 nodes, 256 buckets)
>>>> 2017-02-08T20:42:13Z|00013|vconn|DBG|unix:/var/run/openvswitch/ovs-br.mgmt:
>>>> sent (Success): OFPT_HELLO (OF1.5) (xid=0x1):
>>>> version bitmap: 0x06
>>>> 2017-02-08T20:42:13Z|00014|vconn|DBG|unix:/var/run/openvswitch/ovs-br.mgmt:
>>>> received: OFPT_HELLO (OF1.5) (xid=0x39):
>>>> version bitmap: 0x01, 0x02, 0x03, 0x04, 0x05, 0x06
>>>> 2017-02-08T20:42:13Z|00015|vconn|DBG|unix:/var/run/openvswitch/ovs-br.mgmt:
>>>> negotiated OpenFlow version 0x06 (we support version 0x06, peer supports
>>>> version 0x06 and earlier)
>>>> 2017-02-08T20:42:13Z|00016|vconn|DBG|unix:/var/run/openvswitch/ovs-br.mgmt:
>>>> sent (Success): OFPT_FLOW_MOD (OF1.5) (xid=0x2): ADD *udp,in_port=4*
>>>> actions=NORMAL
>>>> 2017-02-08T20:42:13Z|00017|vconn|DBG|unix:/var/run/openvswitch/ovs-br.mgmt:
>>>> sent (Success): OFPT_BARRIER_REQUEST (OF1.5) (xid=0x3):
>>>> 2017-02-08T20:42:13Z|00018|poll_loop|DBG|wakeup due to 0-ms timeout
>>>> 2017-02-08T20:42:13Z|00019|poll_loop|DBG|wakeup due to [POLLIN] on fd 4
>>>> (<->/var/run/openvswitch/ovs-br.mgmt) at lib/stream-fd.c:155
>>>> 2017-02-08T20:42:13Z|00020|vconn|DBG|unix:/var/run/openvswitch/ovs-br.mgmt:
>>>> received: OFPT_BARRIER_REPLY (OF1.5) (xid=0x3):
>>>>
>>>> There is only an ADD for udp,in_port=4, so my_field seems to be ignored, but
>>>> surprisingly the switch overall does what I want. Moreover, dump-flows does
>>>> not recognize my_field properly either.
>>>>
>>>> The question is: what is the proper way to include a custom my_field in OF
>>>> messages so that I can use it with ovs-ofctl add-flow and ovs-ofctl
>>>> dump-flows? Is there any additional answer to that in the FAQ?
>>> It sounds like your new field just isn't being printed as part of
>>> formatting a match.  Make sure that you added it to match_format() in
>>> lib/match.c.
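
For example, the entry in match_format() can look roughly like the 
following sketch, assuming my_field is a 32-bit big-endian field and 
using the local names f, wc, and s seen elsewhere in that function:

/* Print the field only when it is actually part of the match
 * (mask non-zero); sketch only, names are assumptions. */
if (wc->masks.my_field) {
    ds_put_format(s, "my_field=%#"PRIx32",", ntohl(f->my_field));
}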