[ovs-dev] [PATCH 4/4] netdev-tc-offloads: Add support for action set

Paul Blakey paulb at mellanox.com
Tue Aug 15 08:19:05 UTC 2017



On 15/08/2017 10:28, Paul Blakey wrote:
> 
> 
> On 15/08/2017 04:04, Joe Stringer wrote:
>> On 8 August 2017 at 07:21, Roi Dayan <roid at mellanox.com> wrote:
>>> From: Paul Blakey <paulb at mellanox.com>
>>>
>>> Implement support for offloading ovs action set using
>>> tc header rewrite action.
>>>
>>> Signed-off-by: Paul Blakey <paulb at mellanox.com>
>>> Reviewed-by: Roi Dayan <roid at mellanox.com>
>>> ---
>>
>> <snip>
>>
>>> @@ -274,6 +346,48 @@ netdev_tc_flow_dump_destroy(struct netdev_flow_dump *dump)
>>>       return 0;
>>>   }
>>>
>>> +static void
>>> +parse_flower_rewrite_to_netlink_action(struct ofpbuf *buf,
>>> +                                       struct tc_flower *flower)
>>> +{
>>> +    char *mask = (char *) &flower->rewrite.mask;
>>> +    char *data = (char *) &flower->rewrite.key;
>>> +
>>> +    for (int type = 0; type < ARRAY_SIZE(set_flower_map); type++) {
>>> +        char *put = NULL;
>>> +        size_t nested = 0;
>>> +        int len = ovs_flow_key_attr_lens[type].len;
>>> +
>>> +        if (len <= 0) {
>>> +            continue;
>>> +        }
>>> +
>>> +        for (int j = 0; j < ARRAY_SIZE(set_flower_map[type]); j++) {
>>> +            struct netlink_field *f = &set_flower_map[type][j];
>>> +
>>> +            if (!f->size) {
>>> +                break;
>>> +            }
>>
>> Seems like maybe there's similar feedback here to the previous patch
>> wrt sparsely populated array set_flower_map[].
>>
> 
> I'll answer there.

Scratch that, I actually meant to write it here.

With this map we have fast (direct lookup) access on the put path 
(parse_put_flow_set_masked_action), and slow access on the reverse dump 
path (parse_flower_rewrite_to_netlink_action), which iterates over all 
indices, although it skips the empty ones quickly.

Changing this to a sparse map would make each lookup slower on both 
paths, but dumping overall would be faster. We didn't write these maps 
for performance but for the convenience of adding new supported 
attributes.

What we can do for both maps is:

1) Using an ovsthread_once, keep fast (direct lookup) access on both 
paths by generating a reverse map at init time and using that instead 
for dumping (see the sketch after this list).
2) Rewrite the map in a non-sparse way and use it for both paths.
3) Rewrite it using a switch case; the logic is currently pretty much 
the same for all attributes.
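
Something like the following for option (1) (a rough, untested sketch; 
the rev_flower_map/rev_flower_map_n names and the array bound are made 
up, and it builds on the set_flower_map[]/struct netlink_field 
definitions from this patch):

    /* Dense list of the populated set_flower_map[] entries, built once
     * so the dump path doesn't have to walk the empty slots. */
    static struct {
        int type;                    /* OVS_KEY_ATTR_* index. */
        struct netlink_field *field; /* Entry in set_flower_map[type]. */
    } rev_flower_map[64];            /* Arbitrary bound for the sketch. */
    static int rev_flower_map_n;

    static void
    build_rev_flower_map(void)
    {
        static struct ovsthread_once once = OVSTHREAD_ONCE_INITIALIZER;

        if (ovsthread_once_start(&once)) {
            for (int type = 0; type < ARRAY_SIZE(set_flower_map); type++) {
                for (int j = 0; j < ARRAY_SIZE(set_flower_map[type]); j++) {
                    struct netlink_field *f = &set_flower_map[type][j];

                    if (!f->size) {
                        break;
                    }
                    rev_flower_map[rev_flower_map_n].type = type;
                    rev_flower_map[rev_flower_map_n].field = f;
                    rev_flower_map_n++;
                }
            }
            ovsthread_once_done(&once);
        }
    }

The dump path would then loop only over 
rev_flower_map[0..rev_flower_map_n), while the put path keeps indexing 
set_flower_map[] directly.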

What do you think?



> 
>>> +
>>> +            if (!is_all_zeros(mask + f->flower_offset, f->size)) {
>>> +                if (!put) {
>>> +                    nested = nl_msg_start_nested(buf,
>>> +                                                 OVS_ACTION_ATTR_SET_MASKED);
>>> +                    put = nl_msg_put_unspec_zero(buf, type, len * 2);
>>
>> Do we ever have multiple of these attributes in a single nested
>> OVS_ACTION_ATTR_SET_MASKED? If so, won't the following memcpys avoid
>> putting the netlink header for the subsequent sets?
>>
> 
>   OVS_ACTION_ATTR_SET_MASKED always has a single nested OVS_KEY_ATTR_* 
> attribute, so we put it (all zeros) the first time we find a 
> sub-attribute (e.g. ipv4_dst of ipv4) that isn't zero, and use the 
> memcpys below to overwrite the non-zero sub-attributes.
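
To illustrate (a rough sketch, not code from the patch, using 
OVS_KEY_ATTR_IPV4 / struct ovs_key_ipv4 just as an example):

    size_t nested;
    char *put;
    int len = sizeof(struct ovs_key_ipv4);

    nested = nl_msg_start_nested(buf, OVS_ACTION_ATTR_SET_MASKED);
    /* A single zero-filled OVS_KEY_ATTR_* payload: the key goes in the
     * first len bytes, the mask in the second len bytes. */
    put = nl_msg_put_unspec_zero(buf, OVS_KEY_ATTR_IPV4, len * 2);
    /* The memcpys then overwrite only the non-zero parts, e.g. ipv4_dst
     * in both the key half and the mask half. */
    nl_msg_end_nested(buf, nested);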
> 
>>> +                }
>>> +
>>> +                memcpy(put + f->offset, data + f->flower_offset, f->size);
>>> +                memcpy(put + len + f->offset,
>>> +                       mask + f->flower_offset, f->size);
>>
>> Could we combine these with the nl_msg_put_unspec() above?
>>
> 
> We'd have to accumulate the data as we go over the inner loop and then 
> write it.
> We could, for example, create a stack buffer like char 
> buff[MAX_ATTR_LEN] (kinda ugly), write into it, and if we wrote 
> something to buff, flush it to a nested attribute in a single write.
> Would that be better? It might be less readable, and it is less 
> flexible (it requires defining MAX_ATTR_LEN).
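
For illustration, roughly what that variant would look like (untested; 
MAX_ATTR_LEN is the hypothetical bound mentioned above):

    char buff[MAX_ATTR_LEN * 2];    /* key followed by mask */
    bool dirty = false;

    memset(buff, 0, len * 2);
    for (int j = 0; j < ARRAY_SIZE(set_flower_map[type]); j++) {
        struct netlink_field *f = &set_flower_map[type][j];

        if (!f->size) {
            break;
        }
        if (!is_all_zeros(mask + f->flower_offset, f->size)) {
            memcpy(buff + f->offset, data + f->flower_offset, f->size);
            memcpy(buff + len + f->offset, mask + f->flower_offset,
                   f->size);
            dirty = true;
        }
    }
    if (dirty) {
        size_t nested = nl_msg_start_nested(buf,
                                            OVS_ACTION_ATTR_SET_MASKED);
        nl_msg_put_unspec(buf, type, buff, len * 2);
        nl_msg_end_nested(buf, nested);
    }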
> 

