[ovs-dev] [PATCH 14/25] netdev-offload-dpdk: Implement HW miss packet recover for vport
Eli Britstein
elibr at mellanox.com
Sun Jul 12 06:18:51 UTC 2020
On 7/11/2020 9:36 PM, William Tu wrote:
> Hi Eli,
> Thanks for the patch, very interesting work.
> I'm trying to understand the patch. Some questions below:
>
> On Mon, Jan 20, 2020 at 7:09 AM Eli Britstein <elibr at mellanox.com> wrote:
>> A miss in virtual port offloads means the flow with tnl_pop was
>> offloaded, but not the following one. Recover the state and continue
>> with SW processing.
> Why do we have a miss in virtual port offloads? Aren't we only
> offloading to the physical port or uplink?
> at patch 25/25, you mentioned
> "For virtual port (as "vxlan"), HW rules match tunnel properties
> (outer header) and inner packet fields, and with a decap action. The
> rules are placed on all uplinks as they are the potential for the origin
> of the traffic."
Yes, the offloads are only for physical ports. Regarding misses, please
see below.
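To make the miss semantics concrete, here is a small self-contained C model of the two-stage HW pipeline described in this series: stage 0 (classification on the uplink) matches the outer header, attaches a MARK carrying a miss-context id, and jumps to the vport group; stage 1 matches the inner packet. All names and values below are illustrative stand-ins, not the actual OVS or rte_flow code:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define NO_MARK 0

struct hw_result {
    int handled_in_hw;   /* 1: packet completed its full path in HW */
    uint32_t mark;       /* miss-context id delivered to SW on a miss */
};

/* Stand-in for the stage-1 (vport group) table: does a rule for this
 * inner flow exist in HW yet? */
static int vport_group_has_rule;

static struct hw_result
hw_pipeline(uint32_t miss_ctx_id)
{
    struct hw_result res = { 0, NO_MARK };

    /* Stage 0 hit: MARK with the miss-context id + JUMP to the
     * vport group. */
    uint32_t mark = miss_ctx_id;

    if (vport_group_has_rule) {
        /* Stage 1 hit: decap + forward entirely in HW. */
        res.handled_in_hw = 1;
    } else {
        /* Stage 1 miss: the packet reaches SW carrying the mark, so
         * SW can recover where HW processing stopped. */
        res.mark = mark;
    }
    return res;
}
```

The point of the model is the else branch: a stage-0 hit with a stage-1 miss is exactly the partially-processed case the recover function in this patch handles.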
>
>> Co-authored-by: Eli Britstein <elibr at mellanox.com>
>> Signed-off-by: Ophir Munk <ophirmu at mellanox.com>
>> Reviewed-by: Roni Bar Yanai <roniba at mellanox.com>
>> Signed-off-by: Eli Britstein <elibr at mellanox.com>
>> ---
>> lib/netdev-offload-dpdk.c | 34 +++++++++++++++++++++++++++++++++-
>> 1 file changed, 33 insertions(+), 1 deletion(-)
>>
>> diff --git a/lib/netdev-offload-dpdk.c b/lib/netdev-offload-dpdk.c
>> index fc890b915..c4d77c115 100644
>> --- a/lib/netdev-offload-dpdk.c
>> +++ b/lib/netdev-offload-dpdk.c
>> @@ -385,7 +385,6 @@ put_flow_miss_ctx_id(uint32_t flow_ctx_id)
>>      put_context_data_by_id(&flow_miss_ctx_md, flow_ctx_id);
>>  }
>>
>> -OVS_UNUSED
>>  static int
>>  find_flow_miss_ctx(int flow_ctx_id, struct flow_miss_ctx *ctx)
>>  {
>> @@ -1769,10 +1768,43 @@ out:
>>      return ret;
>>  }
>>
>> +static int
>> +netdev_offload_dpdk_hw_miss_packet_recover(struct netdev *netdev,
>> +                                           uint32_t flow_miss_ctx_id,
>> +                                           struct dp_packet *packet)
>> +{
>> +    struct flow_miss_ctx flow_miss_ctx;
>> +    struct netdev *vport_netdev;
>> +
>> +    if (find_flow_miss_ctx(flow_miss_ctx_id, &flow_miss_ctx)) {
>> +        return -1;
>> +    }
>> +
>> +    if (flow_miss_ctx.vport != ODPP_NONE) {
>> +        vport_netdev = netdev_ports_get(flow_miss_ctx.vport,
>> +                                        netdev->dpif_type);
>> +        if (vport_netdev) {
>> +            pkt_metadata_init(&packet->md, flow_miss_ctx.vport);
>> +            if (vport_netdev->netdev_class->pop_header) {
>> +                vport_netdev->netdev_class->pop_header(packet);
> IIUC, we need to pop the header here because we now translate
> tnl_pop to mark + jump.
> So the outer tunnel header does not get popped in hardware, and on
> the SW side, before the upcall to OVS, we need to pop it here, right?
The HW offload flows follow the SW model. Once more than one flow is
needed to process a packet in HW, we are exposed to misses: the packet
may complete only part of its path in HW rather than the full path. In
this case, the first flow (the classification flow) is hit, but the
vport's flow may not exist yet. We must then recover the packet state as
if it had been processed in SW from the beginning, to maintain OVS
correctness. Counters are one example: the HW already counted the packet
in the first flow, so we must not count it again in SW.
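The recover step in the hunk above can be modeled in plain C, independent of DPDK and of the real OVS structures. The context table, `pkt` layout, and the 50-byte outer header size (Eth + IP + UDP + VXLAN) below are illustrative stand-ins only:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ODPP_NONE UINT32_MAX
#define MAX_CTX   16
#define OUTER_LEN 50   /* illustrative VXLAN outer header size */

/* Stand-in for OVS's flow_miss_ctx: records which vport the packet had
 * logically reached when the HW pipeline missed. */
struct flow_miss_ctx {
    uint32_t vport;
};

static struct flow_miss_ctx ctx_table[MAX_CTX];

/* Stand-in packet: outer tunnel header followed by the inner frame. */
struct pkt {
    uint8_t data[128];
    size_t len;
    uint32_t md_in_port;   /* models pkt_metadata_init(&md, vport) */
};

/* Model of find_flow_miss_ctx(): look up the context that the HW MARK
 * action attached to the packet. */
static int
find_flow_miss_ctx(uint32_t id, struct flow_miss_ctx *ctx)
{
    if (id >= MAX_CTX || ctx_table[id].vport == 0) {
        return -1;            /* unknown mark: cannot recover */
    }
    *ctx = ctx_table[id];
    return 0;
}

/* Model of the recover step: restore metadata as if the tnl_pop had run
 * in SW, then strip the outer header that HW did not decap. */
static int
hw_miss_packet_recover(uint32_t mark_id, struct pkt *p)
{
    struct flow_miss_ctx ctx;

    if (find_flow_miss_ctx(mark_id, &ctx)) {
        return -1;
    }
    if (ctx.vport != ODPP_NONE) {
        p->md_in_port = ctx.vport;               /* pkt_metadata_init */
        memmove(p->data, p->data + OUTER_LEN,    /* pop_header */
                p->len - OUTER_LEN);
        p->len -= OUTER_LEN;
    }
    return 0;
}
```

After recovery, the packet looks exactly as it would had the tnl_pop executed in SW, so the rest of the SW pipeline (including the upcall, if any) proceeds unchanged.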
Please note that we would like to progress with an approach that
enhances DPDK and puts some of the logic inside the PMD. See:
Link to RFC: http://mails.dpdk.org/archives/dev/2020-June/169656.html
Link to patchset:
http://mails.dpdk.org/archives/dev/2020-June/171590.html
>
> William