[ovs-dev] [PATCH V2 00/14] Netdev vxlan-decap offload

Eli Britstein elibr at nvidia.com
Tue Feb 23 13:41:10 UTC 2021


On 2/23/2021 3:35 PM, Sriharsha Basavapatna wrote:
> On Tue, Feb 23, 2021 at 5:14 PM Eli Britstein <elibr at nvidia.com> wrote:
>>
>> On 2/23/2021 12:48 PM, Sriharsha Basavapatna wrote:
>>> On Sun, Feb 21, 2021 at 7:04 PM Eli Britstein <elibr at nvidia.com> wrote:
>>>> On 2/18/2021 6:38 PM, Kovacevic, Marko wrote:
>>>>> External email: Use caution opening links or attachments
>>>>>
>>>>>
>>>>> <...>
>>>>>> Sending to Marko. As he wasn't subscribed to ovs-dev then.
>>>>>>
>>>>> <...>
>>>>>>> VXLAN decap in OVS-DPDK configuration consists of two flows:
>>>>>>> F1: in_port(ens1f0),eth(),ipv4(),udp(), actions:tnl_pop(vxlan_sys_4789)
>>>>>>> F2: tunnel(),in_port(vxlan_sys_4789),eth(),ipv4(), actions:ens1f0_0
>>>>>>>
>>>>>>> F1 is a classification flow. It matches on the outer headers,
>>>>>>> classifies the packet as a VXLAN packet, and via the tnl_pop action the
>>>>>>> packet continues processing in F2.
>>>>>>> F2 is a flow that matches on tunnel metadata as well as on the inner
>>>>>>> packet headers (like any other flow).
>>>>>>>
>>>>> <...>
>>>>>
>>>>> Hi Eli,
>>>>>
>>>>> After testing the patchset, it seems that starting with the tenth patch I see a drop in scatter performance of around ~4% across all packet sizes tested (112, 256, 512, 1518).
>>>>> The burst measurement also shows a decrease, but not as much as scatter does.
>>>> Hi Marko,
>>>>
>>>> Thanks for testing this series.
>>>>
>>>>> Patch10
>>>>> fff1f9168 netdev-offload-dpdk: Support tunnel pop action
>>>> It doesn't make sense that this commit causes any degradation, as it only
>>>> enhances offloads, which are not in the datapath and are not done for
>>>> virtio-user ports in any case.
>>> Patch 10 enables offload for flow F1 with tnl_pop action. If
>>> hw_offload is enabled, then the new code to offload this flow would be
>>> executed for virtio-user ports as well, since this flow is independent
>>> of the endpoint port (whether virtio or vf-rep).
>> No. virtio-user ports don't have a "flow_api" function pointer to the dpdk
>> offload provider. In any case, this tnl_pop flow is on the PF, so it is
>> not virtio-user.
> I know that virtio-user ports won't have "flow_api" function pointers.
> That's not what I meant. While offloading flow-F1, we don't really
> know what the final endpoint port is (virtio or vf-rep), since the
> in_port for flow-F1 is a PF port. So, add_tnl_pop_action() would be
> executed independent of the target destination port (which is
> available as out_port in flow-F2). So, even if the packet is
> eventually destined to a virtio-user port (in F2), F1 still executes
> add_tnl_pop_action().
Right, see below comment.
>
>>> Before this patch (i.e, with the original code in master/2.15),
>>> parse_flow_actions() would fail for TUNNEL_POP action. But with the
>>> new code, this action is processed by the function -
>>> add_tnl_pop_action(). There is some processing in this function,
>>> including a new rte_flow API (rte_flow_tunnel_decap_set) to the PMD.
>>> Maybe this is adding some overhead ?
>> The new API is processed in the offload thread, not in the datapath.
>> It could indirectly affect the datapath, depending on whether and how the
>> PF's PMD implements the new API.
>>
>> As seen from Marko's configuration line, there is no experimental
>> support, so there are no new offloads either.
> Even if experimental API support is not enabled, if hw-offload is
> enabled in OVS, then add_tnl_pop_action() would still be called? And
> at the very least these three functions would be invoked from it:
> netdev_ports_get(), vport_to_rte_tunnel() and
> netdev_dpdk_rte_flow_tunnel_decap_set(), with the last one returning -1.
Those calls are right, but they occur only once, when the flow is created, 
and in the offload thread rather than the datapath, so they should not affect it.
>
> Is hw-offload enabled in Marko's configuration?
I suppose it is.
>
>
>>> Thanks,
>>> -Harsha
>>>> Could you please double check?
>>>>
>>>> I would expect maybe a degradation with:
>>>>
>>>> Patch 12: 8a21a377c dpif-netdev: Provide orig_in_port in metadata for
>>>> tunneled packets
>>>>
>>>> Patch 6: e548c079d dpif-netdev: Add HW miss packet state recover logic
>>>>
>>>> Could you please double check what is the offending commit?
>>>>
>>>> Do you compile with ALLOW_EXPERIMENTAL_API defined or not?
>>>>
>>>>> The test used for this is 32 virtio-user ports with 1 million flows.
>>>> Could you please elaborate on your exact setup and test?
>>>>
>>>> What are "1M flows"? What are the differences between them?
>>>>
>>>> What are the OpenFlow rules you use?
>>>>
>>>> Are there any other configurations set (other_config for example)?
>>>>
>>>> What is being done with the packets on the guest side? Are all ports in
>>>> the same VM?
>>>>
>>>>> Traffic @ Phy NIC Rx:
>>>>> Ether()/IP()/UDP()/VXLAN()/Ether()/IP()
>>>>>
>>>>> Burst: on the outer IP we send a burst of 32 packets with the same IP, then switch to the next IP for the following 32, and so on.
>>>>> Scatter: we increment the outer IP for each packet across the 32.
>>>>> On the inner packet we have a total of 1048576 flows.
>>>>>
>>>>> I can send a diagram of the test setup directly; I'm just restricted from sending HTML here.
>>>> As commented above, I would appreciate more details about your tests and
>>>> setup.
>>>>
>>>> Thanks,
>>>>
>>>> Eli
>>>>
>>>>> Thanks
>>>>> Marko K
>>>> _______________________________________________
>>>> dev mailing list
>>>> dev at openvswitch.org
>>>> https://mail.openvswitch.org/mailman/listinfo/ovs-dev

