[ovs-dev] [RFC V2] netdev-rte-offloads: HW offload virtio-forwarder

Ilya Maximets i.maximets at samsung.com
Fri May 24 12:21:00 UTC 2019


On 22.05.2019 15:10, Simon Horman wrote:
> Hi,
> 
> On Thu, May 16, 2019 at 08:44:31AM +0000, Roni Bar Yanai wrote:
>>> -----Original Message-----
>>> From: Ilya Maximets <i.maximets at samsung.com>
>>> Sent: Wednesday, May 15, 2019 4:37 PM
>>> To: Roni Bar Yanai <roniba at mellanox.com>; ovs-dev at openvswitch.org; Ian
>>> Stokes <ian.stokes at intel.com>; Kevin Traynor <ktraynor at redhat.com>
>>> Cc: Eyal Lavee <elavee at mellanox.com>; Oz Shlomo <ozsh at mellanox.com>; Eli
>>> Britstein <elibr at mellanox.com>; Rony Efraim <ronye at mellanox.com>; Asaf
>>> Penso <asafp at mellanox.com>
>>> Subject: Re: [RFC V2] netdev-rte-offloads: HW offload virtio-forwarder
>>>
>>> On 15.05.2019 16:01, Roni Bar Yanai wrote:
>>>> Hi Ilya,
>>>>
>>>> Thanks for the comment.
>>>>
>>>> I think the suggested arch is very good and has many advantages, and
>>>> in fact I had something very similar as my initial internal approach.
>>>>
>>>> However, I had one problem: it doesn't solve the kernel case. It makes
>>>> sense to do the forwarding with DPDK even when OVS uses the kernel
>>>> datapath (port representors and rule offloads are done with kernel
>>>> OVS). It makes sense because we can have one solution and because
>>>> DPDK has better performance.
>>>
>>> I'm not sure if it makes practical sense to run a separate userspace
>>> datapath just to pass packets between vhost and a VF. This actually
>>> matches some of the disadvantages you yourself listed for separate
>>> DPDK apps. A separate userspace datapath will need its own complex
>>> startup, configuration and maintenance. It will also consume
>>> additional CPU cores which cannot be shared with kernel packet
>>> processing. I think that just moving everything to userspace in this
>>> case would be much simpler for the user than maintaining such a
>>> configuration.
>>
>> Maybe it doesn't make sense for OVS-DPDK, but for OVS users it does.
>> When you run offload with OVS-kernel (and for some vendors this is the
>> current status) and virtio is a requirement, you now have millions of
>> packets that need to be forwarded. Basically you have two options:
>>
>> 1. Use an external application (we discussed that).
>>
>> 2. Create a userspace data plane and configure forwarding (OVS), but
>> then you have performance issues, as OVS is not optimized for this.
>> And for the kernel data plane it is much worse, of course.
>>
>> Regarding burning a core: in the case of HW offload you will do it
>> either way, and there is no benefit in adding forwarding functionality
>> to the kernel data path, mainly because of kernel performance
>> limitations.
>>
>> I agree that in such a case moving to userspace is a solution for
>> some, but keep in mind that some don't have such support for DPDK,
>> and others have their own OVS-based data path with their own
>> adjustments, so it will be a hard transition.
>>
>> While the arch is good for the two DPDK use cases, it leaves the
>> kernel one out. Any thoughts on how we can add this use case as well
>> and still keep the suggested arch?
> 
> ...
> 
> At Netronome we have an Open Source standalone application,
> called virtio-forwarder (https://github.com/Netronome/virtio-forwarder).
> The reason that we provide this solution is that we see this as a
> requirement for some customers. This includes customers using OVS
> with the kernel based HW offload (OVS-TC).
> 
> In general I agree that integration with OVS has some advantages and
> I'm happy to see this issue being discussed. But as we see demand
> for use of virtio-forwarder in conjunction with OVS-TC I see that
> as a requirement for a solution that is integrated with OVS, which leads
> me to lean towards the proposal put forward by Roni.
> 
> I also feel that the proposal put forward by Roni is likely to prove more
> flexible than a port-based approach, as proposed by Ilya. For one thing
> such a design ought to allow for arbitrary combinations of port types.
> In fact, it would be entirely feasible to run this in conjunction with a
> non-OVS offload aware NIC (SR-IOV in VEB mode).
> 
> Returning to the stand-alone Netronome implementation, I would welcome
> discussion of how any useful portions of this could be reused.
> 

Hi Simon. Thanks for the link. It's very interesting.

My key point about the proposal put forward by Roni is that Open vSwitch
is an *OF switch* first of all and *not a multitool*. This proposal adds
parasitic work to the main OVS workflow that is not connected with its
main purpose. If you really want this implemented, it should probably be
done inside DPDK. You could implement a virtual device in DPDK (like
bonding) that forwards traffic between its subports whenever its receive
function is called. By adding this vdev to OVS as a usual DPDK port you
would be able to achieve your goal. DPDK as a development kit (an actual
multitool) is a much more appropriate place for such solutions.
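
For illustration, here is a minimal sketch of that idea, not a complete
vdev PMD: the names (fwd_dev, fwd_one_direction, fwd_rx_burst,
subport_a/subport_b) are hypothetical, and only the rte_eth_rx_burst,
rte_eth_tx_burst and rte_pktmbuf_free calls are real DPDK APIs. The
vdev's rx callback would forward between its two subports every time OVS
polls the port, so no extra forwarding thread is needed:

    #include <stdint.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define FWD_BURST 32

    /* Hypothetical private data of the forwarding vdev. */
    struct fwd_dev {
        uint16_t subport_a;   /* e.g. vhost-user port id */
        uint16_t subport_b;   /* e.g. VF port id */
    };

    /* Move one burst from 'src' to 'dst', dropping what can't be sent. */
    static inline void
    fwd_one_direction(uint16_t src, uint16_t dst)
    {
        struct rte_mbuf *pkts[FWD_BURST];
        uint16_t nb_rx, nb_tx;

        nb_rx = rte_eth_rx_burst(src, 0, pkts, FWD_BURST);
        if (nb_rx == 0) {
            return;
        }
        nb_tx = rte_eth_tx_burst(dst, 0, pkts, nb_rx);
        while (nb_tx < nb_rx) {
            rte_pktmbuf_free(pkts[nb_tx++]);
        }
    }

    /* Would be registered as the vdev's eth_rx_burst callback.  The poll
     * from OVS only triggers forwarding; no packets are handed up. */
    static uint16_t
    fwd_rx_burst(void *queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
    {
        struct fwd_dev *dev = queue;

        (void) rx_pkts;
        (void) nb_pkts;
        fwd_one_direction(dev->subport_a, dev->subport_b);
        fwd_one_direction(dev->subport_b, dev->subport_a);
        return 0;
    }

The real work would of course be in the vdev probe/configure plumbing,
but the forwarding core could stay roughly this small.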

BTW, the root cause of this approach is the slow packet forwarding in OVS
compared with direct rx + tx without any parsing.
Improving OVS performance is probably the right direction to move in to
achieve reasonably efficient packet forwarding. I prepared a patch that
should allow much faster packet forwarding for direct output flows like
"in_port=1,actions=output:2". Take a look here:
    https://patchwork.ozlabs.org/patch/1104878/
It will still be slower than "no parsing at all", but could be suitable
in practice for some use cases.
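
As a concrete, purely illustrative example, such a direct output flow
could be installed with ovs-ofctl (the bridge name "br0" below is just an
assumption):

    ovs-ofctl add-flow br0 "in_port=1,actions=output:2"
    ovs-ofctl add-flow br0 "in_port=2,actions=output:1"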

Best regards, Ilya Maximets.

