[ovs-dev] [RFC V2] netdev-rte-offloads: HW offload virtio-forwarder

Simon Horman simon.horman at netronome.com
Wed May 22 12:10:01 UTC 2019


Hi,

On Thu, May 16, 2019 at 08:44:31AM +0000, Roni Bar Yanai wrote:
> >-----Original Message-----
> > From: Ilya Maximets <i.maximets at samsung.com>
> > Sent: Wednesday, May 15, 2019 4:37 PM
> > To: Roni Bar Yanai <roniba at mellanox.com>; ovs-dev at openvswitch.org; Ian
> > Stokes <ian.stokes at intel.com>; Kevin Traynor <ktraynor at redhat.com>
> > Cc: Eyal Lavee <elavee at mellanox.com>; Oz Shlomo <ozsh at mellanox.com>; Eli
> > Britstein <elibr at mellanox.com>; Rony Efraim <ronye at mellanox.com>; Asaf
> > Penso <asafp at mellanox.com>
> > Subject: Re: [RFC V2] netdev-rte-offloads: HW offload virtio-forwarder
> >
> > On 15.05.2019 16:01, Roni Bar Yanai wrote:
> > > Hi Ilya,
> > >
> > > Thanks for the comment.
> > >
> > > I think the suggested arch is very good and has many advantages, and
> > > in fact I had something very similar as my initial internal approach.
> > >
> > > However, I had one problem: it doesn't solve the kernel case. It makes
> > > sense to do the forwarding with DPDK even when OVS uses the kernel
> > > datapath (port representors and rule offloads are handled by kernel
> > > OVS). It makes sense because we can have one solution and because
> > > DPDK has better performance.
> >
> > I'm not sure it makes practical sense to run a separate userspace
> > datapath just to pass packets between vhost and VF. This actually
> > matches some of the disadvantages you yourself listed for separate
> > DPDK apps. A separate userspace datapath will need its own complex
> > startup, configuration and maintenance. It will also consume additional
> > CPU cores which will not be shared with kernel packet processing. I
> > think that just moving everything to userspace in this case would be
> > much simpler for the user than maintaining such a configuration.
>
> Maybe it doesn't make sense for OVS-DPDK, but for OVS users it does. When
> you run offload with OVS-kernel (for some vendors this is the current
> status) and virtio is a requirement, you now have millions of packets
> that need to be forwarded. Basically you have two options:
>
> 1. Use an external application (we discussed that).
>
> 2. Create a userspace data plane (OVS) and configure the forwarding
> there, but then you have performance issues, as OVS is not optimized for
> this. And with the kernel data plane it is much worse, of course.
>
> Regarding burning a core: in the case of HW offload you will do it
> either way, and there is no benefit in adding forwarder functionality to
> the kernel data path, mainly because of kernel performance limitations.
>
> I agree that in such a case moving to userspace is a solution for some,
> but keep in mind that some don't have such DPDK support, and others have
> their own OVS-based data path with their own adjustments, so it would be
> a hard transition.
>
> While the arch is good for the two DPDK use cases, it leaves the kernel
> one out. Any thoughts on how we can add this use case as well and still
> keep the suggested arch?

...

At Netronome we have an open source standalone application called
virtio-forwarder (https://github.com/Netronome/virtio-forwarder).
We provide this solution because some customers require it, including
customers using OVS with the kernel-based HW offload (OVS-TC).

In general I agree that integration with OVS has some advantages and
I'm happy to see this issue being discussed. But as we see demand for
the use of virtio-forwarder in conjunction with OVS-TC, I see OVS-TC
support as a requirement for any solution that is integrated with OVS,
which leads me to lean towards the proposal put forward by Roni.

I also feel that the proposal put forward by Roni is likely to prove more
flexible than a port-based approach, as proposed by Ilya. For one thing,
such a design ought to allow for arbitrary combinations of port types.
In fact, it would be entirely feasible to run this in conjunction with a
NIC that is not OVS-offload aware (SR-IOV in VEB mode).
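
To make that a little more concrete: the forwarding core that any of
these designs needs is essentially a relay between two DPDK ethdev
ports, one backed by the vhost-user PMD and one backed by the VF (or its
representor). Below is a rough sketch of such a relay. It is not code
from virtio-forwarder or from Roni's patches, just an illustration; the
EAL arguments, socket path, PCI address and port-id assignment are all
made up for the example, and error handling is mostly omitted.

/*
 * Rough sketch of a vhost <-> VF relay using plain DPDK ethdev calls.
 * Assumes the application is started with EAL arguments along the
 * lines of:
 *
 *   ./relay -l 1 --vdev 'net_vhost0,iface=/tmp/vhost0.sock' \
 *       -w 0000:05:00.2
 *
 * The port ids below are assumed for brevity; a real application would
 * resolve them, e.g. with rte_eth_dev_get_port_by_name().
 */
#include <stdint.h>
#include <stdio.h>

#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define BURST_SIZE 32
#define NB_DESC    512

static struct rte_mempool *mbuf_pool;

static int
port_init(uint16_t port)
{
    struct rte_eth_conf conf = { 0 };

    if (rte_eth_dev_configure(port, 1, 1, &conf) < 0
        || rte_eth_rx_queue_setup(port, 0, NB_DESC,
                                  rte_eth_dev_socket_id(port), NULL,
                                  mbuf_pool) < 0
        || rte_eth_tx_queue_setup(port, 0, NB_DESC,
                                  rte_eth_dev_socket_id(port), NULL) < 0) {
        return -1;
    }
    return rte_eth_dev_start(port);
}

/* Move one burst from 'src' to 'dst', dropping whatever 'dst' cannot
 * accept. */
static void
relay_burst(uint16_t src, uint16_t dst)
{
    struct rte_mbuf *pkts[BURST_SIZE];
    uint16_t nb_rx = rte_eth_rx_burst(src, 0, pkts, BURST_SIZE);
    uint16_t nb_tx = rte_eth_tx_burst(dst, 0, pkts, nb_rx);

    while (nb_tx < nb_rx) {
        rte_pktmbuf_free(pkts[nb_tx++]);
    }
}

int
main(int argc, char **argv)
{
    uint16_t vhost_port = 1;   /* vdev, assumed probed after the PCI VF */
    uint16_t vf_port = 0;

    if (rte_eal_init(argc, argv) < 0) {
        return 1;
    }

    mbuf_pool = rte_pktmbuf_pool_create("mbufs", 8192, 256, 0,
                                        RTE_MBUF_DEFAULT_BUF_SIZE,
                                        rte_socket_id());
    if (!mbuf_pool || port_init(vf_port) < 0
        || port_init(vhost_port) < 0) {
        fprintf(stderr, "init failed\n");
        return 1;
    }

    /* Busy-poll both directions on one core, as a forwarder worker
     * would. */
    for (;;) {
        relay_burst(vhost_port, vf_port);   /* VM -> NIC */
        relay_burst(vf_port, vhost_port);   /* NIC -> VM */
    }

    return 0;
}

The point is that nothing in that loop cares whether either end is a
representor, a plain VF or a vhost-user port, which is why I believe
such a forwarder can sit alongside OVS-TC just as well as alongside the
userspace datapath.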

Returning to the stand-alone Netronome implementation, I would welcome
discussion of how any useful portions of this could be reused.

