[ovs-dev] [PATCH RFC v2 0/6] dpif-netdev: Manual pinning of RX queues + XPS.

Daniele Di Proietto diproiettod at vmware.com
Thu Jun 2 01:33:18 UTC 2016


Hi Ilya,

apologies for the delay.

I didn't take an extremely detailed look at this series, but I have
a few high-level comments.

Thanks for adding a command to configure the rxq affinity.  Have
you thought about using the database instead?  I think it will
be easier to use because it survives restarts, and one can batch
the affinity assignment for multiple ports without explicitly
calling pmd-reconfigure.  I'm not sure what the best interface
would look like.  Perhaps a string in Interface:other_config that
maps rxqs to core ids?
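
For example (just a sketch; the option name and the exact string
format here are hypothetical, not something that exists today),
something along these lines:

    ovs-vsctl set Interface dpdk0 \
        other_config:pmd-rxq-affinity="0:3,1:7,2:3,3:7"

i.e. a list of "rxq:core" pairs, so queues 0 and 2 of dpdk0 would be
pinned to core 3 and queues 1 and 3 to core 7.  ovs-vswitchd would
notice the change in the database and reconfigure the PMD threads by
itself.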

I'd prefer to avoid exporting an explicit command like
dpif-netdev/pmd-reconfigure.  If we use the database we don't have to,
right?

I'm not sure about the best way to introduce XPS in OVS.  First of
all, for physical NICs I'd want to keep the current model where
possible (one tx queue per core, no locking).  Maybe we could
introduce some mapping
just for vhost-user ports in netdev-dpdk to implement packet steering?
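
To make the idea concrete, here is a rough, self-contained sketch of
the tx queue selection I have in mind (all the names are made up,
this is not code from the series):

    /* Physical NICs keep the current model: one tx queue per PMD
     * core, no locking.  vhost-user ports go through a per-port
     * mapping ("steering") and take a per-queue lock, because
     * several PMD cores may be steered to the same guest queue. */
    #include <pthread.h>
    #include <stdbool.h>

    struct txq_sketch {
        bool is_vhost;               /* vhost-user port? */
        int map[64];                 /* PMD core id -> tx queue id */
        pthread_mutex_t lock[16];    /* per-txq locks (vhost only) */
    };

    static int
    txq_select(struct txq_sketch *p, int pmd_core_id)
    {
        if (!p->is_vhost) {
            return pmd_core_id;            /* static 1:1 mapping, lockless */
        }
        int qid = p->map[pmd_core_id];     /* packet steering for vhost */
        pthread_mutex_lock(&p->lock[qid]); /* caller unlocks after the burst */
        return qid;
    }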

Also, have you considered packet reordering issues?  If a 5-tuple is
processed by one core but the tx queue changes often, packets of the
same 5-tuple can end up on two different tx queues and be transmitted
out of order.

Lastly, I'm not 100% sure we need an "n-txq" parameter.  I think that
for vhost-user ports we learn the value in new_device() (we should do
the same for rxqs too, now that we have netdev_reconfigure).  For
physical NICs, the number of txqs should match the number of CPU
cores so that locking can be avoided, where possible.
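
As a sketch of what I mean for vhost-user (hypothetical names again,
and assuming the netdev reconfiguration mechanism; not actual
netdev-dpdk code):

    /* The vhost library reports the number of queue pairs when the
     * guest connects, so an explicit "n-txq" option should not be
     * needed for vhost-user ports. */
    #include <stdbool.h>

    struct vhost_port_sketch {
        int requested_n_rxq;
        int requested_n_txq;
        bool needs_reconfigure;   /* polled by the datapath */
    };

    /* Would be called from the vhost new_device() callback. */
    static void
    vhost_new_device_sketch(struct vhost_port_sketch *port,
                            int queue_pairs)
    {
        port->requested_n_rxq = queue_pairs;
        port->requested_n_txq = queue_pairs;
        port->needs_reconfigure = true;   /* trigger reconfiguration */
    }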

What do you think?

Thanks,

Daniele


On 24/05/2016 06:34, "Ilya Maximets" <i.maximets at samsung.com> wrote:

>Manual pinning of RX queues to PMD threads is required for performance
>optimisation.  This gives the user the ability to achieve maximum
>performance using a smaller number of CPUs, because currently only the
>user may know which ports are heavily loaded and which are not.
>
>To give full control over the ports, TX queue manipulation mechanisms
>are also required, for example to avoid the issue described in
>'dpif-netdev: XPS (Transmit Packet Steering) implementation.', which
>becomes worse with the ability to pin manually.
>( http://openvswitch.org/pipermail/dev/2016-March/067152.html )
>
>First 3 patches: prerequisites to XPS implementation.
>Patch #4: XPS implementation.
>Patches #5 and #6: Manual pinning implementation.
>
>Version 2:
>	* Rebased on current master.
>	* Fixed initialization of newly allocated memory in
>	  'port_reconfigure()'.
>
>Ilya Maximets (6):
>  netdev-dpdk: Use instant sending instead of queueing of packets.
>  dpif-netdev: Allow configuration of number of tx queues.
>  netdev-dpdk: Mandatory locking of TX queues.
>  dpif-netdev: XPS (Transmit Packet Steering) implementation.
>  dpif-netdev: Add dpif-netdev/pmd-reconfigure appctl command.
>  dpif-netdev: Add dpif-netdev/pmd-rxq-set appctl command.
>
> INSTALL.DPDK.md            |  44 +++--
> NEWS                       |   4 +
> lib/dpif-netdev.c          | 393 ++++++++++++++++++++++++++++++++++-----------
> lib/netdev-bsd.c           |   1 -
> lib/netdev-dpdk.c          | 198 ++++++-----------------
> lib/netdev-dummy.c         |   1 -
> lib/netdev-linux.c         |   1 -
> lib/netdev-provider.h      |  18 +--
> lib/netdev-vport.c         |   1 -
> lib/netdev.c               |  30 ----
> lib/netdev.h               |   1 -
> vswitchd/ovs-vswitchd.8.in |  10 ++
> 12 files changed, 400 insertions(+), 302 deletions(-)
>
>-- 
>2.5.0

