[ovs-dev] [RFC] dpif-netdev: only poll enabled vhost queues

David Marchand david.marchand at redhat.com
Mon Apr 8 13:44:55 UTC 2019


Hello Ilya,

On Mon, Apr 8, 2019 at 10:27 AM Ilya Maximets <i.maximets at samsung.com>
wrote:

> On 04.04.2019 22:49, David Marchand wrote:
> > We tried to lower the number of rebalances but we don't have a
> > satisfying solution at the moment, so this patch rebalances on each
> > update.
>
> Hi.
>
> Triggering the reconfiguration on each vring state change is a bad thing.
> This could be abused by the guest to break the host networking by
> infinitely disabling/enabling queues. Each reconfiguration leads to removing
> ports from the PMD port caches and reloading them. On rescheduling, all the ports
>

I'd say the reconfiguration itself is not wanted here.
Only rebalancing the queues would be enough (see the sketch below).


> could be moved to different PMD threads, resulting in EMC/SMC/dpcls
> invalidation and subsequent upcalls/packet reorderings.
>

I agree that rebalancing does trigger EMC/SMC/dpcls invalidation when
moving queues.
However, the EMC/SMC/dpcls are per-PMD; where would we get packet
reordering?
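
To make the "rebalance only" point above concrete, here is a minimal sketch
(not the actual netdev-dpdk code; the struct and helper names are
illustrative, only the callback prototype and the VIRTIO_* constants come
from DPDK's rte_vhost.h) of a vring_state_changed callback that records
which rxqs the guest enabled and only asks for an rxq rebalance instead of
a full port reconfiguration:

#include <stdbool.h>
#include <stdint.h>
#include <rte_vhost.h>              /* VIRTIO_QNUM, VIRTIO_TXQ */

#define MAX_VHOST_RXQS 16           /* illustrative bound */

/* Illustrative per-port state: one "enabled" flag per host rxq. */
struct vhost_port_state {
    bool rxq_enabled[MAX_VHOST_RXQS];
};

/* Hypothetical stand-ins for the real netdev-dpdk plumbing. */
struct vhost_port_state *port_state_by_vid(int vid);
void request_rxq_rebalance(struct vhost_port_state *state);

static int
vring_state_changed(int vid, uint16_t queue_id, int enable)
{
    /* The guest TX vring (VIRTIO_TXQ) is what the host polls as RX. */
    bool is_rx = (queue_id % VIRTIO_QNUM) == VIRTIO_TXQ;
    int qid = queue_id / VIRTIO_QNUM;
    struct vhost_port_state *state = port_state_by_vid(vid);

    if (!state || !is_rx || qid >= MAX_VHOST_RXQS) {
        return 0;
    }
    if (state->rxq_enabled[qid] != !!enable) {
        state->rxq_enabled[qid] = !!enable;
        /* Reschedule rxqs across PMD threads without removing/re-adding
         * the port, so unrelated queues keep their EMC/SMC/dpcls entries. */
        request_rxq_rebalance(state);
    }
    return 0;
}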



> The same issue was discussed previously while looking at the possibility of
> vhost-pmd integration (with some test results):
> https://mail.openvswitch.org/pipermail/ovs-dev/2016-August/320430.html


Thanks for the link, I will test this.



> One more reference:
> 7f5f2bd0ce43 ("netdev-dpdk: Avoid reconfiguration on reconnection of same
> vhost device.")
>

Yes, I saw this patch.
Are we safe against guest drivers/applications that play with
VIRTIO_NET_F_MQ, toggling it continuously?
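
To illustrate the guard I understand 7f5f2bd0ce43 to provide: a
reconfiguration is only requested when the negotiated queue-pair count
actually changes. A rough sketch follows (the vhost_port struct and helpers
are stand-ins for the netdev-dpdk plumbing; only rte_vhost_get_vring_num()
and VIRTIO_QNUM are real DPDK API):

#include <stdint.h>
#include <rte_vhost.h>      /* rte_vhost_get_vring_num(), VIRTIO_QNUM */

/* Hypothetical stand-ins for the real netdev-dpdk plumbing. */
struct vhost_port {
    uint32_t requested_n_rxq;
    uint32_t requested_n_txq;
};
struct vhost_port *vhost_port_by_vid(int vid);
void request_port_reconfigure(struct vhost_port *port);

/* Called from the vhost new_device() callback: skip the reconfiguration
 * when the guest reconnects (or renegotiates) with the same queue count. */
static int
on_new_device(int vid)
{
    struct vhost_port *port = vhost_port_by_vid(vid);
    uint32_t qp_num = rte_vhost_get_vring_num(vid) / VIRTIO_QNUM;

    if (port && qp_num != port->requested_n_rxq) {
        port->requested_n_rxq = qp_num;
        port->requested_n_txq = qp_num;
        request_port_reconfigure(port);
    }
    return 0;
}

So a reconnection or renegotiation with the same queue count should be
caught by that check; my remaining question is about a guest alternating
between different counts.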




> Anyway, do you have some numbers on how much time the PMD thread spends
> polling disabled queues? What performance improvement are you able to
> achieve by avoiding that?
>

In a simple PVP setup of mine:
1c/2t poll two physical ports.
1c/2t poll four vhost ports with 16 queues each.
  Only one queue is enabled on each virtio device attached by the guest.
  The first two virtio devices are bound to the virtio kmod.
  The last two virtio devices are bound to vfio-pci and used to forward
  incoming traffic with testpmd.

The zero-loss forwarding rate goes from 5.2 Mpps (polling all 64 vhost
queues) to 6.2 Mpps (polling only the 4 enabled vhost queues).
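
For clarity on what is being compared: "polling only the enabled vhost
queues" simply means the PMD loop never issues a receive call on an rxq
whose guest vring is disabled. A standalone sketch (illustrative names, not
the dpif-netdev code):

#include <stdbool.h>
#include <stddef.h>

struct pmd_rxq {
    bool enabled;                  /* mirrors vring_state_changed() state */
    int (*recv)(void *rxq_priv);   /* e.g. wrapping the vhost dequeue */
    void *rxq_priv;
};

static void
pmd_poll_once(struct pmd_rxq *rxqs, size_t n_rxqs)
{
    for (size_t i = 0; i < n_rxqs; i++) {
        if (!rxqs[i].enabled) {
            /* Disabled queue: no empty poll, which is what the
             * 5.2 -> 6.2 Mpps difference above measures. */
            continue;
        }
        rxqs[i].recv(rxqs[i].rxq_priv);
    }
}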



-- 
David Marchand

