[ovs-dev] [PATCH v4 0/2] vHost Dequeue Zero Copy

Jan Scheurich jan.scheurich at ericsson.com
Tue Nov 28 17:04:15 UTC 2017


> > Can you comment on that? Can a user also reduce the problem by
> > configuring
> > a) a larger virtio Tx queue size (up to 1K) in Qemu, or
> 
> Is this possible right now without modifying the QEMU source? I think the size is hardcoded to 256 at the moment, although it may become
> configurable in the future. If/when it does, we can test it and update the docs if it solves the problem. I don't think we should suggest
> modifying the QEMU source as a workaround for now.

Support for configuring the tx queue size was upstreamed in Qemu 2.10:

commit 9b02e1618cf26aa52cf786f215d757506dda14f8
Author: Wei Wang <wei.w.wang at intel.com>
Date:   Wed Jun 28 10:37:59 2017 +0800

    virtio-net: enable configurable tx queue size

    This patch enables the virtio-net tx queue size to be configurable
    between 256 (the default queue size) and 1024 by the user when the
    vhost-user backend is used....

So you should be able to test larger tx queue sizes with Qemu 2.10.
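
For reference, the device part of the Qemu command line would look roughly like this (the chardev path and ids are placeholders, not taken from this thread):

    -chardev socket,id=char0,path=/tmp/vhost-user0
    -netdev type=vhost-user,id=net0,chardev=char0
    -device virtio-net-pci,netdev=net0,tx_queue_size=1024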

> 
> > b) a larger mempool for packets in Tx direction inside the guest (driver?)
> 
> Using the DPDK driver in the guest & generating traffic via testpmd, I modified the number of descriptors given to the virtio device from
> 512 (default) to 2048 & 4096, but unfortunately it didn't resolve the issue.

I re-read the virtio 1.0 spec, and it states that the total number of virtio descriptors per virtqueue equals the size of the virtqueue. Descriptors just point to guest mbufs, so the mempool the guest driver uses for mbufs is irrelevant. OVS, acting as the virtio device, needs to return the virtio descriptors to the guest driver. That means the virtio queue size sets an upper limit on the number of packets in flight in OVS and the physical NICs.
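
To make that concrete with illustrative numbers (not Ciara's setup):

    virtio Tx queue:  256 descriptors in total for the guest driver
    NIC Tx ring:     2048 descriptors -> with zero copy, mbufs (and thus the
                     guest descriptors) are only returned after the NIC has
                     transmitted, so all 256 guest descriptors can be stuck
                     in the NIC ring at once and the guest Tx queue stalls
    NIC Tx ring:      128 descriptors -> at most 128 guest descriptors are
                     tied up at any time, so the guest can keep transmitting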

I would like to add a statement in the documentation that explains this dependency between Qemu Tx queue size and maximum physical NIC Tx queue size when using the vhost zero copy feature on a port.
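
On the OVS side that probably boils down to recommending something like the following for the physical port (using the existing n_txq_desc option; the port name is just an example):

    # keep the NIC Tx ring shorter than the guest's virtio Tx queue size
    ovs-vsctl set Interface dpdk0 options:n_txq_desc=128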

> > > > And what about increased packet drop risk due to shortened tx queues?
> > >
> > > I guess this could be an issue. If I had some data to back this up I would
> > > include it in the documentation and mention the risk.
> > > If the risk is unacceptable to the user they may choose to not enable the
> > > feature. It's disabled by default so shouldn't introduce an issue for
> > > the standard case.
> >
> > Yes, but it would be good to understand the potential drawback for a better
> > judgement of the trade-off between better raw throughput and higher loss
> > risk.
> 
> I ran RFC2544 0% packet loss tests for ZC on & off (64B PVP) and observed the following:
> 
> Max rate with 0% loss (pps)
> ZC Off  2599518  (~2.60 Mpps)
> ZC On   1678758  (~1.68 Mpps)
> 
> As you suspected, there is a trade-off. I can mention this in the docs.

That degradation looks severe.
It would be cool if you could re-run the test with a 1K Tx queue size configured both in Qemu 2.10 and on the NIC.

Regards, 
Jan

