[ovs-dev] [PATCH 1/3] netdev-dpdk: Fix and document vhost tx retries.

Kevin Traynor ktraynor at redhat.com
Tue Jun 25 09:32:48 UTC 2019


On 24/06/2019 22:40, Flavio Leitner wrote:
> On Fri, Jun 21, 2019 at 02:41:56PM +0100, Kevin Traynor wrote:
>> Fix minor issue of one additional retry and add
>> documentation about vhost tx retries and ways to
>> reduce/remove them.
>>
>> Signed-off-by: Kevin Traynor <ktraynor at redhat.com>
>>
>> ---
>> There is a checkpatch warning that one of the libvirt
>> lines in docs is a few characters too long. It is similar
>> for some others in the file and I think it makes sense to
>> leave it as is but can wrap if required.
>> ---
>>  Documentation/topics/dpdk/vhost-user.rst | 31 ++++++++++++++++++++++++
>>  lib/netdev-dpdk.c                        |  2 +-
>>  2 files changed, 32 insertions(+), 1 deletion(-)
>>
>> diff --git a/Documentation/topics/dpdk/vhost-user.rst b/Documentation/topics/dpdk/vhost-user.rst
>> index f7b4b338e..d1682fd05 100644
>> --- a/Documentation/topics/dpdk/vhost-user.rst
>> +++ b/Documentation/topics/dpdk/vhost-user.rst
>> @@ -76,4 +76,35 @@ mode ports require QEMU version 2.7.  Ports of type vhost-user are currently
>>  deprecated and will be removed in a future release.
>>  
>> +vhost tx retries
>> +~~~~~~~~~~~~~~~~
>> +
>> +When sending a batch of packets (max is 32) to a vhost-user or
> 
> Maybe use the actual define name (NETDEV_MAX_BURST) since it might change?
> 

I agree the doc could go stale if it ever changed. OTOH, I want a user
to be able to read the doc part as a standalone without having to search
the code. I think it's best to say NETDEV_MAX_BURST and add that it's
currently 32. If it ever changed in the code, an author or reviewer would
find the reference through a search and be able to update it.
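
(For reference, the define in question looks roughly like the following in
the OVS tree. This is a paraphrased sketch, not a verbatim quote; the exact
header and comment wording may differ:

    /* In lib/netdev.h (approximate location). */
    #define NETDEV_MAX_BURST 32   /* Maximum number of packets in a batch. */

so the doc text would read something like "NETDEV_MAX_BURST (currently 32)".)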

>> +vhost-user-client interface, it may happen that some but not all of the packets
>> +in the batch are able to be sent to the guest. This is often because there are
>> +not enough free descriptors in the virtqueue for all the packets to be sent.
>> +In this case the send will be retried, up to a default maximum of 8 times. If
>> +at any time no packets can be sent, it may mean the guest is not accepting
>> +packets, so there are no (more) retries.
>> +
>> +Tx Retries may be reduced or removed by some external configuration, such
> 
> s/removed/even avoided/ ?
> 

That sounds better, will change it.

>> +as increasing the virtqueue size through the ``rx_queue_size`` parameter in
>> +libvirt::
>> +
>> +  <interface type='vhostuser'>
>> +      <mac address='56:48:4f:53:54:01'/>
>> +      <source type='unix' path='/tmp/dpdkvhostclient0' mode='server'/>
>> +      <model type='virtio'/>
>> +      <driver name='vhost' rx_queue_size='1024' tx_queue_size='1024'/>
>> +      <address type='pci' domain='0x0000' bus='0x00' slot='0x10' function='0x0'/>
>> +  </interface>
> 
> Do we need to worry about the qemu version?
> 

hmm, yeah, it was definitely not always available. I will check the
minimum version and add it.

>> +
>> +The guest application will also need to provide enough descriptors. For
>> +example with ``testpmd`` the command line arguments can be used::
>> +
>> + --rxd=1024 --txd=1024
>> +
>> +The guest should also have sufficient cores dedicated to consuming and
>> +processing packets at the required rate.
>> +
>>  .. _dpdk-vhost-user:
>>  
>> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
>> index 4856c56aa..0f3b9c6f4 100644
>> --- a/lib/netdev-dpdk.c
>> +++ b/lib/netdev-dpdk.c
>> @@ -2353,5 +2353,5 @@ __netdev_dpdk_vhost_send(struct netdev *netdev, int qid,
>>              break;
>>          }
>> -    } while (cnt && (retries++ <= VHOST_ENQ_RETRY_NUM));
>> +    } while (cnt && (retries++ < VHOST_ENQ_RETRY_NUM));
> 
> You're removing the packet's last hope to reach home, are you sure? :-)

pop-up dialog box coming in v2 :-)
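
For anyone following along, here is a minimal standalone sketch (toy C, not
the OVS code path) of why the '<=' comparison gave one retry too many:

    /* Counts how many times the do/while body runs when some packets
     * always remain unsent (cnt stays non-zero).  With
     * "retries++ <= VHOST_ENQ_RETRY_NUM" the body runs 10 times
     * (1 initial send + 9 retries); with "retries++ < VHOST_ENQ_RETRY_NUM"
     * it runs 9 times (1 initial send + 8 retries), matching the documented
     * default of 8 retries. */
    #include <stdio.h>

    #define VHOST_ENQ_RETRY_NUM 8

    int
    main(void)
    {
        int cnt = 1;        /* Pretend packets are always left over. */
        int retries = 0;
        int attempts = 0;

        do {
            attempts++;
        } while (cnt && (retries++ < VHOST_ENQ_RETRY_NUM));

        printf("send attempts: %d\n", attempts);  /* 9 with '<', 10 with '<='. */
        return 0;
    }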

thanks for review,
Kevin.

> fbl
> 
>>  
>>      rte_spinlock_unlock(&dev->tx_q[qid].tx_lock);
>> -- 
>> 2.20.1
>>
>> _______________________________________________
>> dev mailing list
>> dev at openvswitch.org
>> https://mail.openvswitch.org/mailman/listinfo/ovs-dev


