[ovs-discuss] ovs-dpdk: can't set n_txq for dpdk interface

chengtcli at qq.com
Tue Mar 23 03:24:00 UTC 2021


> It is not a user option for any NIC on the OVS-DPDK datapath afaik. The
> number of requested txqs is derived from the number of pmd threads. It
> is pmd threads +1, to give each of them and the main thread a dedicated
> txq. This is why you see 5 txq with 4 pmds.

For a dpdkvhostuser port, does that mean the VM can't receive packets
from more than N queues, where N = pmd_num + 1?
If so, VM rx performance could suffer, because the VM can use at most
pmd_num + 1 cores to receive packets.
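
For reference, one way to check how many queues are actually in use
(the interface name here is a placeholder):

    # in the guest: query the virtio-net queue counts
    ethtool -l eth0

    # on the host: show which rxqs each PMD thread is polling
    ovs-appctl dpif-netdev/pmd-rxq-show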



chengtcli at qq.com
 
From: Kevin Traynor
Date: 2021-03-09 02:54
To: George Diamantopoulos; ovs-discuss
Subject: Re: [ovs-discuss] ovs-dpdk: can't set n_txq for dpdk interface
On 07/03/2021 03:57, George Diamantopoulos wrote:
> Hello all,
> 
> It appears that setting the n_txq option has no effect for dpdk Interfaces,
> e.g.: "ovs-vsctl set Interface dpdk-eno1 options:n_txq=2".
> 
> n_txq appears to be hardcoded to "5" for my driver (BNX2X PMD), for some
> reason.
> 
 
It is not a user option for any NIC on the OVS-DPDK datapath afaik. The
number of requested txqs is derived from the number of pmd threads. It
is pmd threads +1, to give each of them and the main thread a dedicated
txq. This is why you see 5 txq with 4 pmds.
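 
For example, a 4-bit pmd-cpu-mask gives 4 PMD threads, so each dpdk
port requests 4 + 1 = 5 txqs (the mask value below is illustrative):
 
    # run PMD threads on cores 1-4 -> 4 pmd threads
    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x1e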
 
> An additional problem is, the driver won't allow setting n_rxq to a lower
> value than n_txq, and with 5 being hardcoded for txq, it means I can only
> bring the interface up with 5 rxq as well. For 2 ports, that makes 10 PMD
> threads, and I don't want/need to dedicate 10 cores to PMD...
> 
 
The rxq part seems to be a limitation of the DPDK PMD driver for this
NIC, but it is not related to the number of PMD threads. The number of
RxQs and the number of PMD threads are independent of each other.
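 
i.e. the rxq count is set per port and doesn't change the PMD thread count:
 
    # request 2 rxqs on the port; pmd-cpu-mask still decides thread count
    ovs-vsctl set Interface dpdk-eno1 options:n_rxq=2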
 
> I have tried running DPDK's testpmd with this driver, and it successfully
> starts with 1 rxq + 1 txq, so I believe the issue lies with OVS-DPDK.
> 
 
It's more of an integration issue. OVS-DPDK sets the txq count based on
the number of PMD threads; it is only a problem here because this driver
rejects that number due to a limitation of its own, which other NICs
don't have. As mentioned on irc, you could contact the driver
maintainers about the limitation.
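 
If you want to confirm the limitation outside OVS, a standalone testpmd
run with rxq < txq should reproduce the rejection (the PCI address is a
placeholder; older DPDK releases use -w instead of -a):
 
    dpdk-testpmd -l 0-1 -a 0000:01:00.0 -- --rxq=2 --txq=5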
 
> Indeed, while there is a call of smap_get_int() in lib/netdev-dpdk.c for
> n_rxq, there doesn't seem to be one for n_txq. I tried a quick hack to fix
> this by replicating dpdk_set_rxq_config() for txq, and calling it
> immediately after dpdk_set_rxq_config() is called in the code (it is called
> only once), but naturally that didn't work. Perhaps
> netdev_dpdk_set_tx_multiq() is involved here, but at that point my
> programming skills are beginning to fail me. Even more frustratingly, I
> can't seem to find where the dreaded number 5 is defined for transmit
> queues in the code...
> 
> Are there any known workarounds to this problem? Is it a bug? Thanks!
> 
 
I suggest setting n_rxq >= (pmd threads + 1) when adding the interface;
this should work around the driver requirement you've mentioned. In the
best case, RSS will actually use every rxq; in the worst case, an rxq
will be polled by a PMD thread with no traffic on it, which doesn't cost
many cycles.
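 
Concretely, with your 4 PMD threads that would be:
 
    # 4 pmds + 1 = 5 requested txqs, so ask for at least 5 rxqs too
    ovs-vsctl set Interface dpdk-eno1 options:n_rxq=5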
 
> Best regards,
> George
> 
 
_______________________________________________
discuss mailing list
discuss at openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
 