[ovs-dev] [PATCH] netdev-dpdk: vhost: Fix txq enabling in the absence of notifications.
Ilya Maximets
i.maximets at samsung.com
Fri Mar 25 06:49:37 UTC 2016
On 24.03.2016 17:34, Flavio Leitner wrote:
> On Thu, 24 Mar 2016 14:55:15 +0300
> Ilya Maximets <i.maximets at samsung.com> wrote:
>
>> According to QEMU documentation (docs/specs/vhost-user.txt) one queue
>> should be enabled initially. More queues are enabled dynamically, by
>> sending message VHOST_USER_SET_VRING_ENABLE.
>>
>> Currently all queues in OVS are disabled by default, which breaks the
>> above specification. Queue #0 should therefore be enabled by default to
>> support QEMU versions older than 2.5 and to avoid issues if QEMU does
>> not send VHOST_USER_SET_VRING_ENABLE for queue #0 as the documentation
>> requires. This also fixes the currently broken vhost-cuse support in OVS.
>>
>> Fixes: 585a5beaa2a4 ("netdev-dpdk: vhost-user: Fix sending packets to
>> queues not enabled by guest.")
>> Reported-by: Mauricio Vasquez B <mauricio.vasquezbernal at studenti.polito.it>
>> Signed-off-by: Ilya Maximets <i.maximets at samsung.com>
>> ---
>> lib/netdev-dpdk.c | 16 ++++++++++++++--
>> 1 file changed, 14 insertions(+), 2 deletions(-)
>>
>> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
>> index f4ed210..ab60f85 100644
>> --- a/lib/netdev-dpdk.c
>> +++ b/lib/netdev-dpdk.c
>> @@ -103,6 +103,9 @@ BUILD_ASSERT_DECL((MAX_NB_MBUF / ROUND_DOWN_POW2(MAX_NB_MBUF/MIN_NB_MBUF))
>> #define NIC_PORT_TX_Q_SIZE 2048 /* Size of Physical NIC TX Queue, Max (n+32<=4096)*/
>>
>> #define OVS_VHOST_MAX_QUEUE_NUM 1024 /* Maximum number of vHost TX queues. */
>> +#define OVS_VHOST_QUEUE_MAP_UNKNOWN (-1) /* Mapping not initialized. */
>
> Please update map initialization at netdev_dpdk_alloc_txq() too:
> [...]
> 676 /* Initialize map for vhost devices. */
> 677 netdev->tx_q[i].map = -1;
> 678 rte_spinlock_init(&netdev->tx_q[i].tx_lock);
OK. Thanks for spotting this.
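Concretely, the initialization you quoted would then use the new macro,
roughly like this (sketch for v2, same context as in your quote):

    /* Initialize map for vhost devices. */
    netdev->tx_q[i].map = OVS_VHOST_QUEUE_MAP_UNKNOWN;
    rte_spinlock_init(&netdev->tx_q[i].tx_lock);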
>
>
>> +#define OVS_VHOST_QUEUE_DISABLED (-2) /* Queue was disabled by guest and not
>> + * yet mapped to another queue. */
>>
>> static char *cuse_dev_name = NULL; /* Character device cuse_dev_name. */
>> static char *vhost_sock_dir = NULL; /* Location of vhost-user sockets */
>> @@ -2019,7 +2022,7 @@ netdev_dpdk_remap_txqs(struct netdev_dpdk *netdev)
>> }
>>
>> if (n_enabled == 0 && total_txqs != 0) {
>> - enabled_queues[0] = -1;
>> + enabled_queues[0] = OVS_VHOST_QUEUE_DISABLED;
>> n_enabled = 1;
>> }
>>
>> @@ -2056,6 +2059,10 @@ netdev_dpdk_vhost_set_queues(struct netdev_dpdk *netdev, struct virtio_net *dev)
>> netdev->real_n_rxq = qp_num;
>> netdev->real_n_txq = qp_num;
>> netdev->txq_needs_locking = true;
>> + /* Enable TX queue 0 by default if it wasn't disabled. */
>> + if (netdev->tx_q[0].map == OVS_VHOST_QUEUE_MAP_UNKNOWN) {
>> + netdev->tx_q[0].map = 0;
>> + }
>>
>> netdev_dpdk_remap_txqs(netdev);
>>
>> @@ -2119,10 +2126,15 @@ destroy_device(volatile struct virtio_net *dev)
>> ovs_mutex_lock(&dpdk_mutex);
>> LIST_FOR_EACH (vhost_dev, list_node, &dpdk_list) {
>> if (netdev_dpdk_get_virtio(vhost_dev) == dev) {
>> + int i;
>>
>> ovs_mutex_lock(&vhost_dev->mutex);
>> dev->flags &= ~VIRTIO_DEV_RUNNING;
>> ovsrcu_set(&vhost_dev->virtio_dev, NULL);
>> + /* Clear mapping. */
>> + for (i = 0; i < vhost_dev->real_n_txq; i++) {
>> + vhost_dev->tx_q[i].map = OVS_VHOST_QUEUE_MAP_UNKNOWN;
>> + }
>
> This could be in a separate function.
OK.
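Something like this, perhaps (helper name is only a suggestion, not
final; the caller in destroy_device() already holds the device mutex,
hence the annotation):

    /* Clears mapping for all available queues of vhost interface. */
    static void
    netdev_dpdk_txq_map_clear(struct netdev_dpdk *netdev)
        OVS_REQUIRES(netdev->mutex)
    {
        int i;

        for (i = 0; i < netdev->real_n_txq; i++) {
            netdev->tx_q[i].map = OVS_VHOST_QUEUE_MAP_UNKNOWN;
        }
    }

The loop in destroy_device() then becomes a single call:

    netdev_dpdk_txq_map_clear(vhost_dev);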
>
>> exists = true;
>> ovs_mutex_unlock(&vhost_dev->mutex);
>> break;
>> @@ -2169,7 +2181,7 @@ vring_state_changed(struct virtio_net *dev, uint16_t queue_id, int enable)
>> if (enable) {
>> vhost_dev->tx_q[qid].map = qid;
>> } else {
>> - vhost_dev->tx_q[qid].map = -1;
>> + vhost_dev->tx_q[qid].map = OVS_VHOST_QUEUE_DISABLED;
>> }
>> netdev_dpdk_remap_txqs(vhost_dev);
>> exists = true;
>
>
I'll send v2 in a moment.
Best regards, Ilya Maximets.