[ovs-dev] dpdk VIRTIO driver with multiple queues in Openvswitch

amit sehas cun23 at yahoo.com
Thu Aug 23 13:35:50 UTC 2018


Thanks for the response, it helps me a lot. This is my first attempt at using Open vSwitch and DPDK, so I am quite confused right now. I think I understand your suggestion about creating 32 vhost ports; earlier I was considering something similar, but I was not sure if that was the right way to do it ...
Date: Thu, 23 Aug 2018 11:55:08 +0300
From: Ilya Maximets <i.maximets at samsung.com>
To: ovs-dev at openvswitch.org, amit sehas <cun23 at yahoo.com>
Subject: Re: [ovs-dev] dpdk VIRTIO driver with multiple queues in
    Openvswitch

> I have a host running Ubuntu 16.04 Xenial and several Docker containers in it running the same OS image (Ubuntu 16.04). I am utilizing Open vSwitch on the host. I have 32 queues per port in the application. I am able to add queues in Open vSwitch as follows:
> ovs-vsctl set Interface vhost-user4 options:n_rxq=32
> ovs-vsctl set Interface vhost-user4 options:n_txq=32
> But I am not able to figure out how to add flows that will direct traffic to specific queues. So, for example, traffic should go from queue0 to queue0 and from queue30 to queue30, and so on for each of the ports in the switch?
> 
> Has anyone tried to make multiple queues work with VIRTIO using Open vSwitch?
> The add-flow command in ovs-ofctl doesn't seem to match on the input queue number, but it does let you enqueue to an output queue ... Also, I am not using QEMU and am not planning to do so either ...
> Any suggestions?
> Thanks

You're mixing up the "hardware" rx/tx queues and the logical queues that are
usually used for QoS (rate limiting and so on). ovs-ofctl works with QoS
queues like this:
    http://docs.openvswitch.org/en/latest/faq/qos/
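
For illustration, a minimal sketch of such a QoS setup, following the
linux-htb example from that FAQ (bridge br0, port eth0, the OpenFlow port
numbers, and the rates are placeholders, not values from this thread):

    # Create a QoS object with two queues and attach it to a port
    ovs-vsctl set port eth0 qos=@newqos -- \
        --id=@newqos create qos type=linux-htb \
            other-config:max-rate=1000000000 \
            queues:0=@q0 queues:1=@q1 -- \
        --id=@q0 create queue other-config:max-rate=500000000 -- \
        --id=@q1 create queue other-config:max-rate=100000000
    # Steer matching traffic into QoS queue 1 before output
    ovs-ofctl add-flow br0 "in_port=2,actions=set_queue:1,output:3"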

If you want to direct traffic between "hardware" queues such as virtio rings
or the real hardware queues of physical NICs, then I'm afraid that it's
impossible. Packets are distributed between rx queues by the hardware/virtio
based on RSS or some other algorithm. For performance reasons, OVS will use
one TX queue per port for each PMD thread if possible.
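
As an aside: if the goal is only to control which PMD thread polls which rx
queue, rather than to steer flows between queues, OVS does expose that. A
minimal sketch, assuming a DPDK port named dpdk0:

    # Pin rx queue 0 to the PMD thread on core 3, rx queue 1 to core 7
    ovs-vsctl set Interface dpdk0 other_config:pmd-rxq-affinity="0:3,1:7"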

Also, "options:n_rxq=32" and "options:n_txq=32" has no effect for vhost
interfaces. The number of queues will be taken from virtio device (qemu,
virtio-user). OVS has no ability to change that, because it has no control on
memory allocated by QEMU or virtio-user.
Anyway, "options:n_txq" has effect only for dummy interfaces. For all other
types, number of transmit queues controlled by OVS automatically. 
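
For contrast, "options:n_rxq" does take effect on physical DPDK ports, where
OVS itself configures the NIC. A minimal sketch, assuming a port named dpdk0:

    # n_rxq is honoured for type=dpdk ports, unlike vhost-user ports
    ovs-vsctl set Interface dpdk0 options:n_rxq=4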

If you want to achieve your goal, you will have to create 32 vhost ports
and configure appropriate OpenFlow rules.
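
A rough sketch of that approach (the bridge name br0, the port naming, and
the OpenFlow port numbers are assumptions; check the real numbers with
"ovs-ofctl show br0"):

    # Create 32 single-queue vhost-user ports instead of one 32-queue port
    for i in $(seq 0 31); do
        ovs-vsctl add-port br0 vhost-user-$i -- \
            set Interface vhost-user-$i type=dpdkvhostuser
    done
    # Then map "queue" N on one side to "queue" N on the other via in_port,
    # e.g. if vhost-user-0 is OpenFlow port 1 and its peer is port 33:
    ovs-ofctl add-flow br0 "in_port=1,actions=output:33"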

P.S. It looks like the documentation about PMD threads became messed up and
    misleading after the recent documentation split. Please avoid looking at
    it, or refer to the docs for OVS 2.9.

Best regards, Ilya Maximets.


