[ovs-discuss] [OVS-DPDK] vhost-user with multiple queues does not work

Felix Brucker FBrucker at xantaro.net
Thu Apr 7 07:48:48 UTC 2016


Hi,

to be clear, there might be a misunderstanding: like you, I can see the queues just fine, but most of them just don't work (i.e. they don't transfer any traffic).

best regards
Felix


-----Original Message-----
From: discuss [mailto:discuss-bounces at openvswitch.org] On behalf of Felix Brucker
Sent: Thursday, 7 April 2016 09:44
To: Loftus, Ciara <ciara.loftus at intel.com>
Cc: discuss at openvswitch.org
Subject: [MASSMAIL] Re: [ovs-discuss] [OVS-DPDK] vhost-user with multiple queues does not work

Hi,

Some outputs:


ubuntu at vm1:~$ ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX:             0
TX:             0
Other:          0
Combined:       8
Current hardware settings:
RX:             0
TX:             0
Other:          0
Combined:       8

ubuntu at vm1:~$ ethtool -l eth1
Channel parameters for eth1:
Pre-set maximums:
RX:             0
TX:             0
Other:          0
Combined:       8
Current hardware settings:
RX:             0
TX:             0
Other:          0
Combined:       8

root at vm1:/home/ubuntu# cat /proc/interrupts
[...]
 45:     660311          0          0          0          0          0          0          0   PCI-MSI-edge      virtio0-input.0
 46:      24557          0          0          0          0          0          0          0   PCI-MSI-edge      virtio0-output.0
 47:          1     766452          0          0          0          0          0          0   PCI-MSI-edge      virtio0-input.1
 48:          1      22304          0          0          0          0          0          0   PCI-MSI-edge      virtio0-output.1
 49:          0          0          0          0          0          0          0          0   PCI-MSI-edge      virtio0-input.2
 50:          0          0          0          0          0          0          0          0   PCI-MSI-edge      virtio0-output.2
 51:          0          0          0          0          0          0          0          0   PCI-MSI-edge      virtio0-input.3
 52:          0          0          0          0          0          0          0          0   PCI-MSI-edge      virtio0-output.3
 53:          0          0          0          0          0          0          0          0   PCI-MSI-edge      virtio0-input.4
 54:          1          0          0          0          0          0          0          0   PCI-MSI-edge      virtio0-output.4
 55:          0          0          0          0          0          0          0          0   PCI-MSI-edge      virtio0-input.5
 56:          1          0          0          0          0          0          0          0   PCI-MSI-edge      virtio0-output.5
 57:          0          0          0          0          0          0          0          0   PCI-MSI-edge      virtio0-input.6
 58:          1          0          0          0          0          0     222430          0   PCI-MSI-edge      virtio0-output.6
 59:          0          0          0          0          0          0          0          0   PCI-MSI-edge      virtio0-input.7
 60:          1          0          0          0          0          0          0     683657   PCI-MSI-edge      virtio0-output.7
 61:          0          0          0          0          0          0          0          0   PCI-MSI-edge      virtio1-config
 62:     450813          0          0          0          0          0          0     480379   PCI-MSI-edge      virtio1-input.0
 63:       9753          0          0          0          0          0          0    1136367   PCI-MSI-edge      virtio1-output.0
 64:          1     455522          0          0          0          0     935823          0   PCI-MSI-edge      virtio1-input.1
 65:          1      42975          0          0          0          0     741964          0   PCI-MSI-edge      virtio1-output.1
 66:          0          0          0          0          0          0          0          0   PCI-MSI-edge      virtio1-input.2
 67:          0          0          0          0          0          0          0          0   PCI-MSI-edge      virtio1-output.2
 68:          0          0          0          0          0          0          0          0   PCI-MSI-edge      virtio1-input.3
 69:          0          0          0          0          0          0          0          0   PCI-MSI-edge      virtio1-output.3
 70:          0          0          0          0          0          0          0          0   PCI-MSI-edge      virtio1-input.4
 71:          1          0          0          0          0          0          0          0   PCI-MSI-edge      virtio1-output.4
 72:          0          0          0          0          0          0          0          0   PCI-MSI-edge      virtio1-input.5
 73:          1          0          0          0          0          0          0          0   PCI-MSI-edge      virtio1-output.5
 74:          0          0          0          0          0          0          0          0   PCI-MSI-edge      virtio1-input.6
 75:          1          0          0          0          0          0          0          0   PCI-MSI-edge      virtio1-output.6
 76:          0          0          0          0          0          0          0          0   PCI-MSI-edge      virtio1-input.7
 77:          3          0          0          0          0          0          0          0   PCI-MSI-edge      virtio1-output.7
[...]
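
(To see which of these queues actually move traffic while a test is running,
one option, assuming watch and grep are available in the guest, is to watch
the interrupt counters change:

watch -n1 "grep virtio /proc/interrupts"

Queues whose counters keep increasing are the ones passing packets.)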

root at vm1:/home/ubuntu# cat setIRQ.sh
#!/bin/bash

service irqbalance stop
echo 0 > /proc/irq/45/smp_affinity_list
echo 0 > /proc/irq/46/smp_affinity_list
echo 1 > /proc/irq/47/smp_affinity_list
echo 1 > /proc/irq/48/smp_affinity_list
echo 2 > /proc/irq/49/smp_affinity_list
echo 2 > /proc/irq/50/smp_affinity_list
echo 3 > /proc/irq/51/smp_affinity_list
echo 3 > /proc/irq/52/smp_affinity_list
echo 4 > /proc/irq/53/smp_affinity_list
echo 4 > /proc/irq/54/smp_affinity_list
echo 5 > /proc/irq/55/smp_affinity_list
echo 5 > /proc/irq/56/smp_affinity_list
echo 6 > /proc/irq/57/smp_affinity_list
echo 6 > /proc/irq/58/smp_affinity_list
echo 7 > /proc/irq/59/smp_affinity_list
echo 7 > /proc/irq/60/smp_affinity_list

echo 7 > /proc/irq/62/smp_affinity_list
echo 7 > /proc/irq/63/smp_affinity_list
echo 6 > /proc/irq/64/smp_affinity_list
echo 6 > /proc/irq/65/smp_affinity_list
echo 5 > /proc/irq/66/smp_affinity_list
echo 5 > /proc/irq/67/smp_affinity_list
echo 4 > /proc/irq/68/smp_affinity_list
echo 4 > /proc/irq/69/smp_affinity_list
echo 3 > /proc/irq/70/smp_affinity_list
echo 3 > /proc/irq/71/smp_affinity_list
echo 2 > /proc/irq/72/smp_affinity_list
echo 2 > /proc/irq/73/smp_affinity_list
echo 1 > /proc/irq/74/smp_affinity_list
echo 1 > /proc/irq/75/smp_affinity_list
echo 0 > /proc/irq/76/smp_affinity_list
echo 0 > /proc/irq/77/smp_affinity_list
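
(To double-check that the affinities stuck, a small loop like the following
prints the current setting for each of the IRQs above; just a sanity check,
assuming the same IRQ numbers as in the script:

for irq in $(seq 45 60) $(seq 62 77); do
    echo -n "irq $irq -> cpu "
    cat /proc/irq/$irq/smp_affinity_list
done
)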


root at vm1:/home/ubuntu# htop

  1  [*************************99.3%]     5  [                          0.0%]
  2  [*************************94.7%]     6  [                          0.0%]
  3  [                          0.0%]     7  [***********************  73.7%]
  4  [#                         0.7%]     8  [*************************94.7%]
  Mem[||#*                 67/2001MB]     Tasks: 24, 3 thr; 2 running
  Swp[                         0/0MB]     Load average: 0.37 0.12 0.06
                                          Uptime: 00:08:11

  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
 1386 root       20   0 24360  1916  1396 R  2.0  0.1  0:00.05 htop
    1 root       20   0 33508  2828  1444 S  0.0  0.1  0:01.07 /sbin/init
  494 root       20   0 19604   912   588 S  0.0  0.0  0:00.08 upstart-udev-brid
  500 root       20   0 49900  1804   952 S  0.0  0.1  0:00.05 /lib/systemd/syst
  649 root       20   0 15256   420   200 S  0.0  0.0  0:00.02 upstart-socket-br
  972 messagebu  20   0 39112  1248   888 S  0.0  0.1  0:00.02 dbus-daemon --sys
 1001 root       20   0 15272   624   388 S  0.0  0.0  0:00.01 upstart-file-brid
 1049 root       20   0 43448  1712  1372 S  0.0  0.1  0:00.00 /lib/systemd/syst
 1062 syslog     20   0  253M  1220   788 S  0.0  0.1  0:00.00 rsyslogd
 1063 syslog     20   0  253M  1220   788 S  0.0  0.1  0:00.00 rsyslogd
 1064 syslog     20   0  253M  1220   788 S  0.0  0.1  0:00.00 rsyslogd
 1060 syslog     20   0  253M  1220   788 S  0.0  0.1  0:00.00 rsyslogd
 1089 root       20   0 14536   936   780 S  0.0  0.0  0:00.00 /sbin/getty -8 38
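
(htop was captured interactively; the same per-CPU picture can be taken as a
plain-text snapshot with mpstat from the sysstat package, assuming it is
installed in the guest:

mpstat -P ALL 1 5     # per-CPU utilisation, five one-second samples
)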

Best regards
Felix


-----Original Message-----
From: Loftus, Ciara [mailto:ciara.loftus at intel.com]
Sent: Wednesday, 6 April 2016 18:26
To: Felix Brucker <FBrucker at xantaro.net>
Cc: discuss at openvswitch.org
Subject: RE: [ovs-discuss] [OVS-DPDK] vhost-user with multiple queues does not work

> 
> I noticed my DPDK devices are configured to have only one queue (the 
> default), so I tried 2, 4 and 8 queues for the DPDK devices themselves.
> When configuring more than one queue on the DPDK devices, the VM has 
> four queues transferring traffic instead of only two; however, when 
> configuring 4 queues on the DPDK devices, the VM does not get 8 working 
> queues, it stays at four (that is, virtio0->queue0+1 and virtio1->queue0+1).

That looks like one rx and one tx queue per vHost device, so essentially still no multiqueue in the VM.
Can you post the output of ethtool -l <vhost_dev>, please?

> My current setup has two pmd threads which serve two NICs with 4 queues 
> each. Each DPDK device is attached to its own OVS DPDK-enabled bridge, 
> and each bridge has one vhost-user interface configured with eight 
> queues, like so:
> 
> ovs-appctl dpif-netdev/pmd-rxq-show
> pmd thread numa_id 0 core_id 4:
>         port: dpdk0     queue-id: 0 2
>         port: vhost-user-1      queue-id: 0 2 4 6
>         port: vhost-user-0      queue-id: 0 2 4 6
>         port: dpdk1     queue-id: 0 2
> pmd thread numa_id 0 core_id 5:
>         port: dpdk0     queue-id: 1 3
>         port: vhost-user-1      queue-id: 1 3 5 7
>         port: vhost-user-0      queue-id: 1 3 5 7
>         port: dpdk1     queue-id: 1 3
> 
> ovs-vsctl show
> 88b09698-f11b-4f4c-ab2c-45b455d6a2d1
>     Bridge "br0"
>         Port "br0"
>             Interface "br0"
>                 type: internal
>         Port "dpdk0"
>             Interface "dpdk0"
>                 type: dpdk
>                 options: {n_rxq="4"}
>         Port "vhost-user-0"
>             Interface "vhost-user-0"
>                 type: dpdkvhostuser
>                 options: {n_rxq="8"}
>     Bridge "br1"
>         Port "vhost-user-1"
>             Interface "vhost-user-1"
>                 type: dpdkvhostuser
>                 options: {n_rxq="8"}
>         Port "dpdk1"
>             Interface "dpdk1"
>                 type: dpdk
>                 options: {n_rxq="4"}
>         Port "br1"
>             Interface "br1"
>                 type: internal
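> 
> (For completeness, this layout roughly corresponds to commands along the
> following lines; just a sketch, using the bridge and port names shown above:)
> 
> ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
> ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk options:n_rxq=4
> ovs-vsctl add-port br0 vhost-user-0 -- set Interface vhost-user-0 type=dpdkvhostuser options:n_rxq=8
> ovs-vsctl add-br br1 -- set bridge br1 datapath_type=netdev
> ovs-vsctl add-port br1 dpdk1 -- set Interface dpdk1 type=dpdk options:n_rxq=4
> ovs-vsctl add-port br1 vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser options:n_rxq=8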
> 
> 
> Maybe you or someone else on the list knows why that is?

Hi,

I reproduced your set-up but did not get the same result. I can see 8 queue pairs (8 x rx and 8 x tx) on the guest for each vHost device. Like you said, perhaps somebody else on the list might be able to provide more insight.

Thanks,
Ciara

> 
> Best regards
> Felix
> 
> -----Original Message-----
> From: Felix Brucker
> Sent: Tuesday, 5 April 2016 13:07
> To: 'Loftus, Ciara' <ciara.loftus at intel.com>
> Cc: discuss at openvswitch.org
> Subject: RE: [ovs-discuss] [OVS-DPDK] vhost-user with multiple queues 
> does not work
> 
> Hi Ciara,
> 
> yes, I do see them, although I don't need to enter the ethtool command, 
> as the Ubuntu image or qemu seems to do this automatically (ethtool -l 
> eth0 always shows Combined: 8, even right after boot). For me they look like this:
> 
> Apr  5 12:23:36 dpdk-test ovs-vswitchd[16128]: VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
> Apr  5 12:23:36 dpdk-test ovs-vswitchd[16128]: VHOST_CONFIG: set queue enable: 1 to qp idx: 0
> Apr  5 12:23:36 dpdk-test ovs-vswitchd[16128]: ovs|00142|dpdk(vhost_thread2)|INFO|State of queue 0 ( tx_qid 0 ) of vhost device '/usr/local/var/run/openvswitch/vhost-user-0' 0 changed to 'enabled'
> Apr  5 12:23:36 dpdk-test ovs-vswitchd[16128]: VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
> Apr  5 12:23:36 dpdk-test ovs-vswitchd[16128]: VHOST_CONFIG: set queue enable: 1 to qp idx: 1
> [...]
> 
> I do see the other queues as well, but they don't transport packets.
> 
> Best regards
> Felix
> 
> 
> -----Original Message-----
> From: Loftus, Ciara [mailto:ciara.loftus at intel.com]
> Sent: Tuesday, 5 April 2016 12:54
> To: Felix Brucker <FBrucker at xantaro.net>
> Cc: discuss at openvswitch.org
> Subject: RE: [ovs-discuss] [OVS-DPDK] vhost-user with multiple queues 
> does not work
> 
> >
> > Hi Ciara,
> >
> > OK, so it should work!
> > For me the traffic also reaches the VM, but I only see two queues utilized.
> >
> > Configs used:
> >
> > Ovs:
> >
> > ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=30030    // corresponds to cores 4+5 and their siblings, so 4 pmd threads
> > ovs-vsctl set Interface vhost-user-0 options:n_rxq=8            // 8 queues for each vhost-user interface
> > ovs-vsctl set Interface vhost-user-1 options:n_rxq=8
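> >
> > (For reference, the mask value can be derived from the core numbers
> > (4, 5 and their siblings 16, 17), assuming a bash shell:
> >
> > printf '%x\n' $(( (1<<4) | (1<<5) | (1<<16) | (1<<17) ))    # prints 30030
> > )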
> >
> > libvirt:
> >
> > <vcpu placement='static' cpuset='0-3,12-15'>8</vcpu>    // cores 0-3 and their siblings, so 8 vcpus
> >   <cputune>
> >     <vcpupin vcpu='0' cpuset='0'/>
> >     <vcpupin vcpu='1' cpuset='1'/>
> >     <vcpupin vcpu='2' cpuset='2'/>
> >     <vcpupin vcpu='3' cpuset='3'/>
> >     <vcpupin vcpu='4' cpuset='12'/>
> >     <vcpupin vcpu='5' cpuset='13'/>
> >     <vcpupin vcpu='6' cpuset='14'/>
> >     <vcpupin vcpu='7' cpuset='15'/>
> >   </cputune>
> > [...]
> > <driver queues='8'/>    // used on both virtio nics
> >
> >
> > Output of ovs-appctl dpif-netdev/pmd-rxq-show:
> >
> > pmd thread numa_id 0 core_id 16:
> >         port: vhost-user-0      queue-id: 2 6
> >         port: vhost-user-1      queue-id: 0 4
> > pmd thread numa_id 0 core_id 17:
> >         port: vhost-user-0      queue-id: 3 7
> >         port: vhost-user-1      queue-id: 1 5
> > pmd thread numa_id 0 core_id 4:
> >         port: vhost-user-0      queue-id: 0 4
> >         port: dpdk0     queue-id: 0
> >         port: vhost-user-1      queue-id: 2 6
> > pmd thread numa_id 0 core_id 5:
> >         port: vhost-user-0      queue-id: 1 5
> >         port: dpdk1     queue-id: 0
> >         port: vhost-user-1      queue-id: 3 7
> >
> >
> > upon starting traffic and having cleared the stats:
> >
> > ovs-appctl dpif-netdev/pmd-stats-show
> > main thread:
> >         emc hits:0
> >         megaflow hits:0
> >         miss:0
> >         lost:0
> >         polling cycles:953844 (100.00%)
> >         processing cycles:0 (0.00%)
> > pmd thread numa_id 0 core_id 16:
> >         emc hits:11227924
> >         megaflow hits:1
> >         miss:231
> >         lost:0
> >         polling cycles:18074152077 (66.87%)
> >         processing cycles:8955323625 (33.13%)
> >         avg cycles per packet: 2407.29 (27029475702/11228156)
> >         avg processing cycles per packet: 797.58 (8955323625/11228156)
> > pmd thread numa_id 0 core_id 17:
> >         emc hits:1774
> >         megaflow hits:37
> >         miss:32
> >         lost:0
> >         polling cycles:20839820676 (99.99%)
> >         processing cycles:1977540 (0.01%)
> >         avg cycles per packet: 11308626.27 (20841798216/1843)
> >         avg processing cycles per packet: 1073.00 (1977540/1843)
> > pmd thread numa_id 0 core_id 4:
> >         emc hits:11541300
> >         megaflow hits:108
> >         miss:24
> >         lost:0
> >         polling cycles:503392599 (1.55%)
> >         processing cycles:32036341425 (98.45%)
> >         avg cycles per packet: 2819.38 (32539734024/11541432)
> >         avg processing cycles per packet: 2775.77 (32036341425/11541432)
> > pmd thread numa_id 0 core_id 5:
> >         emc hits:25307647
> >         megaflow hits:0
> >         miss:226
> >         lost:0
> >         polling cycles:2461529511 (7.57%)
> >         processing cycles:30065889423 (92.43%)
> >         avg cycles per packet: 1285.27 (32527418934/25307873)
> >         avg processing cycles per packet: 1188.01 (30065889423/25307873)
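> >
> > (If I read these counters correctly, "avg cycles per packet" is simply
> > (polling cycles + processing cycles) / packets, e.g. for core 16:
> > (18074152077 + 8955323625) / 11228156 = 27029475702 / 11228156 ~ 2407.)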
> >
> > Inside the VM only two out of the eight cores are utilized.
> > When manually setting the IRQ affinity inside the guest and disabling 
> > irqbalance like so:
> >
> > echo 0 > /proc/irq/45/smp_affinity_list    // virtio0-input.0
> > echo 0 > /proc/irq/46/smp_affinity_list    // virtio0-output.0
> > echo 1 > /proc/irq/47/smp_affinity_list    // virtio0-input.1
> > echo 1 > /proc/irq/48/smp_affinity_list    // ...
> > echo 2 > /proc/irq/49/smp_affinity_list
> > echo 2 > /proc/irq/50/smp_affinity_list
> > echo 3 > /proc/irq/51/smp_affinity_list
> > echo 3 > /proc/irq/52/smp_affinity_list
> > echo 4 > /proc/irq/53/smp_affinity_list
> > echo 4 > /proc/irq/54/smp_affinity_list
> > echo 5 > /proc/irq/55/smp_affinity_list
> > echo 5 > /proc/irq/56/smp_affinity_list
> > echo 6 > /proc/irq/57/smp_affinity_list
> > echo 6 > /proc/irq/58/smp_affinity_list
> > echo 7 > /proc/irq/59/smp_affinity_list
> > echo 7 > /proc/irq/60/smp_affinity_list
> >
> > echo 0 > /proc/irq/62/smp_affinity_list    // virtio1-input.0
> > echo 0 > /proc/irq/63/smp_affinity_list    // ...
> > echo 1 > /proc/irq/64/smp_affinity_list
> > echo 1 > /proc/irq/65/smp_affinity_list
> > echo 2 > /proc/irq/66/smp_affinity_list
> > echo 2 > /proc/irq/67/smp_affinity_list
> > echo 3 > /proc/irq/68/smp_affinity_list
> > echo 3 > /proc/irq/69/smp_affinity_list
> > echo 4 > /proc/irq/70/smp_affinity_list
> > echo 4 > /proc/irq/71/smp_affinity_list
> > echo 5 > /proc/irq/72/smp_affinity_list
> > echo 5 > /proc/irq/73/smp_affinity_list
> > echo 6 > /proc/irq/74/smp_affinity_list
> > echo 6 > /proc/irq/75/smp_affinity_list
> > echo 7 > /proc/irq/76/smp_affinity_list
> > echo 7 > /proc/irq/77/smp_affinity_list
> >
> > I'm getting the same result, only two cores are used inside the VM.
> > Inside the VM, ifconfig does not report any dropped packets, so it 
> > seems the queues are not used.
> >
> > cat /proc/interrupts shows only
> > virtio0-input.0
> > virtio0-output.0
> > virtio0-output.1
> > virtio1-output.0
> > virtio1-input.1
> 
> Hi Felix,
> 
> That's strange. I can see:
> virtio0-input.0
> virtio0-output.0
> virtio0-input.1
> virtio0-output.1
> virtio0-input.2
> virtio0-output.2
> virtio0-input.3
> virtio0-output.3
> virtio0-input.4
> virtio0-output.4
> virtio0-input.5
> virtio0-output.5
> virtio0-input.6
> virtio0-output.6
> virtio0-input.7
> virtio0-output.7
> 
> Do you get the following logs or similar when you set "ethtool -L eth0
> combined 8" in the VM?
> 2016-04-05T03:40:54Z|00010|dpdk(vhost_thread2)|INFO|State of queue 0 ( tx_qid 0 ) of vhost device '/usr/local/var/run/openvswitch/dpdkvhostuser0' 0 changed to 'enabled'
> 2016-04-05T03:40:54Z|00011|dpdk(vhost_thread2)|INFO|State of queue 2 ( tx_qid 1 ) of vhost device '/usr/local/var/run/openvswitch/dpdkvhostuser0' 0 changed to 'enabled'
> ....
> 2016-04-05T03:40:54Z|00017|dpdk(vhost_thread2)|INFO|State of queue 14 ( tx_qid 7 ) of vhost device '/usr/local/var/run/openvswitch/dpdkvhostuser0' 0 changed to 'enabled'
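>
> (In case it helps: assuming ovs-vswitchd logs to syslog as in your earlier
> output, something like
>     grep 'set queue enable' /var/log/syslog
> should show whether the enable events arrive for all eight queue pairs.)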
> 
> Thanks,
> Ciara
> 
> >
> > generating interrupts, all other queues have generated zero or one 
> > interrupt since start.
> >
> >
> > Best regards
> > Felix
> >
> >
> > -----Original Message-----
> > From: Loftus, Ciara [mailto:ciara.loftus at intel.com]
> > Sent: Tuesday, 5 April 2016 12:14
> > To: Felix Brucker <FBrucker at xantaro.net>
> > Subject: RE: [ovs-discuss] [OVS-DPDK] vhost-user with multiple 
> > queues does not work
> >
> > Hi Felix,
> >
> > It should work with any number of pmd threads. I tested 8 queues with
> > 1, 2, 4 and 8 pmd threads and each time traffic was able to reach the VM.
> >
> > For example, with 1 PMD thread:
> > # sudo ./utilities/ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=1
> > # sudo ./utilities/ovs-appctl dpif-netdev/pmd-rxq-show
> > pmd thread numa_id 0 core_id 0:
> >         port: dpdkvhostuser0    queue-id: 0 1 2 3 4 5 6 7
> >         port: dpdk0     queue-id: 0
> >
> > You will get better performance if you assign a core to each queue though.
> >
> > Thanks,
> > Ciara
> >
> > > -----Original Message-----
> > > From: Felix Brucker [mailto:FBrucker at xantaro.net]
> > > Sent: Tuesday, April 05, 2016 11:06 AM
> > > To: Loftus, Ciara <ciara.loftus at intel.com>
> > > Subject: AW: [ovs-discuss] [OVS-DPDK] vhost-user with multiple 
> > > queues does not work
> > >
> > > Hi Ciara,
> > >
> > > thanks, one question though:
> > >
> > > you wrote:
> > >
> > > If you want a pmd to service each of the 8 queues, you can set the 
> > > number of PMD threads via:
> > > ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=FF
> > > This will set up 8 pmd threads, on cores 0-7.
> > >
> > > Does this mean I have to use as many host pmd threads as I want to 
> > > use queues for the vhost-user interface?
> > > So using 2 host pmd threads and 8 vhost-user queues does not work?
> > >
> > > Best regards
> > > Felix
> > >
> > >
> > > -----Original Message-----
> > > From: Loftus, Ciara [mailto:ciara.loftus at intel.com]
> > > Sent: Tuesday, 5 April 2016 12:00
> > > To: Felix Brucker <FBrucker at xantaro.net>; Christian Ehrhardt 
> > > <christian.ehrhardt at canonical.com>
> > > Cc: discuss at openvswitch.org
> > > Subject: RE: [ovs-discuss] [OVS-DPDK] vhost-user with multiple 
> > > queues does not work
> > >
> > > >
> > > > Hi,
> > > >
> > > > with the branch-2.5 OVS and the n-dpdk-rxqs option I was able to get 
> > > > four queues with four host pmd threads, which currently is my 
> > > > limit for VM queues in terms of usable cores and the command itself.
> > > >
> > > > To my understanding, with OVS post-2.5 (latest git master) I 
> > > > should be able to use two n-dpdk-rxqs queues for the host and use
> > > > ovs-vsctl set Interface vhost-user-0 options:n_rxq=8
> > > > to get eight queues inside the VM, is this correct?
> > >
> > > Hi Felix,
> > >
> > > Hopefully my explanation below will help clear things up.
> > >
> > > Post-2.5 the 'n-dpdk-rxqs' option is not available to use any more.
> > > Here is a snippet from the commit message that removes this option 
> > > (commit id
> > > a14b8947fd13d4c587addbffd24eedc7bb48ee2b)
> > >
> > > "dpif-netdev: Allow different numbers of rx queues for different ports.
> > >
> > > Currently, all of the PMD netdevs can only have the same number of 
> > > rx queues, which is specified in other_config:n-dpdk-rxqs.
> > >
> > > Fix that by introducing of new option for PMD interfaces: 'n_rxq', 
> > > which specifies the maximum number of rx queues to be created for 
> > > this
> > interface.
> > >
> > > Example:
> > >     ovs-vsctl set Interface dpdk0 options:n_rxq=8
> > >
> > > Old 'other_config:n-dpdk-rxqs' deleted."
> > >
> > > In your case, now on latest master, if you want 8 queues assigned to 
> > > vhost-user-0 in the guest you need to do the following:
> > >
> > > On the host:
> > > 1. ovs-vsctl set Interface vhost-user-0 options:n_rxq=8
> > > 2. QEMU:
> > > -chardev socket,id=char0,path=/path/to/vhost-user-0
> > > -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=8
> > > -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,mq=on,vectors=18
> > > Or if you're using libvirt I think the equivalent would be: 
> > > <driver queues='8'/>
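> > >
> > > (As far as I know, vectors=18 follows the usual virtio-net rule of
> > > 2*N+2 MSI-X vectors for N queue pairs, i.e. 2*8+2 = 18 here.)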
> > >
> > > On the VM:
> > > 3. Check queues available:
> > > [root at localhost ~]# ethtool -l eth0
> > > Channel parameters for eth0:
> > > Pre-set maximums:
> > > RX:             0
> > > TX:             0
> > > Other:          0
> > > Combined:       8
> > > Current hardware settings:
> > > RX:             0
> > > TX:             0
> > > Other:          0
> > > Combined:       1
> > > # Enable 8 queues
> > > [root at localhost ~]# ethtool -L eth0 combined 8
> > > [root at localhost ~]# ethtool -l eth0
> > > Channel parameters for eth0:
> > > Pre-set maximums:
> > > RX:             0
> > > TX:             0
> > > Other:          0
> > > Combined:       8
> > > Current hardware settings:
> > > RX:             0
> > > TX:             0
> > > Other:          0
> > > Combined:       8
> > >
> > > At this point your vhost-user-0 interface (eth0) on the guest can 
> > > use
> > > 8 queues.
> > >
> > > If you want a pmd to service each of the 8 queues, you can set the 
> > > number of PMD threads via:
> > > ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=FF
> > > This will set up 8 pmd threads, on cores 0-7.
> > >
> > > If you have other interfaces for which you want to increase the 
> > > number of rxqs, you can do so as follows:
> > > ovs-vsctl set Interface <iface> options:n_rxq=X
> > >
> > > Thanks,
> > > Ciara
> > >
> > > >
> > > > If so, I'm experiencing a problem where only two queues out of the 
> > > > eight are used for traffic.
> > > >
> > > > Best regards
> > > > Felix
> > > >
> > > >
> > > > -----Original Message-----
> > > > From: discuss [mailto:discuss-bounces at openvswitch.org] On behalf of 
> > > > Felix Brucker
> > > > Sent: Tuesday, 5 April 2016 09:48
> > > > To: Loftus, Ciara <ciara.loftus at intel.com>; Christian Ehrhardt 
> > > > <christian.ehrhardt at canonical.com>
> > > > Cc: Daniele Di Proietto <diproiettod at vmware.com>; 
> > > > discuss at openvswitch.org
> > > > Subject: [MASSMAIL] Re: [ovs-discuss] [OVS-DPDK] vhost-user with 
> > > > multiple queues does not work
> > > >
> > > > Hi Ciara,
> > > >
> > > > thanks, that clarified it. I got confused by
> > > > > Also this does NOT set the multiqueues the guest shall get
> > > > I read the INSTALL md from here:
> > > > http://openvswitch.org/support/dist-docs/INSTALL.DPDK.md.txt
> > > > I thought it was related to the download on the same site 
> > > > (http://openvswitch.org/releases/openvswitch-2.5.0.tar.gz), but it 
> > > > seems it is not.
> > > > With the n-dpdk-rxqs=2 option I was able to get 2 queues inside 
> > > > the VM and working communication, too.
> > > > After testing I will try to get the latest (post-2.5) version of 
> > > > OVS to get more fine-grained control over the queues.
> > > > Thanks all!
> > > >
> > > > Best regards
> > > > Felix
> > > >
> > > > -----Original Message-----
> > > > From: Loftus, Ciara [mailto:ciara.loftus at intel.com]
> > > > Sent: Monday, 4 April 2016 18:11
> > > > To: Felix Brucker <FBrucker at xantaro.net>; Christian Ehrhardt 
> > > > <christian.ehrhardt at canonical.com>
> > > > Cc: Daniele Di Proietto <diproiettod at vmware.com>; 
> > > > discuss at openvswitch.org
> > > > Subject: RE: [ovs-discuss] [OVS-DPDK] vhost-user with multiple 
> > > > queues does not work
> > > >
> > > > > yes, that part works, but for communication to work between the 
> > > > > guest and host, OVS has to use 2 queues as well, which currently 
> > > > > does not work.
> > > > > So how does one set multiple queues for vhost-user in OVS 2.5.0 or below?
> > > > > I'm not talking about libvirt or qemu regarding the above question, but OVS.
> > > >
> > > > Hi Felix,
> > > >
> > > > As we've mentioned before, you need to use the following command:
> > > >
> > > > ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=2
> > > >
> > > > ... to assign two rx queues to the vhost-user ports in OVS.
> > > >
> > > > This is clearly stated in INSTALL.DPDK.md on the 2.5 branch. I 
> > > > suspect you were previously looking at the latest INSTALL guide 
> > > > which pointed you to use the n_rxq option, which is not available 
> > > > on the older branch-2.5.
> > > >
> > > > Essentially, if your bridge has two vhost-user ports, e.g.
> > > > vhost-user-0 and vhost-user-1, the effect of
> > > > 'ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=2'
> > > > is the same as
> > > > ovs-vsctl set Interface vhost-user-0 options:n_rxq=2
> > > > ovs-vsctl set Interface vhost-user-1 options:n_rxq=2
> > > >
> > > > On branch-2.5, you need to use the former command.
> > > >
> > > > Thanks,
> > > > Ciara
> > > >
> > > > >
> > > > > Regards
> > > > > Felix
> > > > >
> > > > > From: Christian Ehrhardt 
> > > > > [mailto:christian.ehrhardt at canonical.com]
> > > > > Sent: Monday, 4 April 2016 17:35
> > > > > To: Felix Brucker <FBrucker at xantaro.net>
> > > > > Cc: Daniele Di Proietto <diproiettod at vmware.com>; Loftus, 
> > > > > Ciara <ciara.loftus at intel.com>; discuss at openvswitch.org
> > > > > Subject: Re: [ovs-discuss] [OVS-DPDK] vhost-user with multiple 
> > > > > queues does not work
> > > > >
> > > > > Hi Felix,
> > > > > here you already do the right thing:
> > > > >
> > > > >     <interface type='vhostuser'> [...]
> > > > >       <driver queues='2'/>
> > > > >
> > > > > Given you have recent libvirt and qemu versions, that translates 
> > > > > to the right qemu parameters, as you have seen in my initial 
> > > > > posts.
> > > > >
> > > > > You can then log into the guest and check with "ethtool -l" if 
> > > > > the guest really "sees" its multiple queues (also shown in my 
> > > > > first mail that this fails for me)
> > > > >
> > > > > Kind Regards,
> > > > > Christian
_______________________________________________
discuss mailing list
discuss at openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss

