[ovs-discuss] Problem with ovs-dpdk multiqueue
曹超
cc.cc at alibaba-inc.com
Mon Jun 17 10:03:44 UTC 2019
Hi:
I'm testing OVS-DPDK on an Intel 82599 and have a problem with multiqueue. DPDK is bound to one VF of the NIC with two queues, while the VM is configured with 8 queues, and I find that only two queues are actually working in the VM.
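For reference, the queues were configured roughly like this (the port names, socket path, and ids below are placeholders from my setup):

    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3
    ovs-vsctl set Interface dpdk0 options:n_rxq=2

and the VM was started with an 8-queue virtio-net device, something like:

    -chardev socket,id=char0,path=/tmp/vhost-user-1
    -netdev type=vhost-user,id=net0,chardev=char0,queues=8
    -device virtio-net-pci,netdev=net0,mq=on,vectors=18

with multiqueue enabled inside the guest ("eth0" is just my interface name):

    ethtool -L eth0 combined 8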
So I checked the code, and I found that when the VM queue count is larger than the DPDK queue count, OVS keeps using each PMD thread's "static_tx_qid". I want to know why. Why is the tx qid not updated afterwards? I plan to use a hash function to distribute packets across the VM queues, but I don't know whether this approach has other problems.
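To make it concrete, here is a minimal sketch of what I have in mind, assuming the packet's RSS hash is available (dp_packet_get_rss_hash() is the real accessor from lib/dp-packet.h; pick_guest_txq() and n_enabled_txq are names I made up for illustration):

    #include <stdint.h>
    #include "dp-packet.h"  /* OVS: struct dp_packet, dp_packet_get_rss_hash() */

    /* Hypothetical helper -- not in OVS today.  Instead of each PMD
     * thread always sending on its own static_tx_qid, pick the guest
     * tx queue from the packet's RSS hash, modulo the number of queues
     * the guest actually enabled, so one flow always hits one queue. */
    static int
    pick_guest_txq(struct dp_packet *pkt, int n_enabled_txq)
    {
        uint32_t hash = dp_packet_get_rss_hash(pkt);

        return n_enabled_txq > 0 ? (int) (hash % n_enabled_txq) : 0;
    }

My worry is locking: with static_tx_qid each PMD thread owns one tx queue and can send without locks, while with a hash like this two PMD threads can pick the same guest queue at the same time, so every vhost tx queue would need its own lock. Is that the main reason for the current design, or is there something else?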
I really appreciate your help and look forward to your reply. Thank you very much.
On the host (NC):
ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 0 core_id 0:
isolated : false
port: dpdk0 queue-id: 1 pmd usage: 22 %
port: vhost-user-1 queue-id: 0 pmd usage: 0 %
port: vhost-user-1 queue-id: 3 pmd usage: 0 %
port: vhost-user-1 queue-id: 4 pmd usage: 0 %
port: vhost-user-1 queue-id: 7 pmd usage: 0 %
pmd thread numa_id 0 core_id 1:
isolated : false
port: dpdk0 queue-id: 0 pmd usage: 43 %
port: vhost-user-1 queue-id: 1 pmd usage: 0 %
port: vhost-user-1 queue-id: 2 pmd usage: 0 %
port: vhost-user-1 queue-id: 5 pmd usage: 0 %
port: vhost-user-1 queue-id: 6 pmd usage: 0 %
ovs-ofctl show br0
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000129ddcc10b45
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
1(vhost-user-1): addr:00:00:00:00:00:00
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
2(dpdk0): addr:2a:3b:58:83:e5:64
config: 0
state: 0
current: 10GB-FD AUTO_NEG
speed: 10000 Mbps now, 0 Mbps max
3(snooper0): addr:be:63:2c:41:a2:c0
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
LOCAL(br0): addr:12:9d:dc:c1:0b:45
config: PORT_DOWN
state: LINK_DOWN
current: 10MB-FD COPPER
speed: 10 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
In the VM, /proc/interrupts shows interrupts only on queues 0 and 1:
          CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7 CPU8 CPU9 CPU10 CPU11
27: 151868 0 0 0 0 0 166395562 0 0 0 0 0 PCI-MSI-edge virtio0-input.0
28: 1 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge virtio0-output.0
29: 89103 0 0 0 0 0 0 0 173513722 0 0 0 PCI-MSI-edge virtio0-input.1
30: 1 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge virtio0-output.1
31: 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge virtio0-input.2
32: 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge virtio0-output.2
33: 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge virtio0-input.3
34: 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge virtio0-output.3
35: 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge virtio0-input.4
36: 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge virtio0-output.4
37: 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge virtio0-input.5
38: 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge virtio0-output.5
39: 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge virtio0-input.6
40: 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge virtio0-output.6
41: 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge virtio0-input.7
42: 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge virtio0-output.7