[ovs-discuss] Cannot connect Openstack VM to OVS in DPDK mode (vhost-user client)

José Miguel Guzmán jmguzman at whitestack.com
Mon May 28 03:12:49 UTC 2018


Hi

I am attaching more context from the log, as I noticed an "MBUF: error
setting mempool handler" message (entry 00188):

2018-05-26T22:07:08.658Z|00180|dpif_netdev|INFO|PMD thread on numa_id: 0,
core id:  0 created.
2018-05-26T22:07:08.662Z|00181|dpif_netdev|INFO|PMD thread on numa_id: 1,
core id:  4 created.
2018-05-26T22:07:08.663Z|00182|dpif_netdev|INFO|There are 1 pmd threads on
numa node 0
2018-05-26T22:07:08.663Z|00183|dpif_netdev|INFO|There are 1 pmd threads on
numa node 1
2018-05-26T22:07:08.665Z|00184|dpdk|INFO|VHOST_CONFIG: vhost-user client:
socket created, fd: 54
2018-05-26T22:07:08.665Z|00185|netdev_dpdk|INFO|vHost User device
'vhud402d58e-c9' created in 'client' mode, using client socket
'/var/run/openvswitch/vhud402d58e-c9'
2018-05-26T22:07:08.668Z|00186|dpdk|WARN|VHOST_CONFIG: failed to connect to
/var/run/openvswitch/vhud402d58e-c9: No such file or directory
2018-05-26T22:07:08.668Z|00187|dpdk|INFO|VHOST_CONFIG:
/var/run/openvswitch/vhud402d58e-c9: reconnecting...
2018-05-26T22:07:08.669Z|00188|dpdk|ERR|MBUF: error setting mempool handler
2018-05-26T22:07:08.669Z|00189|netdev_dpdk|ERR|Failed to create memory pool
for netdev vhud402d58e-c9, with MTU 1500 on socket 0: Invalid argument
2018-05-26T22:07:08.669Z|00190|dpif_netdev|ERR|Failed to set interface
vhud402d58e-c9 new configuration
2018-05-26T22:07:08.669Z|00191|bridge|WARN|could not add network device
vhud402d58e-c9 to ofproto (No such device)
2018-05-26T22:07:08.669Z|00192|dpdk|WARN|EAL: Requested device 0000:05:00.0
cannot be used
2018-05-26T22:07:08.669Z|00193|dpdk|ERR|EAL: Driver cannot attach the
device (0000:05:00.0)
2018-05-26T22:07:08.669Z|00194|netdev_dpdk|WARN|Error attaching device
'0000:05:00.0' to DPDK
2018-05-26T22:07:08.669Z|00195|netdev|WARN|enp5s0f0: could not set
configuration (Invalid argument)
2018-05-26T22:07:08.855Z|00196|bridge|INFO|bridge br-int: added interface
vhud402d58e-c9 on port 3
2018-05-26T22:07:08.855Z|00197|dpdk|WARN|EAL: Requested device 0000:05:00.0
cannot be used
2018-05-26T22:07:08.855Z|00198|dpdk|ERR|EAL: Driver cannot attach the
device (0000:05:00.0)
2018-05-26T22:07:08.855Z|00199|netdev_dpdk|WARN|Error attaching device
'0000:05:00.0' to DPDK
2018-05-26T22:07:08.855Z|00200|netdev|WARN|enp5s0f0: could not set
configuration (Invalid argument)
2018-05-26T22:07:09.671Z|00001|dpdk|INFO|VHOST_CONFIG:
/var/run/openvswitch/vhud402d58e-c9: connected
2018-05-26T22:07:09.671Z|00002|dpdk|INFO|VHOST_CONFIG: new device, handle
is 0
2018-05-26T22:07:09.678Z|00001|dpdk|INFO|VHOST_CONFIG: read message
VHOST_USER_GET_FEATURES
2018-05-26T22:07:09.678Z|00002|dpdk|INFO|VHOST_CONFIG: read message
VHOST_USER_GET_PROTOCOL_FEATURES
2018-05-26T22:07:09.678Z|00003|dpdk|INFO|VHOST_CONFIG: read message
VHOST_USER_SET_PROTOCOL_FEATURES
2018-05-26T22:07:09.678Z|00004|dpdk|INFO|VHOST_CONFIG: read message
VHOST_USER_GET_QUEUE_NUM
2018-05-26T22:07:09.678Z|00005|dpdk|INFO|VHOST_CONFIG: read message
VHOST_USER_SET_OWNER
2018-05-26T22:07:09.678Z|00006|dpdk|INFO|VHOST_CONFIG: read message
VHOST_USER_GET_FEATURES
2018-05-26T22:07:09.678Z|00007|dpdk|INFO|VHOST_CONFIG: read message
VHOST_USER_SET_VRING_CALL
2018-05-26T22:07:09.678Z|00008|dpdk|INFO|VHOST_CONFIG: vring call idx:0
file:59
2018-05-26T22:07:09.678Z|00009|dpdk|INFO|VHOST_CONFIG: read message
VHOST_USER_SET_VRING_CALL
2018-05-26T22:07:09.678Z|00010|dpdk|INFO|VHOST_CONFIG: vring call idx:1
file:60
2018-05-26T22:07:11.798Z|00011|dpdk|INFO|VHOST_CONFIG: read message
VHOST_USER_SET_VRING_ENABLE
2018-05-26T22:07:11.798Z|00012|dpdk|INFO|VHOST_CONFIG: set queue enable: 1
to qp idx: 0
2018-05-26T22:07:11.798Z|00013|netdev_dpdk|INFO|State of queue 0 ( tx_qid 0
) of vhost device '/var/run/openvswitch/vhud402d58e-c9'changed to 'enabled'
2018-05-26T22:07:11.798Z|00014|dpdk|INFO|VHOST_CONFIG: read message
VHOST_USER_SET_VRING_ENABLE
2018-05-26T22:07:11.798Z|00015|dpdk|INFO|VHOST_CONFIG: set queue enable: 1
to qp idx: 1
2018-05-26T22:07:11.798Z|00016|dpdk|INFO|VHOST_CONFIG: read message
VHOST_USER_SET_VRING_ENABLE
2018-05-26T22:07:11.798Z|00017|dpdk|INFO|VHOST_CONFIG: set queue enable: 1
to qp idx: 0
2018-05-26T22:07:11.798Z|00018|netdev_dpdk|INFO|State of queue 0 ( tx_qid 0
) of vhost device '/var/run/openvswitch/vhud402d58e-c9'changed to 'enabled'
2018-05-26T22:07:11.798Z|00019|dpdk|INFO|VHOST_CONFIG: read message
VHOST_USER_SET_VRING_ENABLE
2018-05-26T22:07:11.798Z|00020|dpdk|INFO|VHOST_CONFIG: set queue enable: 1
to qp idx: 1
2018-05-26T22:07:11.799Z|00021|dpdk|INFO|VHOST_CONFIG: read message
VHOST_USER_SET_FEATURES
As you can see, I am also having issues attaching a DPDK interface
(0000:05:00.0) to an OVS bridge. I wonder if both issues are related?
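
If it helps, these are the checks I understand to be relevant here (a rough
sketch; it assumes the standard sysfs layout and that the dpdk-devbind.py
tool shipped with DPDK is installed):

# hugepages reserved/free per NUMA node (the vhost-user mempool is allocated from these)
grep Huge /proc/meminfo
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages

# driver currently bound to the NIC that OVS fails to attach
# (it needs a DPDK-capable driver such as vfio-pci or igb_uio before the EAL can use it)
dpdk-devbind.py --status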

Any help please!!!

Thanks a lot, I would be very grateful for any help. I have tried
everything within my reach, with no success.
JM

2018-05-26 18:43 GMT-04:00 José Miguel Guzmán <jmguzman at whitestack.com>:

>
> Hi
> I am having issues trying to connect an OpenStack VM to OVS in DPDK
> mode (dpdkvhostuserclient).
>
> Apparently the vhost-user client port is not working in OVS, due to some
> memory issue:
> 2018-05-26T22:07:08.669Z|00189|netdev_dpdk|ERR|Failed to create memory
> pool for netdev vhud402d58e-c9, with MTU 1500 on socket 0: Invalid
> argument
> 2018-05-26T22:07:08.669Z|00190|dpif_netdev|ERR|Failed to set interface
> vhud402d58e-c9 new configuration
>
> ovsdb-server and ovs-vswitchd are running in Docker containers, but this
> should not be the problem, because the same containers work fine with the
> kernel datapath. The problem only appears when using netdev and vhost-user
> (client).
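>
> As a sanity check on the container side, this is roughly the shape I
> understand the ovs-vswitchd container needs so that it can see the
> hugepages and the vhost-user sockets (a sketch only; the image name and
> trailing options are placeholders, not my exact command line):
>
> docker run --privileged \
>   -v /dev/hugepages:/dev/hugepages \
>   -v /var/run/openvswitch:/var/run/openvswitch \
>   <ovs-dpdk-image> ovs-vswitchd ...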
>
> OpenStack was configured to use netdev:
> [OVS]
> datapath_type = netdev
> vhostuser_socket_dir = /var/run/openvswitch
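>
> On the OVS side, my understanding is that DPDK also has to be enabled in
> the database and hugepage memory reserved per NUMA socket, roughly along
> these lines (the values below are just examples, not necessarily what this
> host should use):
>
> ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
> ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"
> ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0x11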
>
> The bridge is created:
>     Bridge br-int
>         Controller "tcp:127.0.0.1:6633"
>             is_connected: true
>         fail_mode: secure
>         Port "vhud402d58e-c9"
>             tag: 1
>             Interface "vhud402d58e-c9"
>                 type: dpdkvhostuserclient
>                 options: {vhost-server-path="/var/run/openvswitch/vhud402d58e-c9"}
>         Port int-dpdk_bridge
>             Interface int-dpdk_bridge
>                 type: patch
>                 options: {peer=phy-dpdk_bridge}
>         Port patch-tun
>             Interface patch-tun
>                 type: patch
>                 options: {peer=patch-int}
>         Port br-int
>             Interface br-int
>                 type: internal
>
> with the netdev datapath:
> _uuid               : d031a04b-e392-4fd5-9b14-68ff04289976
> auto_attach         : []
> controller          : [64f24e69-c84b-4ab9-b1b0-d792968b7f76]
> datapath_id         : "00004aa031d0d54f"
> datapath_type       : netdev
> datapath_version    : "<built-in>"
> external_ids        : {}
> fail_mode           : secure
> flood_vlans         : []
> flow_tables         : {}
> ipfix               : []
> mcast_snooping_enable: false
> mirrors             : []
> name                : br-int
> netflow             : []
> other_config        : {}
> ports               : [730ecb57-d985-4e6e-9e4c-1d33c355672f,
> 745d44c2-70af-49d3-b3e8-72d4acc3410d, 97cc5449-8a47-416e-8f35-0dbf839109a8,
> fff53944-6891-438c-80e8-aceb940601f0]
> protocols           : ["OpenFlow10", "OpenFlow13"]
> rstp_enable         : false
> rstp_status         : {}
> sflow               : []
> status              : {}
> stp_enable          : false
>
> and the port is dpdkvhostuserclient:
> (ovsdpdk-db)[root at s131002 /]# ovs-vsctl list Interface
> _uuid               : 6bde0358-3cb1-4648-958d-cac81dc683b4
> admin_state         : up
> bfd                 : {}
> bfd_status          : {}
> cfm_fault           : []
> cfm_fault_status    : []
> cfm_flap_count      : []
> cfm_health          : []
> cfm_mpid            : []
> cfm_remote_mpids    : []
> cfm_remote_opstate  : []
> duplex              : []
> error               : []
> external_ids        : {attached-mac="fa:16:3e:ba:d0:e1", iface-id=
> "d402d58e-c9c7-4b65-94e0-2b38c3cfd926", iface-status=active, vm-uuid=
> "c7946cf9-6529-446a-9eb3-d2cfefbfddd7"}
> ifindex             : 7907161
> ingress_policing_burst: 0
> ingress_policing_rate: 0
> lacp_current        : []
> link_resets         : 0
> link_speed          : []
> link_state          : down
> lldp                : {}
> mac                 : []
> mac_in_use          : "00:00:00:00:00:00"
> mtu                 : 0
> mtu_request         : 1500
> name                : "vhud402d58e-c9"
> ofport              : 3
> ofport_request      : []
> options             : {vhost-server-path="/var/run/openvswitch/vhud402d58e-c9"}
> other_config        : {}
> statistics          : {"rx_1024_to_1522_packets"=0,
> "rx_128_to_255_packets"=0, "rx_1523_to_max_packets"=0,
> "rx_1_to_64_packets"=0, "rx_256_to_511_packets"=0,
> "rx_512_to_1023_packets"=0, "rx_65_to_127_packets"=0, rx_bytes=0,
> rx_dropped=0, rx_errors=0, rx_packets=0, tx_bytes=0, tx_dropped=0,
> tx_packets=0}
> status              : {}
> type                : dpdkvhostuserclient
>
>
> The virtual machine is running in qemu with these arguments:
> -netdev vhost-user,chardev=charnet0,id=hostnet0
> -chardev socket,id=charnet0,path=/var/run/openvswitch/vhud402d58e-c9,server
> -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:ba:d0:e1,bus=pci.0,addr=0x3
>
> The socket is created
> srwxrwxr-x 1 42436 42436 0 May 26 17:07 /var/run/openvswitch/vhud402d58e-c9
>
> But although the port is attached to the switch, I see some errors in the
> log:
> 2018-05-26T22:07:08.665Z|00185|netdev_dpdk|INFO|vHost User device
> 'vhud402d58e-c9' created in 'client' mode, using client socket
> '/var/run/openvswitch/vhud402d58e-c9'
> 2018-05-26T22:07:08.668Z|00186|dpdk|WARN|VHOST_CONFIG: failed to connect
> to /var/run/openvswitch/vhud402d58e-c9: No such file or directory
> 2018-05-26T22:07:08.668Z|00187|dpdk|INFO|VHOST_CONFIG:
> /var/run/openvswitch/vhud402d58e-c9: reconnecting...
> 2018-05-26T22:07:08.669Z|00189|netdev_dpdk|ERR|Failed to create memory
> pool for netdev vhud402d58e-c9, with MTU 1500 on socket 0: Invalid
> argument
> 2018-05-26T22:07:08.669Z|00190|dpif_netdev|ERR|Failed to set interface
> vhud402d58e-c9 new configuration
> 2018-05-26T22:07:08.669Z|00191|bridge|WARN|could not add network device
> vhud402d58e-c9 to ofproto (No such device)
> 2018-05-26T22:07:08.855Z|00196|bridge|INFO|bridge br-int: added interface
> vhud402d58e-c9 on port 3
> 2018-05-26T22:07:09.671Z|00001|dpdk|INFO|VHOST_CONFIG:
> /var/run/openvswitch/vhud402d58e-c9: connected
> 2018-05-26T22:07:11.798Z|00013|netdev_dpdk|INFO|State of queue 0 ( tx_qid
> 0 ) of vhost device '/var/run/openvswitch/vhud402d58e-c9'changed to
> 'enabled'
> 2018-05-26T22:07:11.798Z|00018|netdev_dpdk|INFO|State of queue 0 ( tx_qid
> 0 ) of vhost device '/var/run/openvswitch/vhud402d58e-c9'changed to
> 'enabled'
>
> The Nova logs do not show any errors:
> 2018-05-26 17:07:09.779 7 INFO nova.virt.libvirt.driver [req-4f6cbb38-652d
> -4a6b-a205-9a8bf1096bfc f2ef1bf11ce8480980301a68fbaad0ac
> ca4f4f14e4c6448d94e34064dfd12eaf - default default] [instance: c7946cf9
> -6529-446a-9eb3-d2cfefbfddd7] Instance rebooted successfully.
> 2018-05-26 17:07:09.888 7 INFO nova.compute.manager [req-4f6cbb38-652d-4
> a6b-a205-9a8bf1096bfc f2ef1bf11ce8480980301a68fbaad0ac
> ca4f4f14e4c6448d94e34064dfd12eaf - default default] [instance: c7946cf9
> -6529-446a-9eb3-d2cfefbfddd7] VM Started (Lifecycle Event)
> 2018-05-26 17:07:28.155 7 INFO nova.compute.resource_tracker
> [req-4f6cbb38-652d-4a6b-a205-9a8bf1096bfc f2ef1bf11ce8480980301a68fbaad0ac
> ca4f4f14e4c6448d94e34064dfd12eaf - default default] Final resource view:
> name=s131002.nocix.net phys_ram=24164MB used_ram=7168MB phys_disk=250GB
> used_disk=61GB total_vcpus=16 used_vcpus=4 pci_stats=[]
>
>
> Do you have any clue about what could be happening??
>
> I would really appreciate any guidance on this topic..
>
> Thanks
> JM
>
>
>
>
>


-- 

José Miguel Guzmán
Senior Network Consultant
Latin America & Caribbean
  +1 (650) 248-2490
  +56 (9) 9064-2780

  jmguzman at whitestack.com

  jmguzmanc

