[ovs-discuss] Cannot connect Openstack VM to OVS in DPDK mode (vhost-user client)
José Miguel Guzmán
jmguzman at whitestack.com
Sat May 26 22:43:18 UTC 2018
Hi
I am having issues trying to connect a VM in OpenStack to OVS in DPDK
mode (dpdkvhostuserclient).
Apparently, the vhost-user client port is not working in OVS due to some
memory issue:
2018-05-26T22:07:08.669Z|00189|netdev_dpdk|ERR|Failed to create memory pool
for netdev vhud402d58e-c9, with MTU 1500 on socket 0: Invalid argument
2018-05-26T22:07:08.669Z|00190|dpif_netdev|ERR|Failed to set interface
vhud402d58e-c9 new configuration
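For context, my understanding is that OVS-DPDK allocates this mempool from
hugepage memory reserved per NUMA socket, so the failure may be tied to the
hugepage / dpdk-socket-mem setup on socket 0. A minimal check along these
lines (the values are illustrative, not taken from this host):

# hugepages available on NUMA node 0 (the socket named in the error)
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
grep Huge /proc/meminfo
# reserve DPDK memory for OVS on socket 0 (example value, adjust to the host)
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024,0"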
ovsdb-server and ovs-vswitchd are running in Docker containers, but this
should not be the problem, because the same containers work fine in kernel
mode; the problem appears only when using netdev and vhost-user (client).
OpenStack was configured to use netdev:
[OVS]
datapath_type = netdev
vhostuser_socket_dir = /var/run/openvswitch
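For reference, with this setting the Neutron agent creates its bridges with
the netdev datapath, equivalent to something like (a sketch):

ovs-vsctl set bridge br-int datapath_type=netdev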
The bridge is created:
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "vhud402d58e-c9"
            tag: 1
            Interface "vhud402d58e-c9"
                type: dpdkvhostuserclient
                options: {vhost-server-path="/var/run/openvswitch/vhud402d58e-c9"}
        Port int-dpdk_bridge
            Interface int-dpdk_bridge
                type: patch
                options: {peer=phy-dpdk_bridge}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
with the netdev datapath:
_uuid : d031a04b-e392-4fd5-9b14-68ff04289976
auto_attach : []
controller : [64f24e69-c84b-4ab9-b1b0-d792968b7f76]
datapath_id : "00004aa031d0d54f"
datapath_type : netdev
datapath_version : "<built-in>"
external_ids : {}
fail_mode : secure
flood_vlans : []
flow_tables : {}
ipfix : []
mcast_snooping_enable: false
mirrors : []
name : br-int
netflow : []
other_config : {}
ports : [730ecb57-d985-4e6e-9e4c-1d33c355672f,
745d44c2-70af-49d3-b3e8-72d4acc3410d, 97cc5449-8a47-416e-8f35-0dbf839109a8,
fff53944-6891-438c-80e8-aceb940601f0]
protocols : ["OpenFlow10", "OpenFlow13"]
rstp_enable : false
rstp_status : {}
sflow : []
status : {}
stp_enable : false
and the port type is dpdkvhostuserclient:
(ovsdpdk-db)[root@s131002 /]# ovs-vsctl list Interface
_uuid : 6bde0358-3cb1-4648-958d-cac81dc683b4
admin_state : up
bfd : {}
bfd_status : {}
cfm_fault : []
cfm_fault_status : []
cfm_flap_count : []
cfm_health : []
cfm_mpid : []
cfm_remote_mpids : []
cfm_remote_opstate : []
duplex : []
error : []
external_ids : {attached-mac="fa:16:3e:ba:d0:e1", iface-id=
"d402d58e-c9c7-4b65-94e0-2b38c3cfd926", iface-status=active, vm-uuid=
"c7946cf9-6529-446a-9eb3-d2cfefbfddd7"}
ifindex : 7907161
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current : []
link_resets : 0
link_speed : []
link_state : down
lldp : {}
mac : []
mac_in_use : "00:00:00:00:00:00"
mtu : 0
mtu_request : 1500
name : "vhud402d58e-c9"
ofport : 3
ofport_request : []
options : {vhost-server-path=
"/var/run/openvswitch/vhud402d58e-c9"}
other_config : {}
statistics : {"rx_1024_to_1522_packets"=0, "rx_128_to_255_packets"=0,
"rx_1523_to_max_packets"=0, "rx_1_to_64_packets"=0, "rx_256_to_511_packets"=0,
"rx_512_to_1023_packets"=0, "rx_65_to_127_packets"=0, rx_bytes=0,
rx_dropped=0, rx_errors=0, rx_packets=0, tx_bytes=0, tx_dropped=0,
tx_packets=0}
status : {}
type : dpdkvhostuserclient
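For reference, a dpdkvhostuserclient port like this would normally be created
with something along these lines (a sketch reusing the same names):

ovs-vsctl add-port br-int vhud402d58e-c9 -- \
  set Interface vhud402d58e-c9 type=dpdkvhostuserclient \
  options:vhost-server-path=/var/run/openvswitch/vhud402d58e-c9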
The virtual machine is running in QEMU with these arguments:
-netdev vhost-user,chardev=charnet0,id=hostnet0
-chardev socket,id=charnet0,path=/var/run/openvswitch/vhud402d58e-c9,server
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:ba:d0:e1,bus=pci.0,addr=0x3
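As far as I understand, vhost-user also requires the guest memory to be backed
by a shared hugepage file, so the QEMU command line would normally also carry
something like this (the size is illustrative, not from this host):

-object memory-backend-file,id=mem,size=2048M,mem-path=/dev/hugepages,share=on
-numa node,memdev=mem
-mem-prealloc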
The socket is created:
srwxrwxr-x 1 42436 42436 0 May 26 17:07 /var/run/openvswitch/vhud402d58e-c9
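Since OVS is the vhost-user client here, QEMU is the one creating and
listening on that socket; that can be double-checked with e.g.:

ss -xl | grep vhud402d58e-c9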
But although the port is attached to the switch, I see some errors in the log:
2018-05-26T22:07:08.665Z|00185|netdev_dpdk|INFO|vHost User device
'vhud402d58e-c9' created in 'client' mode, using client socket
'/var/run/openvswitch/vhud402d58e-c9'
2018-05-26T22:07:08.668Z|00186|dpdk|WARN|VHOST_CONFIG: failed to connect to
/var/run/openvswitch/vhud402d58e-c9: No such file or directory
2018-05-26T22:07:08.668Z|00187|dpdk|INFO|VHOST_CONFIG:
/var/run/openvswitch/vhud402d58e-c9: reconnecting...
2018-05-26T22:07:08.669Z|00189|netdev_dpdk|ERR|Failed to create memory pool
for netdev vhud402d58e-c9, with MTU 1500 on socket 0: Invalid argument
2018-05-26T22:07:08.669Z|00190|dpif_netdev|ERR|Failed to set interface
vhud402d58e-c9 new configuration
2018-05-26T22:07:08.669Z|00191|bridge|WARN|could not add network device
vhud402d58e-c9 to ofproto (No such device)
2018-05-26T22:07:08.855Z|00196|bridge|INFO|bridge br-int: added interface
vhud402d58e-c9 on port 3
2018-05-26T22:07:09.671Z|00001|dpdk|INFO|VHOST_CONFIG:
/var/run/openvswitch/vhud402d58e-c9: connected
2018-05-26T22:07:11.798Z|00013|netdev_dpdk|INFO|State of queue 0 ( tx_qid 0
) of vhost device '/var/run/openvswitch/vhud402d58e-c9'changed to 'enabled'
2018-05-26T22:07:11.798Z|00018|netdev_dpdk|INFO|State of queue 0 ( tx_qid 0
) of vhost device '/var/run/openvswitch/vhud402d58e-c9'changed to 'enabled'
The Nova logs do not show any errors:
2018-05-26 17:07:09.779 7 INFO nova.virt.libvirt.driver [req-4f6cbb38-652d-4
a6b-a205-9a8bf1096bfc f2ef1bf11ce8480980301a68fbaad0ac
ca4f4f14e4c6448d94e34064dfd12eaf - default default] [instance: c7946cf9-6529
-446a-9eb3-d2cfefbfddd7] Instance rebooted successfully.
2018-05-26 17:07:09.888 7 INFO nova.compute.manager [req-4f6cbb38-652d-4
a6b-a205-9a8bf1096bfc f2ef1bf11ce8480980301a68fbaad0ac
ca4f4f14e4c6448d94e34064dfd12eaf - default default] [instance: c7946cf9-6529
-446a-9eb3-d2cfefbfddd7] VM Started (Lifecycle Event)
2018-05-26 17:07:28.155 7 INFO nova.compute.resource_tracker
[req-4f6cbb38-652d-4a6b-a205-9a8bf1096bfc f2ef1bf11ce8480980301a68fbaad0ac
ca4f4f14e4c6448d94e34064dfd12eaf - default default] Final resource view:
name=s131002.nocix.net phys_ram=24164MB used_ram=7168MB phys_disk=250GB
used_disk=61GB total_vcpus=16 used_vcpus=4 pci_stats=[]
Do you have any clue about what could be happening?
I would really appreciate any guidance on this topic.
Thanks
JM