[ovs-discuss] OVS cannot create a vhost_user socket at /var/run/openvswitch/vhost-user-1

Kavanagh, Mark B mark.b.kavanagh at intel.com
Fri May 27 09:09:23 UTC 2016


>
>Hi!
>
>I am trying to install and use OVS with DPDK on Ubuntu 16.04, following this guide:
>https://help.ubuntu.com/16.04/serverguide/DPDK.html
>
>On a Cisco UCS C240 with two physical CPUs (18 cores each) I have two Intel X520-DA2
>cards, which are recognized and shown properly:
>root at caesar:/home/cisco# dpdk_nic_bind --status
>Network devices using DPDK-compatible driver
>============================================
>0000:8f:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=vfio-pci
>unused=ixgbe     <- looks good, vfio-pci driver shown properly
>0000:8f:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=vfio-pci
>unused=ixgbe     <- looks good, vfio-pci driver shown properly
>Network devices using kernel driver
>===================================
>0000:07:00.0 'VIC Ethernet NIC' if=enp7s0 drv=enic unused=vfio-pci
>0000:08:00.0 'VIC Ethernet NIC' if=enp8s0 drv=enic unused=vfio-pci
>0000:0f:00.0 'I350 Gigabit Network Connection' if=enp15s0f0 drv=igb unused=vfio-pci
>*Active*
>Other network devices
>=====================
><none>
>root at caesar:/home/cisco#
>
>If I tweak the OVS config as described in the Ubuntu DPDK guide with the following line
>  echo "DPDK_OPTS='--dpdk -c 0x1 -n 4 -m 2048 --vhost-owner libvirt-qemu:kvm --vhost-perm
>0664'" | sudo tee -a /etc/default/openvswitch-switch
>I get the following error message:
>root at caesar:/home/cisco# ovs-vsctl show
>cf57d236-c8ec-4099-a621-8fda17920828
>    Bridge "ovsdpdkbr0"
>        Port "dpdk0"
>            Interface "dpdk0"
>                type: dpdk
>                error: "could not open network device dpdk0 (Cannot allocate memory)"
>        Port "ovsdpdkbr0"
>            Interface "ovsdpdkbr0"
>                type: internal
>    ovs_version: "2.5.0"
>root at caesar:/home/cisco#
>
>My UCS C240 server has two NUMA nodes with 18 cores each. In the following forum thread
>http://comments.gmane.org/gmane.linux.network.openvswitch.general/6760
>I saw a similar issue, and the solution was to configure memory like this:
>---
>Start vswitchd process with 8GB on each numa node (if reserve memory on just 1 numa node,
>creating dpdk port will fail: cannot allocate memory)
>./vswitchd/ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 8192,8192 --
>unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach
>---
>
>If I change /etc/default/openvswitch-switch to
>  DPDK_OPTS='--dpdk -c 0x1 -n 4 --socket-mem 4096,4096 --vhost-owner libvirt-qemu:kvm --
>vhost-perm 0664'
>then I can enter OVS CLI commands, but I have to press Ctrl+C to get the prompt back after
>any OVS command. It looks like OVS accepts and executes the commands, though.
>I can create OVS DPDK bridges, but OVS cannot create a vhost_user socket at
>/var/run/openvswitch/vhost-user-1 – the following command does not work:
>
>cisco at caesar:~$ sudo ovs-vsctl add-port ovsdpdkbr1 vhost-user-1 -- set Interface vhost-
>user-1 type=dpdkvhostuser
>^C2016-05-26T17:11:16Z|00002|fatal_signal|WARN|terminating with signal 2 (Interrupt)
>
>cisco at caesar:~$ sudo ovs-vsctl show
>cf57d236-c8ec-4099-a621-8fda17920828
>    Bridge "ovsdpdkbr2"
>        Port "ovsdpdkbr2"
>            Interface "ovsdpdkbr2"
>                type: internal
>        Port "dpdk1"
>            Interface "dpdk1"
>                type: dpdk
>    Bridge "ovsdpdkbr1"
>        Port "vhost-user-1"
>            Interface "vhost-user-1"
>                type: dpdkvhostuser
>        Port "ovsdpdkbr1"
>            Interface "ovsdpdkbr1"
>                type: internal
>        Port "dpdk0"
>            Interface "dpdk0"
>                type: dpdk
>    ovs_version: "2.5.0"
>cisco at caesar:~$
>
>There is NO vhost-user-1 in /var/run/openvswitch/
>cisco at caesar:~$ ls -la /var/run/openvswitch/
>total 4
>drwxr-xr-x  2 root root  100 May 26 11:51 .
>drwxr-xr-x 27 root root 1040 May 26 12:06 ..
>srwxr-x---  1 root root    0 May 26 11:49 db.sock
>srwxr-x---  1 root root    0 May 26 11:49 ovsdb-server.5559.ctl
>-rw-r--r--  1 root root    5 May 26 11:49 ovsdb-server.pid
>cisco at caesar:~$
>cisco at caesar:~$
>
>
>So, my questions are:
>1. What is the right config line for servers with two physical CPUs (in my case node0 and
>node1, with 18 cores each) for
>echo "DPDK_OPTS='--dpdk -c 0x1 -n 4 -m 2048 --vhost-owner libvirt-qemu:kvm --vhost-perm
>0664'" | sudo tee -a /etc/default/openvswitch-switch

Hi Nikolai,

You mentioned that when you specify the memory argument as '-m 2048', you cannot add dpdk0, but when you specify '--socket-mem 4096,4096' (i.e. 4096 MB for NUMA node 0, 4096 MB for NUMA node 1), the dpdk phy ports are added successfully.
This leads me to believe that your NICs are installed in the PCI slots attached to NUMA node 1 - this is easily confirmed with the 'lstopo' tool, part of the 'hwloc' package: https://www.open-mpi.org/projects/hwloc/.
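For instance, reading the sysfs attribute of one of the X520 ports from your bind output should confirm this (just a quick check; lstopo gives the full topology):

    cat /sys/bus/pci/devices/0000:8f:00.0/numa_node    # prints 1 if the NIC is attached to NUMA node 1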
To correct this, either move your NICs to the PCI slots for NUMA node 0, or change your --socket-mem argument to '0,2048' (i.e. no hugepage memory on node 0, 2048 MB on node 1).
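In terms of the Ubuntu packaging, that would mean replacing the existing DPDK_OPTS line in /etc/default/openvswitch-switch with something along these lines (a sketch based on the line from the guide, with only the memory arguments changed):

    DPDK_OPTS='--dpdk -c 0x1 -n 4 --socket-mem 0,2048 --vhost-owner libvirt-qemu:kvm --vhost-perm 0664'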

Hope this helps,
Mark
 
>
>2. How can OVS create a vhost_user socket at /var/run/openvswitch/vhost-user-1?
>
>
>
>And yes, HugePage support is enabled:
>root at caesar:/home/cisco# cat /proc/meminfo | grep Huge
>AnonHugePages:     16384 kB
>HugePages_Total:      64
>HugePages_Free:        0
>HugePages_Rsvd:        0
>HugePages_Surp:        0
>Hugepagesize:       2048 kB
>root at caesar:/home/cisco#
>
>In /etc/default/grub I have:
>GRUB_CMDLINE_LINUX_DEFAULT="iommu=pt intel_iommu=on hugepages=8192 hugepagesz=1G
>hugepages=8 isolcpus=4,5,6,7,8"
>
>
>Thanks,
>Nikolai

