[ovs-discuss] OVS - QEMU unable to create vhostuserclient socket at /var/run/openvswitch

Joan Vidal Joan.Vidal at omniaccess.com
Fri May 17 09:13:08 UTC 2019


Hi Flavio,

SELinux is disabled.
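
(For completeness, a quick way to confirm that and to rule out lingering policy denials is,
for example:

  getenforce                     # should print "Disabled"
  ausearch -m AVC -ts recent     # should report no recent AVC denials

getenforce and ausearch are the stock RHEL/CentOS tools; the exact options above are just one
way to invoke them.)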


Joan

________________________________
From: Flavio Leitner <fbl at sysclose.org>
Sent: Friday, May 17, 2019 10:35
To: Joan Vidal
Cc: Ian Stokes; ovs-discuss at openvswitch.org
Subject: Re: [ovs-discuss] OVS - QEMU unable to create vhostuserclient socket at /var/run/openvswitch


Do you have SELinux enabled? Sounds like the policies are not
updated.
fbl

On Thu, May 16, 2019 at 04:16:04PM +0000, Joan Vidal wrote:
> Hi Ian,
>
> Upgraded QEMU to version 2.10.0 (qemu-kvm-ev-2.10.0-21.el7_5.7.1).
> Changed /etc/libvirt/qemu.conf to:
> user = "root"
> group = "root"
>
> And added the following lines to the guest XML definition:
>
>
>   <cputune>
>     <vcpupin vcpu='0' cpuset='13'/>
>     <vcpupin vcpu='1' cpuset='29'/>
>     <vcpupin vcpu='2' cpuset='14'/>
>     <vcpupin vcpu='3' cpuset='30'/>
>     <vcpupin vcpu='4' cpuset='15'/>
>     <vcpupin vcpu='5' cpuset='31'/>
>     <emulatorpin cpuset='13-15,29-31'/>
>   </cputune>
>
>     <numa>
>       <cell id='0' cpus='3,5' memory='8387584' unit='KiB' memAccess='shared'/>
>     </numa>
>
>     <interface type='vhostuser'>
>       <mac address='0c:c4:7a:ea:4b:b2'/>
>       <source type='unix' path='/var/run/openvswitch/vhost0' mode='server'/>
>       <target dev='vhost0'/>
>       <model type='virtio'/>
>       <driver queues='2'>
>         <host mrg_rxbuf='on'/>
>       </driver>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
>     </interface>
>     <interface type='vhostuser'>
>       <mac address='0c:c4:7a:ea:4b:b3'/>
>       <source type='unix' path='/var/run/openvswitch/vhost1' mode='server'/>
>       <target dev='vhost1'/>
>       <model type='virtio'/>
>       <driver queues='2'>
>         <host mrg_rxbuf='on'/>
>       </driver>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
>     </interface>
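>
> (Worth double-checking alongside the interfaces above: the OVS libvirt example for vhost-user
> also backs the guest memory with hugepages so the vhost-user regions can be shared. A minimal
> sketch, with page size and nodeset purely illustrative:
>
>   <memoryBacking>
>     <hugepages>
>       <page size='2' unit='M' nodeset='0'/>
>     </hugepages>
>   </memoryBacking>
>
> together with the memAccess='shared' already set on the NUMA cell.)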
>
>
> Still getting same error:
>
> 2019-05-16T16:07:27.921191Z qemu-kvm: -chardev socket,id=charnet2,path=/var/run/openvswitch/vhost0,server: Failed to bind socket to /var/run/openvswitch/vhost0: Permission denied
> 2019-05-16 16:07:28.140+0000: shutting down, reason=failed
>
>
>  *Joan Vidal*
>
>  *OmniAccess*
>
> -----Original Message-----
> From: Ian Stokes <ian.stokes at intel.com>
> Sent: 16 May 2019 16:33
> To: Joan Vidal <Joan.Vidal at omniaccess.com>; ovs-discuss at openvswitch.org
> Subject: Re: [ovs-discuss] OVS - QEMU unable to create vhostuserclient socket at /var/run/openvswitch
>
> On 5/16/2019 3:04 PM, Joan Vidal wrote:
> > Hi,
> >
> > I'm trying to use OVS-DPDK  vhostuserclient ports with a qemu guest on
> > a CentOS host.
> >
> > QEMU guest fails to start with the following error:
> >
> > error: internal error: process exited while connecting to monitor:
> > 2019-05-16T13:15:27.481680Z qemu-kvm: -chardev socket,id=charnet2,path=/var/run/openvswitch/vhost0,server: Failed to bind socket: Permission denied
> > 2019-05-16T13:15:27.482078Z qemu-kvm: -chardev socket,id=charnet2,path=/var/run/openvswitch/vhost0,server: chardev: opening backend "socket" failed
> >
> >
> > It seems to be an issue with qemu-kvm's permission to create the sockets in
> > /var/run/openvswitch. I followed this article:
> > https://www.redhat.com/en/blog/ovs-dpdk-migrating-vhostuser-socket-mode-red-hat-openstack
> >
> > But it still shows the same error. Even running qemu-kvm and openvswitch as
> > root still returns Permission denied.
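> >
> > (A quick, hedged sanity check for the permission side, assuming the guest is launched as the
> > "qemu" account, is something like:
> >
> >   sudo -u qemu touch /var/run/openvswitch/perm-test && rm /var/run/openvswitch/perm-test
> >
> > The file name is only an example; if even this fails, the problem is plain directory/group
> > permissions rather than anything vhost-user specific.)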
> >
> >
> > This is the system configuration:
> >
> > --------------------------
> > Software versions:
> >
> > (Host) CentOS Linux release 7.6.1810 (Core)
> > (Guest) CentOS Linux release 7.4.1708 (Core)
> > QEMU emulator version 1.5.3 (qemu-kvm-1.5.3-160.el7_6.1)
>
> The minimum QEMU version recommended for use with vhostuserclient is QEMU 2.7 (See OVS Documentation below).
>
> http://docs.openvswitch.org/en/latest/topics/dpdk/vhost-user/#vhost-user-vs-vhost-user-client
>
> Do you see the same issue when using QEMU 2.7?
>
> Also from below it looks like you are using libvirt to configure QEMU?
>
> If so, have you followed the steps outlined in the link below?
>
> http://docs.openvswitch.org/en/latest/topics/dpdk/vhost-user/#adding-vhost-user-ports-to-the-guest-libvirt
>
> Ian
>
> > ovs-vswitchd (Open vSwitch) 2.11.1
> > DPDK 18.11.0
> >
> >
> > --------------------------
> > OVS configuration
> >
> >
> > #ovs-vsctl show
> > 462375d2-8f6a-4a72-ad49-af8c00720da9
> >      Bridge br-subscriber
> >          Port br-subscriber
> >              Interface br-subscriber
> >                  type: internal
> >          Port "ens1f0"
> >              Interface "ens1f0"
> >                  type: dpdk
> >                  options: {dpdk-devargs="0000:81:00.0"}
> >          Port "vhost0"
> >              Interface "vhost0"
> >                  type: dpdkvhostuserclient
> >                  options: {vhost-server-path="/var/run/openvswitch/vhost0"}
> >      Bridge br-internet
> >          Port br-internet
> >              Interface br-internet
> >                  type: internal
> >          Port "vhost1"
> >              Interface "vhost1"
> >                  type: dpdkvhostuserclient
> >                  options: {vhost-server-path="/var/run/openvswitch/vhost1"}
> >          Port "ens1f1"
> >              Interface "ens1f1"
> >                  type: dpdk
> >                  options: {dpdk-devargs="0000:81:00.1"}
> >      ovs_version: "2.11.1"
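> >
> > (For reference, dpdkvhostuserclient ports like the two above are typically created along the
> > lines of the OVS vhost-user documentation, e.g.:
> >
> >   ovs-vsctl add-port br-internet vhost1 -- set Interface vhost1 \
> >       type=dpdkvhostuserclient options:vhost-server-path=/var/run/openvswitch/vhost1
> >
> > the bridge and port names here simply mirror the configuration shown.)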
> >
> > #ovs-vsctl get Open_vSwitch . other_config
> > {dpdk-init="true", dpdk-socket-mem="0,2048", pmd-cpu-mask="0x02000200"}
> >
> >
> > #cat /etc/sysconfig/openvswitch
> > OPTIONS=""
> > OVS_USER_ID="openvswitch:hugetlbfs"
> >
> >
> > # ls -la /var/run/openvswitch/
> > total 12
> > drwxrwsr-x.  3 openvswitch hugetlbfs 260 May 16 12:00 .
> > drwxr-xr-x. 29 root        root      920 May 16 12:14 ..
> > srwxr-x---.  1 openvswitch hugetlbfs   0 May 16 12:00 br-internet.mgmt
> > srwxr-x---.  1 openvswitch hugetlbfs   0 May 16 12:00 br-internet.snoop
> > srwxr-x---.  1 openvswitch hugetlbfs   0 May 16 12:00 br-subscriber.mgmt
> > srwxr-x---.  1 openvswitch hugetlbfs   0 May 16 12:00 br-subscriber.snoop
> > srwxr-x---.  1 openvswitch hugetlbfs   0 May 16 12:00 db.sock
> > drwx------.  3 openvswitch hugetlbfs  60 May 16 12:00 dpdk
> > srwxr-x---.  1 openvswitch hugetlbfs   0 May 16 12:00 ovsdb-server.21194.ctl
> > -rw-r--r--.  1 openvswitch hugetlbfs   6 May 16 12:00 ovsdb-server.pid
> > srwxr-x---.  1 openvswitch hugetlbfs   0 May 16 12:00 ovs-vswitchd.21250.ctl
> > -rw-r--r--.  1 openvswitch hugetlbfs   6 May 16 12:00 ovs-vswitchd.pid
> > -rw-r--r--.  1 root        root       41 May 16 12:00 useropts
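> >
> > (Given the drwxrwsr-x openvswitch:hugetlbfs mode above, whichever account launches qemu-kvm
> > has to be in the hugetlbfs group to create sockets there. A hedged check/fix, assuming the
> > "qemu" account from qemu.conf:
> >
> >   id qemu                        # the groups list should include hugetlbfs
> >   usermod -a -G hugetlbfs qemu   # add the group if it is missing
> >
> > followed by a libvirtd restart so the new membership is picked up.)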
> >
> >
> > --------------------------
> > qemu configuration
> >
> > #cat /etc/libvirt/qemu.conf
> > user = "qemu"
> > group = "hugetlbfs"
> >
> > vhost-user parameters used when starting the guest with /usr/libexec/qemu-kvm:
> >
> > -chardev socket,id=charnet2,path=/var/run/openvswitch/vhost0,server
> > -netdev vhost-user,chardev=charnet2,id=hostnet2
> > -device virtio-net-pci,netdev=hostnet2,id=net2,mac=0c:c4:7a:ea:4b:b2,bus=pci.0,addr=0x5
> > -chardev socket,id=charnet3,path=/var/run/openvswitch/vhost1,server
> > -netdev vhost-user,chardev=charnet3,id=hostnet3
> > -device virtio-net-pci,netdev=hostnet3,id=net3,mac=0c:c4:7a:ea:4b:b3,bus=pci.0,addr=0x6
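> >
> > (When launching qemu-kvm directly like this, the OVS vhost-user guide also passes shared,
> > hugepage-backed guest memory, roughly:
> >
> >   -object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on
> >   -numa node,memdev=mem -mem-prealloc
> >
> > with size and mem-path adjusted to the host; without share=on the vhost-user backend cannot
> > map the guest memory.)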
> >
> > --------------------------
> >
> > Logs
> >
> > /var/log/openvswitch/ovs-vswitchd.log
> > 2019-05-16T13:05:23.421Z|00142|dpdk|INFO|VHOST_CONFIG: vhost-user client: socket created, fd: 62
> > 2019-05-16T13:05:23.421Z|00143|netdev_dpdk|INFO|vHost User device 'vhost0' created in 'client' mode, using client socket '/var/run/openvswitch/vhost0'
> > 2019-05-16T13:05:23.421Z|00144|dpdk|WARN|VHOST_CONFIG: failed to connect to /var/run/openvswitch/vhost0: No such file or directory
> > 2019-05-16T13:05:23.421Z|00145|dpdk|INFO|VHOST_CONFIG: /var/run/openvswitch/vhost0: reconnecting...
> > 2019-05-16T13:05:23.421Z|00146|dpif_netdev|INFO|Core 9 on numa node 1 assigned port 'ens1f0' rx queue 0 (measured processing cycles 123318).
> > 2019-05-16T13:05:23.421Z|00147|dpif_netdev|INFO|Core 25 on numa node 1 assigned port 'ens1f1' rx queue 0 (measured processing cycles 46293).
> > 2019-05-16T13:05:23.421Z|00148|dpif_netdev|WARN|There's no available (non-isolated) pmd thread on numa node 0. Queue 0 on port 'vhost0' will be assigned to the pmd on core 25 (numa node 1). Expect reduced performance.
> > 2019-05-16T13:05:23.421Z|00149|bridge|INFO|bridge br-internet: added interface vhost0 on port 3
> > 2019-05-16T13:05:58.525Z|00150|bridge|INFO|bridge br-internet: deleted interface vhost0 on port 3
> > 2019-05-16T13:05:58.525Z|00151|dpif_netdev|INFO|Core 9 on numa node 1 assigned port 'ens1f0' rx queue 0 (measured processing cycles 244368).
> > 2019-05-16T13:05:58.525Z|00152|dpif_netdev|INFO|Core 25 on numa node 1 assigned port 'ens1f1' rx queue 0 (measured processing cycles 92211).
> > 2019-05-16T13:06:08.335Z|00153|dpdk|INFO|VHOST_CONFIG: vhost-user client: socket created, fd: 62
> > 2019-05-16T13:06:08.335Z|00154|netdev_dpdk|INFO|vHost User device 'vhost1' created in 'client' mode, using client socket '/var/run/openvswitch/vhost1'
> > 2019-05-16T13:06:08.335Z|00155|dpdk|WARN|VHOST_CONFIG: failed to connect to /var/run/openvswitch/vhost1: No such file or directory
> > 2019-05-16T13:06:08.335Z|00156|dpdk|INFO|VHOST_CONFIG: /var/run/openvswitch/vhost1: reconnecting...
> >
> > Any idea will be appreciated, thanks!
> >
> >
> >
> > *Joan Vidal*
> >
> > *OmniAccess*
> >
> >
> >
> >
>
> _______________________________________________
> discuss mailing list
> discuss at openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss