[ovs-dev] intra VM communication

Srikanth Akula srikanth044 at gmail.com
Thu Jul 23 11:54:56 UTC 2015


Yes, you are right: it is during vswitch init that it reaches that point,
when no VM is running yet. But once I start the VMs I don't see OVS trying
to dequeue at all. (OVS just stops dequeuing after a few failed attempts.)

Is this the expected behavior?
/srikanth



On Thursday, July 23, 2015, Loftus, Ciara <ciara.loftus at intel.com> wrote:

> >
> > Hi Ciara,
> > When I tried to debug the issue further, I could see that
> >
> >     if (OVS_UNLIKELY(!is_vhost_running(virtio_dev))) {
> >         return EAGAIN;
> > <<<<<<<<<<< it always returns from here >>>>>>>>
> >     }
> > >>>>>> I believe that dequeue has to be called to get the packets from
> > the guest to user space.
> >     nb_rx = rte_vhost_dequeue_burst(virtio_dev, qid,
> >                                     vhost_dev->dpdk_mp->mp,
> >                                     (struct rte_mbuf **)packets,
> >                                     NETDEV_MAX_BURST);
>
> When you enter that section of code it usually means your vHost device has
> not been brought up in a VM yet.
> It's unclear which of your 4 vHost devices is failing the is_vhost_running
> test, but my guess is that it is 'dpdk2' - from your setup it appears this
> device doesn't get used in a virtual machine and thus OVS will never
> attempt to dequeue from that device because it is essentially NULL.
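>
> For reference, is_vhost_running() in the 2.4-era lib/netdev-dpdk.c is (if I
> remember the code correctly - treat the exact fields as an approximation)
> just a null-pointer and flag check on the virtio device:
>
>     static bool
>     is_vhost_running(struct virtio_net *dev)
>     {
>         /* Only true once QEMU has connected to the vhost-user socket and
>          * the guest driver has brought the virtio device up. */
>         return (dev != NULL && (dev->flags & VIRTIO_DEV_RUNNING));
>     }
>
> Until that happens, the vhost rxq_recv path returns EAGAIN, which is the
> return you are seeing.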
>
> Please see below a comment on your QEMU command lines from the previous
> email.
>
> Thanks,
> Ciara
>
> >
> > Below are the logs for my vswitch with dpdkvhostuser ports.
> >
> > 2015-07-22T17:33:40.395Z|00020|bridge|INFO|bridge temp0: using datapath
> > ID 0000e295aa430244
> > 2015-07-22T17:33:40.395Z|00021|connmgr|INFO|temp0: added service
> > controller "punix:/var/run/openvswitch/temp0.mgmt"
> > 2015-07-22T17:33:40.462Z|00022|dpif_netdev|INFO|Created 1 pmd threads
> > on numa node 0
> > 2015-07-22T17:33:40.465Z|00023|bridge|INFO|ovs-vswitchd (Open vSwitch)
> > 2.4.90
> > 2015-07-22T17:33:40.466Z|00001|dpif_netdev(pmd41)|INFO|Core 0
> > processing port 'dpdk3'
> > 2015-07-22T17:33:40.466Z|00002|dpif_netdev(pmd41)|INFO|Core 0
> > processing port 'dpdk2'
> > 2015-07-22T17:33:40.466Z|00003|dpif_netdev(pmd41)|INFO|Core 0
> > processing port 'dpdk1'
> > 2015-07-22T17:33:40.466Z|00004|dpif_netdev(pmd41)|INFO|Core 0
> > processing port 'dpdk0'
> > 2015-07-22T17:33:44.470Z|00024|memory|INFO|729380 kB peak resident set
> > size after 10.3 seconds
> > 2015-07-22T17:33:44.470Z|00025|memory|INFO|handlers:13 ports:5
> > revalidators:5 rules:5
> >
> > I strongly suspect that I have missed some configuration here.
> > Please let me know.
> > -Srikanth
> >
> >
> > On Wed, Jul 22, 2015 at 11:30 AM, Srikanth Akula <srikanth044 at gmail.com> wrote:
> > Hi Ciara,
> > Thank you for your reply .
> >
> > I am assuming we don't need to configure any flows if both ports are in
> > the OVS bridge (each connected to a guest); please let me know if I am
> > wrong. However, I tried configuring the flows as per your suggestion, but
> > I still cannot see any packets in the host for that bridge.
> >
> > I am using Qemu 2.2.0
> > qemu-system-x86_64 --version
> > QEMU emulator version 2.2.0, Copyright (c) 2003-2008 Fabrice Bellard
> >
> > My qemu commandline options :
> >
> > VM1 :::::
> >
> > /usr/bin/qemu-system-x86_64 -name Vhost1 -S -machine pc-i440fx-
> > 2.2,accel=kvm,usb=off -cpu
> > SandyBridge,+invpcid,+erms,+bmi2,+smep,+avx2,+b
> > mi1,+fsgsbase,+abm,+pdpe1gb,+rdrand,+f16c,+osxsave,+movbe,+dca,+pcid
> > ,+pdcm,+xtpr,+fma,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,
> > +tm,+ht,+ss,+acpi,+ds,+vme -m 15024 -realtime mlock=off -smp 16,so
> > ckets=16,cores=1,threads=1 -uuid fed77f13-ba10-57e4-7dd8-7629e6181657 -
> > no-user-config -nodefaults -chardev
> > socket,id=charmonitor,path=/var/lib/libvirt/qemu/Vhost1.monitor,server,no
> > wait -mon chardev=char
> > monitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot
> > strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
> > file=/test.img,if=none,id=drive-virtio-disk0,format=ra
> > w -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-
> > disk0,id=virtio-disk0,bootindex=1 -netdev
> > tap,fd=24,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-
> > pci,netdev=hostnet0,id=net0
> > ,mac=52:54:00:ca:d5:80,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -
> > device isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:0 -device cirrus-
> > vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-
> > pci,id=balloon0,bus=pci.0,addr=0x6 -chardev
> > socket,id=char1,path=/var/run/openvswitch/dpdk0 -netdev type=vhost-
> > user,id=mynet1,chardev=char1,vhostforce -device virtio-net-
> > pci,mac=00:00:00:00:00:01,
> > netdev=mynet1 -chardev
> > socket,id=char2,path=/var/run/openvswitch/dpdk1 -netdev type=vhost-
> > user,id=mynet2,chardev=char2,vhostforce -device virtio-net-
> > pci,mac=00:00:00:00:00:02,netdev=mynet2 -object memory-backend-
> > file,id=mem,size=2048M,mem-path=/mnt/huge/,share=on
> >
> > VM2::::
> >
> > /usr/bin/qemu-system-x86_64 -name Vhost2 -S -machine pc-i440fx-
> > 2.2,accel=kvm,usb=off -cpu
> > SandyBridge,+invpcid,+erms,+bmi2,+smep,+avx2,+b
> > mi1,+fsgsbase,+abm,+pdpe1gb,+rdrand,+f16c,+osxsave,+movbe,+dca,+pcid
> > ,+pdcm,+xtpr,+fma,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,
> > +tm,+ht,+ss,+acpi,+ds,+vme -m 15024 -realtime mlock=off -smp 8,soc
> > kets=8,cores=1,threads=1 -uuid 30bc0154-7057-a7d6-12e1-7a2d8a178d47 -
> > no-user-config -nodefaults -chardev
> > socket,id=charmonitor,path=/var/lib/libvirt/qemu/Vhost2.monitor,server,no
> > wait -mon chardev=charmo
> > nitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -
> > device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
> > file=/test2.img,if=none,id=drive-virtio-disk0,format=raw
> >  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-
> > disk0,id=virtio-disk0,bootindex=1 -netdev
> > tap,fd=24,id=hostnet0,vhost=on,vhostfd=26 -device virtio-net-
> > pci,netdev=hostnet0,id=net0,
> > mac=52:54:00:4d:91:f5,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -
> > device isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:1 -device cirrus-
> > vga,id=video0,bus=pci.0,addr=0x2 -device intel-
> > hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-
> > codec0,bus=sound0.0,cad=0 -device virtio-balloon-
> > pci,id=balloon0,bus=pci.0,addr=0x6 -chardev
> > socket,id=char1,path=/var/run/openvswitch/dpdk1 -netdev type=vhost-
> > user,id=mynet1,chardev=char1,vhostforce -device virtio-net-
> > pci,mac=00:00:00:00:00:03,netdev=mynet1 -chardev
> > socket,id=char2,path=/var/run/openvswitch/dpdk3 -netdev type=vhost-
> > user,id=mynet2,chardev=char2,vhostforce -device virtio-net-
> > pci,mac=00:00:00:00:00:04,netdev=mynet2 -object memory-backend-
> > file,id=mem,size=2048M,mem-path=/mnt/huge/,share=on
> > ovs-vsctl :
>
> You are attaching the 'dpdk1' device to two VMs - I expect this is why you
> are experiencing problems. I assume you intended to use dpdk2?
>
> >
> > ovs-vsctl show
> > 3c25dda6-46c4-454c-8bdf-3832636b1f71
> >     Bridge "temp0"
> >         Port "dpdk1"
> >             Interface "dpdk1"
> >                 type: dpdkvhostuser
> >         Port "temp0"
> >             Interface "temp0"
> >                 type: internal
> >         Port "dpdk2"
> >             Interface "dpdk2"
> >                 type: dpdkvhostuser
> >         Port "dpdk0"
> >             Interface "dpdk0"
> >                 type: dpdkvhostuser
> >         Port "dpdk3"
> >             Interface "dpdk3"
> >                 type: dpdkvhostuser
> >     ovs_version: "2.4.90"
> >
> > My vswitchd options
> >
> >  ovs-vswitchd --dpdk -c 0x0FF8 -n 4 --socket-mem 1024 0 --
> > unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --
> > mlockall --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log --
> > detach --monitor
> >
> >
> > ovs-ofctl dump-flows temp0
> > NXST_FLOW reply (xid=0x4):
> >  cookie=0x0, duration=871.033s, table=0, n_packets=0, n_bytes=0,
> > idle_age=871, in_port=ANY actions=output:3
> >
> > I am trying to set it up in the following way:
> >
> >     [vm1] <dpdk1----------dpdk2> [vm2]
> > and the IP addresses on the two VMs are in the same subnet (2.2.2.x/24).
> >
> >
> > Please let me know if any of this configuration has issues.
> > -Srikanth
> >
> >
> > On Wed, Jul 22, 2015 at 2:39 AM, Loftus, Ciara <ciara.loftus at intel.com> wrote:
> > >
> > > Hello,
> > >
> > > I am trying to use vhost-user to send traffic between VMs. I have
> > > configured two "dpdkvhostuser" interfaces, with each VM using one of
> > > them.
> > >
> > > vswitchd is running with dpdk.
> > > QEMU is running with the vhost interfaces.
> > >
> > > The guest OS can see the interfaces - verified with the static MACs I
> > > have assigned to the vhost interfaces.
> > >
> > > But I am not able to ping between these two VMs. Could somebody tell
> > > me how to debug this further?
> >
> > Hi,
> >
> > To ping between the VMs first assign appropriate IP addresses, then
> > configure the following flows:
> > in_port=<vhostvm1>,actions=output:<vhostvm2>
> > in_port=<vhostvm2>,actions=output:<vhostvm1>
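> >
> > For example, assuming the two vhost ports came up as OpenFlow ports 1 and 2
> > (the numbers here are illustrative - check the real ones with
> > "ovs-ofctl show temp0"), that would be:
> >
> >     ovs-ofctl add-flow temp0 in_port=1,actions=output:2
> >     ovs-ofctl add-flow temp0 in_port=2,actions=output:1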
> >
> > These flows allow the request/response packets to take the necessary path
> > for a successful ping, and you should see the stats incrementing with
> > ovs-ofctl dump-flows.
> >
> > If you've already done this and it's still not working, please ensure your
> > QEMU version is v2.2.0 or greater.
> >
> > Thanks,
> > Ciara
> >
> > >
> > > In the host i could see the ovs-netdev & ovs bridge i have created .
> > >
> > > Regards,
> > > Srikanth
> > > _______________________________________________
> > > dev mailing list
> > > dev at openvswitch.org
> > > http://openvswitch.org/mailman/listinfo/dev
> >
>
>


