[ovs-dev] Phy-VM connectivity issue

ppnaik ppnaik at cse.iitb.ac.in
Wed Mar 6 12:18:10 UTC 2019


Thanks for the response Ilya.
We could get this setup working now.

However, we could not get it working when giving two queues to
the VM interface.

We added the queue option when creating the interface on OVS.
We also enabled multiqueue in the VM XML, set the queues on the
interface, and set the vectors as well.
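
In concrete terms, the configuration looks roughly like this (the PMD
core mask is only an example for our machine):

ovs-vsctl set Interface dpdk-p0 options:n_rxq=2
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xC

and in the qemu:commandline section of the VM XML:

-netdev vhost-user,id=mynet1,chardev=char1,vhostforce=on,queues=2
-device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet1,mq=on,vectors=6

(we set vectors to 2 * queues + 2)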

ethtool inside the VM shows:

ethtool -l ens3
Channel parameters for ens3:
Pre-set maximums:
RX:		0
TX:		0
Other:		0
Combined:	2
Current hardware settings:
RX:		0
TX:		0
Other:		0
Combined:	2

However, a DPDK application inside the VM is not able to receive packets
on both queues; it still only works with one queue.
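
For reference, one way we know of to check whether both queues actually
receive traffic is testpmd inside the VM (assuming the virtio device
there is bound to a DPDK-compatible driver):

testpmd -l 0-2 -n 4 -- -i --rxq=2 --txq=2 --nb-cores=2
testpmd> start
testpmd> stop

With two forwarding streams, the statistics printed on 'stop' are broken
down per RX port/queue, so it is visible whether both queues saw packets.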

Please help us resolve this issue.

Thanks,
Priyanka


On 2019-03-06 16:48, Ilya Maximets wrote:
> Hi.
>
> First of all, you should look at ovs-vswitchd.log and the log of qemu.
> There might be some errors there.
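> (With a from-source installation under /usr/local those are typically
> /usr/local/var/log/openvswitch/ovs-vswitchd.log and
> /var/log/libvirt/qemu/<domain>.log.)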
>
> Some thoughts inline.
>
> Best regards, Ilya Maximets.
>
>> Hi All,
>>
>> Our setup is as follows:
>>
>> We have two servers which are connected peer to peer over 40G
>> interfaces.
>>
>> On one server we have setup OVS and added the physical 40G interface 
>> as
>> a DPDK interface to the ovs bridge.
>>
>> We created another dpdkvhostuser interface for the VM. We added this
>> interface to the VM (by editing the XML). We are able to see this
>> interface inside the VM and have configured an IP address on it.
>>
>> We want to communicate between the other server and VM inside this
>> server through the OVS interface created for the VM.
>>
>> The steps we followed (on the server with OVS) are:
>>
>> modprobe uio
>
> IMHO, it's better to use vfio-pci. But it's up to you.
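> For example (assuming the IOMMU is enabled, e.g. intel_iommu=on on the
> kernel cmdline):
>
>     modprobe vfio-pci
>     ./dpdk-devbind.py --bind=vfio-pci 0000:81:00.1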
>
>>
>> cd /usr/src/dpdk-18.11/x86_64-native-linuxapp-gcc/kmod/
>>
>> insmod igb_uio.ko
>>
>> cd /usr/src/dpdk-18.11/usertools/
>> ./dpdk-devbind.py --bind=igb_uio 0000:81:00.1
>>
>>   export PATH=$PATH:/usr/local/share/openvswitch/scripts
>>   export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
>>
>>   ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock
>> --remote=db:Open_vSwitch,Open_vSwitch,manager_options
>> --private-key=db:Open_vSwitch,SSL,private_key
>> --certificate=db:Open_vSwitch,SSL,certificate
>> --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --pidfile --detach
>> --log-file
>>
>>   ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
>>   ovs-ctl --no-ovsdb-server --db-sock="$DB_SOCK" start
>>
>> ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
>> ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk
>> options:dpdk-devargs=0000:81:00.1
>> ovs-vsctl add-port br0 dpdkvhostuser0     -- set Interface
>> dpdkvhostuser0 type=dpdkvhostuser ofport_request=3
>
> Consider using 'dpdkvhostuserclient' a.k.a. 'vhost-user-client' ports 
> instead
> because server mode 'dpdkvhostuser' ports are deprecated in OVS.
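> A rough sketch (the socket path here is only an example):
>
>     ovs-vsctl add-port br0 vhostclient0 -- set Interface vhostclient0 \
>         type=dpdkvhostuserclient \
>         options:vhost-server-path=/tmp/vhostclient0 ofport_request=3
>
> With client mode ports qemu creates the socket, so the
> '-chardev socket,...' argument additionally needs ',server'.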
>
>>
>> ovs-ofctl add-flow br0 in_port=1,action=output:3
>> ovs-ofctl add-flow br0 in_port=3,action=output:1
>>
>> echo 'vm.nr_hugepages=2048' > /etc/sysctl.d/hugepages.conf
>
> Here you're allocating 2048 pages of 2MB. This is not enough
> for your setup. You're trying to allocate 4096 MB for qemu memory
> backing + OVS will need some hugepage memory for the mempools and 
> stuff.
> In total it'll be:
>     4096 + 1024 (default for OVS if you have only 1 NUMA node) MB,
> i.e. you need at least 512 more 2MB pages.
>
> Do you need to reload sysctl for changes to be applied?
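> E.g. 'sysctl -p /etc/sysctl.d/hugepages.conf', or just
> 'sysctl -w vm.nr_hugepages=2560' to change the value at runtime.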
>
>> grep HugePages_ /proc/meminfo
>
> So, where is the output? If the output is empty, you have no pages 
> allocated.
>
>>
>> edit VM XML to add this interface:
>
> If you're starting a new VM with an updated XML, then I'd suggest using
> the proper libvirt syntax, i.e. it's better to use sections like
> "memoryBacking" and "interface" instead of manually attaching cmdline
> arguments.
> See
> http://docs.openvswitch.org/en/latest/topics/dpdk/vhost-user/#sample-xml
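> A minimal sketch of the relevant sections (queue count and socket path
> taken from your setup, the memory/NUMA values are placeholders):
>
>     <memoryBacking>
>       <hugepages/>
>     </memoryBacking>
>
>     <cpu>
>       <numa>
>         <cell id='0' cpus='0-1' memory='4194304' unit='KiB'
>               memAccess='shared'/>
>       </numa>
>     </cpu>
>
>     <interface type='vhostuser'>
>       <source type='unix'
>               path='/usr/local/var/run/openvswitch/dpdkvhostuser0'
>               mode='client'/>
>       <model type='virtio'/>
>       <driver queues='2'/>
>     </interface>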
>
>>
>> first line:
>> <domain type='kvm'
>> xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
>>
>> add before </domain> tag:
>>
>> <qemu:commandline>
>>      <qemu:arg value='-chardev'/>
>>      <qemu:arg
>> 
>> value='socket,id=char1,path=/usr/local/var/run/openvswitch/dpdkvhostuser0'/>
>>      <qemu:arg value='-netdev'/>
>>      <qemu:arg
>> value='vhost-user,id=mynet1,chardev=char1,vhostforce=on,queues=1'/>
>>      <qemu:arg value='-device'/>
>>      <qemu:arg
>> 
>> value='virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet1,mq=on,vectors=4'/>
>>      <qemu:arg value='-m'/>
>>      <qemu:arg value='4096'/>
>>      <qemu:arg value='-object'/>
>>      <qemu:arg
>> 
>> value='memory-backend-file,id=mem1,size=4096M,mem-path=/dev/hugepages,share=on'/>
>>      <qemu:arg value='-mem-prealloc'/>
>>      <qemu:arg value='-numa'/>
>>      <qemu:arg value='node,memdev=mem1'/>
>>    </qemu:commandline>
>>
>> Please help us resolve this issue. We assumed ping would work between
>> the other server and the VM, but it is not working in our case. Also,
>> let us know if we are missing a setup step or if there is some
>> misconfiguration. If ping is not expected to work, could you let us
>> know another way to verify the connectivity?
>>
>> Thanks,
>> Priyanka

