[ovs-dev] packets getting dropped in vhostuser port

Kapil Adhikesavalu kapil20084 at gmail.com
Thu Oct 20 13:58:02 UTC 2016


Hi

I was moving to the latest version of OVS + DPDK for jumbo frame support.
Will I be able to get jumbo frame support with OVS 2.5.90 + DPDK 16.04? I
don't need a way to configure the MTU as in 2.6; a hard-coded jumbo frame
size would do.

I see testpmd has a way to configure the MTU on DPDK (in 16.04). So, when
creating the dpdk interface through OVS, would I be able to modify the OVS
code to pass a default MTU of 2000 when the dpdk port is created?
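For context on what such a hard-coded value has to cover: OVS turns the port MTU into a maximum on-wire frame length before configuring the device (in netdev-dpdk.c; the macro name MTU_TO_MAX_LEN and the +18 bytes below are from my reading of that file, so please double-check against your tree). A minimal sketch of the arithmetic:

```python
# Sketch of the frame-size arithmetic netdev-dpdk does when sizing a port.
# MTU_TO_MAX_LEN and the constants are from memory -- verify in your tree.
ETHER_HDR_LEN = 14  # dst MAC + src MAC + ethertype
ETHER_CRC_LEN = 4

def mtu_to_max_len(mtu):
    """Maximum on-wire frame length for a given L3 MTU."""
    return mtu + ETHER_HDR_LEN + ETHER_CRC_LEN

print(mtu_to_max_len(2000))  # hard-coded jumbo MTU from the question -> 2018
print(mtu_to_max_len(1500))  # stock default -> 1518
```

Whatever MTU you hard-code, the mbuf data room has to be at least this max frame length as well, or oversized frames will still be dropped at the PMD.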

Regards
Kapil

On Wed, Oct 19, 2016 at 8:30 PM, Kapil Adhikesavalu <kapil20084 at gmail.com>
wrote:

> There seems to be some version dependency between the two:
> dpdk-16.04 with ovs-2.6 is not possible, as OVS references newer
> DPDK code.
>
> I tried the reverse, dpdk-16.07 with ovs-2.5.90, but I get the following
> error (RPM-based build):
>
> + cd dpdk
> + make config T=x86_64-atm-linuxapp-gcc
> make[1]: *** No rule to make target '/tmp/kapil/rpm-build/
> openvswitch-2.5.90/dpdk/config/defconfig_x86_64-atm-linuxapp-gcc', needed
> by '/tmp/kapil/rpm-build/openvswitch-2.5.90/dpdk/build/.config'.  Stop.
> /tmp/kapil/rpm-build/openvswitch-2.5.90/dpdk/mk/rte.sdkroot.mk:90: recipe
> for target 'config' failed
>
> It builds fine with 16.04; only when I replace it with 16.07 do I get
> this error.
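> One thing worth checking: "x86_64-atm-linuxapp-gcc" is not one of the
> stock targets in DPDK's config/ directory (the usual x86_64 Linux target
> is x86_64-native-linuxapp-gcc), so if 16.04 built, your tree or spec file
> may have carried a custom defconfig that the 16.07 tarball lacks. A quick
> way to check (paths assume the stock DPDK layout):

```shell
# From the dpdk/ checkout: list the x86_64 defconfigs this tree ships.
ls config/defconfig_x86_64-*

# Build with the stock target instead of the custom one.
make config T=x86_64-native-linuxapp-gcc
```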
>
>
> On Wed, Oct 19, 2016 at 7:24 PM, Xu, Qian Q <qian.q.xu at intel.com> wrote:
>
>> One suggestion: could you keep DPDK at dpdk-16.04 and upgrade only OVS,
>> then see if the issue still occurs? Or change only DPDK and leave OVS
>> unchanged. We need to narrow down the issue. To my knowledge, qemu-2.4.1
>> is fine for this case.
>>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces at openvswitch.org] On Behalf Of Kapil
>> Adhikesavalu
>> Sent: Wednesday, October 19, 2016 2:18 PM
>> To: dev at openvswitch.org; discuss at openvswitch.org
>> Subject: [ovs-dev] packets getting dropped in vhostuser port
>>
>> Hi,
>>
>> In a PHY-VM-PHY setup, I was using dpdk-16.04 + OVS 2.5.90 + qemu-2.4.1,
>> and everything was working fine.
>> When I upgraded to dpdk-16.07 + ovs-2.6 (branch-2.6) without any change
>> to the setup/configuration, all packets (1200 B) started getting dropped
>> at the vhostuser ports. However, I am still able to send traffic from
>> dpdk PHY to PHY with the same setup.
>>
>> Is there any qemu version dependency here? Some input on how to debug
>> this would help.
>> One thing that looks a bit odd is the ovs-vswitchd start logs: I don't
>> see the PCI memory map in the logs (shown at the end).
>> Let me know if any other logs are required.
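>> In case it helps, two generic checks I would start with: whether QEMU
>> actually connected to the vhost-user sockets, and whether the PMD is
>> polling the ports at all (the log path below assumes a default install):

```shell
# ovs-vswitchd logs a message when a vhost-user device connects; the exact
# wording varies by release, so grep broadly.
grep -i vhost /var/log/openvswitch/ovs-vswitchd.log

# Per-PMD-thread counters: shows whether packets are being received and
# where cycles are going.
ovs-appctl dpif-netdev/pmd-stats-show
```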
>>
>> [root at localhost ~]# ovs-vsctl show
>> e622d6bc-ea72-4232-a035-e1aa75c5887a
>>     Bridge "br-dpdk0"
>>         Port "dpdk0"
>>             Interface "dpdk0"
>>                 type: dpdk
>>         Port "vhost-1-0"
>>             Interface "vhost-1-0"
>>                 type: dpdkvhostuser
>>         Port "br-dpdk0"
>>             Interface "br-dpdk0"
>>                 type: internal
>>     Bridge "br-dpdk1"
>>         Port "br-dpdk1"
>>             Interface "br-dpdk1"
>>                 type: internal
>>         Port "vhost-1-1"
>>             Interface "vhost-1-1"
>>                 type: dpdkvhostuser
>>         Port "dpdk1"
>>             Interface "dpdk1"
>>                 type: dpdk
>>
>> [root at localhost ~]# ovs-vsctl --version
>> ovs-vsctl (Open vSwitch) 2.6.1
>> DB Schema 7.14.0
>>
>> /usr/bin/qemu-system-x86_64 -machine accel=kvm -name ona-vm-1 -S -machine
>> pc-i440fx-2.4,accel=kvm,usb=off -m 1024 -realtime mlock=off -smp
>> 1,sockets=2,cores=1,threads=1 -uuid d6055bcd-ce40-49a7-a3b9-4852b15fbeb1
>> -nographic -no-user-config -nodefaults -chardev
>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/ona-vm-1.mo
>> nitor,server,nowait
>> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc
>> -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
>> -drive file=/var/lib/bristol/vm-1/1.img,if=none,id=drive-ide0-0-0,f
>> ormat=raw,cache=none
>> -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -drive
>> file=/root/genericx86-64.iso,if=none,id=drive-ide0-1-0,reado
>> nly=on,format=raw
>> -device
>> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
>> -netdev tap,fd=24,id=hostnet0 -device
>> e1000,netdev=hostnet0,id=net0,mac=52:54:00:ee:9f:c7,bus=pci.0,addr=0x2
>> -chardev pty,id=charserial0 -device
>> isa-serial,chardev=charserial0,id=serial0 -chardev
>> socket,id=char0,path=/var/run/openvswitch/vhost-1-0 -chardev
>> socket,id=char1,path=/var/run/openvswitch/vhost-1-1 -msg timestamp=on
>> -cpu Haswell,+pdpe1gb -rtc base=utc -numa node,memdev=mem -nographic
>> -mem-prealloc -enable-kvm -m 1024 -realtime mlock=off -device
>> virtio-net-pci,addr=0x04,netdev=net0,mac=92:10:9b:00:01:00,
>> csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=
>> off,mrg_rxbuf=off
>> -device
>> virtio-net-pci,addr=0x05,netdev=net1,mac=92:10:9b:00:01:01,
>> csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=
>> off,mrg_rxbuf=off
>> -object
>> memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on
>> -netdev type=vhost-user,id=net0,chardev=char0,vhostforce -netdev
>> type=vhost-user,id=net1,chardev=char1,vhostforce -msg timestamp=on
>>
>> [root at localhost ~]# ovs-appctl dpctl/show
>> netdev at ovs-netdev:
>>         lookups: hit:1661391 missed:38 lost:0
>>         flows: 7
>>         port 0: ovs-netdev (tap)
>>         port 1: br-dpdk0 (tap)
>>         port 2: br-dpdk1 (tap)
>>         port 3: dpdk0 (dpdk: configured_rx_queues=1,
>> configured_tx_queues=5, mtu=1500, requested_rx_queues=1,
>> requested_tx_queues=5)
>>         port 4: dpdk1 (dpdk: configured_rx_queues=1,
>> configured_tx_queues=5, mtu=1500, requested_rx_queues=1,
>> requested_tx_queues=5)
>>         port 5: vhost-1-0 (dpdkvhostuser: configured_rx_queues=1,
>> configured_tx_queues=1, mtu=1500, requested_rx_queues=1,
>> requested_tx_queues=1)
>>         port 6: vhost-1-1 (dpdkvhostuser: configured_rx_queues=1,
>> configured_tx_queues=1, mtu=1500, requested_rx_queues=1,
>> requested_tx_queues=1)
>>
>> [root at localhost ~]# ovs-appctl dpif-netdev/pmd-rxq-show
>> pmd thread numa_id 0 core_id 1:
>>         isolated : false
>>         port: vhost-1-1 queue-id: 0
>>         port: dpdk1     queue-id: 0
>>         port: vhost-1-0 queue-id: 0
>>         port: dpdk0     queue-id: 0
>>
>> Drops (traffic sent from br-dpdk1's port 1 to port 2, into the VM):
>> =====
>> [root at localhost ~]# ovs-ofctl dump-ports br-dpdk1
>> OFPST_PORT reply (xid=0x2): 3 ports
>>   port LOCAL: rx pkts=14, bytes=1156, drop=0, errs=0, frame=0, over=0,
>> crc=0
>>            tx pkts=181918, bytes=108422056, drop=0, errs=0, coll=0
>>   port  1: rx pkts=181927, bytes=108422763, drop=0, errs=0, frame=?,
>> over=?, crc=?
>>            tx pkts=8, bytes=648, drop=0, errs=0, coll=?
>>   port  2: rx pkts=?, bytes=0, drop=0, errs=0, frame=?, over=?, crc=?
>>            tx pkts=?, bytes=0, drop=181923, errs=?, coll=?
>>
>>
>> [root at localhost ~]# ovs-ofctl dump-ports br-dpdk0
>> OFPST_PORT reply (xid=0x2): 3 ports
>>   port LOCAL: rx pkts=13, bytes=1066, drop=0, errs=0, frame=0, over=0,
>> crc=0
>>            tx pkts=19, bytes=2296, drop=0, errs=0, coll=0
>>   port  1: rx pkts=22, bytes=3010, drop=0, errs=0, frame=?, over=?, crc=?
>>            tx pkts=23, bytes=2914, drop=0, errs=0, coll=?
>>   port  2: rx pkts=11, bytes=2016, drop=0, errs=0, frame=?, over=?, crc=?
>>            tx pkts=?, bytes=0, drop=21, errs=?, coll=?
>>
>>
>> ovs-vswitchd start logs [one difference I notice: earlier, with
>> 16.04 + 2.5.90, PCI memory-mapped addresses were displayed, which are
>> not seen with my latest image]:
>> ================
>>
>> PMD: bnxt_rte_pmd_init() called for (null)
>> EAL: PCI device 0000:00:14.0 on NUMA socket -1
>> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> EAL: PCI device 0000:00:14.1 on NUMA socket -1
>> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> EAL: PCI device 0000:00:14.2 on NUMA socket -1
>> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> EAL: PCI device 0000:00:14.3 on NUMA socket -1
>> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> EAL: PCI device 0000:01:00.0 on NUMA socket -1
>> EAL:   probe driver: 8086:1533 rte_igb_pmd
>> EAL: PCI device 0000:02:00.0 on NUMA socket -1
>> EAL:   probe driver: 8086:1533 rte_igb_pmd
>> Zone 0: name:<rte_eth_dev_data>, phys:0x23cec40, len:0x30100,
>> virt:0x7f0c59fcec40, socket_id:0, flags:0
>>
>> earlier log:
>> =======
>>
>> EAL: PCI device 0000:00:14.0 on NUMA socket -1
>> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> EAL:   PCI memory mapped at 0x7fe64e400000
>> EAL:   PCI memory mapped at 0x7fe64e420000
>> PMD: eth_igb_dev_init(): port_id 0 vendorID=0x8086 deviceID=0x1f41
>> EAL: PCI device 0000:00:14.1 on NUMA socket -1
>> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> EAL:   PCI memory mapped at 0x7fe64e424000
>> EAL:   PCI memory mapped at 0x7fe64e444000
>> PMD: eth_igb_dev_init(): port_id 1 vendorID=0x8086 deviceID=0x1f41
>> EAL: PCI device 0000:00:14.2 on NUMA socket -1
>> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> EAL:   PCI memory mapped at 0x7fe64e448000
>> EAL:   PCI memory mapped at 0x7fe64e468000
>> PMD: eth_igb_dev_init(): port_id 2 vendorID=0x8086 deviceID=0x1f41
>> EAL: PCI device 0000:00:14.3 on NUMA socket -1
>> EAL:   probe driver: 8086:1f41 rte_igb_pmd
>> EAL:   PCI memory mapped at 0x7fe64e46c000
>> EAL:   PCI memory mapped at 0x7fe64e48c000
>>
>>
>> Regards
>> Kapil.
>> _______________________________________________
>> dev mailing list
>> dev at openvswitch.org
>> http://openvswitch.org/mailman/listinfo/dev
>>
>
>


