[ovs-discuss] can not ping host in VM when using dpdkvhostuser

Liu Chao popsuper1982 at qq.com
Fri Apr 22 13:14:46 UTC 2016


Hi, I am following the guide http://openvswitch.org/support/dist-docs-2.5/INSTALL.DPDK.md.html to deploy OVS + DPDK + QEMU.

In the end I find that I cannot ping br0 on the host from inside the VM.


If I dump the port statistics with ovs-ofctl, I see:


  port  3: rx pkts=127, bytes=5646, drop=?, errs=0, frame=?, over=?, crc=?
           tx pkts=0, bytes=0, drop=194, errs=?, coll=?



I found that the vhost-user1 port can receive packets from the VM, so rx is not zero.


But it cannot send packets to the VM: tx is zero and all packets are dropped.


Have you guys run into this problem? How can I fix it?



-----------------------detailed steps--------------------------
1. set the environment


[root@localhost ~]# cat dpdkrc
export DPDK_DIR=/root/dpdk-2.2.0
export RTE_SDK=/root/dpdk-2.2.0
export RTE_TARGET=x86_64-native-linuxapp-gcc
export DESTDIR=/usr/local/bin/dpdk
export DPDK_BUILD=$DPDK_DIR/x86_64-native-linuxapp-gcc
export OVS_DIR=/root/openvswitch-2.5.0
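(I source this file into the shell before the build and run steps below, e.g.:)


source /root/dpdkrc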



2. build the dpdk


cd to /root/dpdk-2.2.0


sed 's/CONFIG_RTE_BUILD_COMBINE_LIBS=n/CONFIG_RTE_BUILD_COMBINE_LIBS=y/' -i config/common_linuxapp


make install T=x86_64-native-linuxapp-gcc


3. build the ovs


cd $OVS_DIR
./boot.sh 
./configure --with-dpdk=$DPDK_BUILD
make


4. modify kernel parameters and reboot


[root@localhost ~]# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-4.0.4-301.fc22.x86_64 root=/dev/mapper/centos-root ro rd.lvm.lv=centos/swap vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos/root crashkernel=auto vconsole.keymap=us rhgb quiet nouveau.modeset=0 rd.driver.blacklist=nouveau default_hugepagesz=1G hugepagesz=1G hugepages=8
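(For reference, a sketch of how the 1G hugepage parameters can be added on Fedora, assuming grubby manages the kernel entries:)


grubby --update-kernel=ALL --args="default_hugepagesz=1G hugepagesz=1G hugepages=8"
reboot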


5. install uio


modprobe uio


insmod $DPDK_BUILD/kmod/igb_uio.ko
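(Quick check that both modules are actually loaded, e.g.:)


lsmod | grep uio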


6. bind the Ethernet ports


[root@localhost ~]# dpdk_nic_bind.py --status


Network devices using DPDK-compatible driver
============================================
0000:15:00.1 'I350 Gigabit Network Connection' drv=igb_uio unused=
0000:15:00.3 'I350 Gigabit Network Connection' drv=igb_uio unused=


Network devices using kernel driver
===================================
0000:0b:00.0 'NetXtreme II BCM5709 Gigabit Ethernet' if=enp11s0f0 drv=bnx2 unused=igb_uio 
0000:0b:00.1 'NetXtreme II BCM5709 Gigabit Ethernet' if=enp11s0f1 drv=bnx2 unused=igb_uio *Active*
0000:10:00.0 'NetXtreme II BCM5709 Gigabit Ethernet' if=enp16s0f0 drv=bnx2 unused=igb_uio 
0000:10:00.1 'NetXtreme II BCM5709 Gigabit Ethernet' if=enp16s0f1 drv=bnx2 unused=igb_uio 
0000:15:00.0 'I350 Gigabit Network Connection' if=ens2f0 drv=igb unused=igb_uio 
0000:15:00.2 'I350 Gigabit Network Connection' if=ens2f2 drv=igb unused=igb_uio 
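(The two I350 ports listed under the DPDK-compatible driver above were bound with the bind script, roughly:)


dpdk_nic_bind.py --bind=igb_uio 0000:15:00.1
dpdk_nic_bind.py --bind=igb_uio 0000:15:00.3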




7. mount hugepages


mount -t hugetlbfs -o pagesize=1G none /dev/hugepages


[root@localhost ~]# mount
...
none on /dev/hugepages type hugetlbfs (rw,relatime,seclabel,pagesize=1G)
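(If the mount should survive a reboot, an /etc/fstab entry along these lines would also work, though I mounted it by hand here:)


none /dev/hugepages hugetlbfs pagesize=1G 0 0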


8. set up and run the database


$OVS_DIR/ovsdb/ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach --log-file
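(The conf.db has to be created first; I did this beforehand roughly as in the INSTALL.DPDK guide:)


mkdir -p /usr/local/etc/openvswitch /usr/local/var/run/openvswitch
$OVS_DIR/ovsdb/ovsdb-tool create /usr/local/etc/openvswitch/conf.db $OVS_DIR/vswitchd/vswitch.ovsschema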



9. start ovs-vswitchd


[root@localhost ~]# $OVS_DIR/vswitchd/ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 1024,0
2016-04-22T13:03:04Z|00001|dpdk|INFO|No -vhost_sock_dir provided - defaulting to /usr/local/var/run/openvswitch
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 9 on socket 0
EAL: Detected lcore 3 as core 10 on socket 0
EAL: Detected lcore 4 as core 0 on socket 1
EAL: Detected lcore 5 as core 1 on socket 1
EAL: Detected lcore 6 as core 9 on socket 1
EAL: Detected lcore 7 as core 10 on socket 1
EAL: Detected lcore 8 as core 0 on socket 0
EAL: Detected lcore 9 as core 1 on socket 0
EAL: Detected lcore 10 as core 9 on socket 0
EAL: Detected lcore 11 as core 10 on socket 0
EAL: Detected lcore 12 as core 0 on socket 1
EAL: Detected lcore 13 as core 1 on socket 1
EAL: Detected lcore 14 as core 9 on socket 1
EAL: Detected lcore 15 as core 10 on socket 1
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 16 lcore(s)
EAL: VFIO modules not all loaded, skip VFIO support...
EAL: Setting up physically contiguous memory...
EAL: Ask a virtual area of 0x100000000 bytes
EAL: Virtual area found at 0x7fa9c0000000 (size = 0x100000000)
EAL: Ask a virtual area of 0x80000000 bytes
EAL: Virtual area found at 0x7fa900000000 (size = 0x80000000)
EAL: Ask a virtual area of 0x40000000 bytes
EAL: Virtual area found at 0x7fa880000000 (size = 0x40000000)
EAL: Requesting 1 pages of size 1024MB from socket 0
EAL: TSC frequency is ~2400082 KHz
EAL: Master lcore 0 is ready (tid=55128c00;cpuset=[0])
EAL: PCI device 0000:15:00.0 on NUMA socket -1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:15:00.1 on NUMA socket -1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7faa00000000
EAL:   PCI memory mapped at 0x7faa00100000
PMD: eth_igb_dev_init(): port_id 0 vendorID=0x8086 deviceID=0x1521
EAL: PCI device 0000:15:00.2 on NUMA socket -1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:15:00.3 on NUMA socket -1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   PCI memory mapped at 0x7faa00104000
EAL:   PCI memory mapped at 0x7faa00204000
PMD: eth_igb_dev_init(): port_id 1 vendorID=0x8086 deviceID=0x1521
Zone 0: name:<RG_MP_log_history>, phys:0x17fffdec0, len:0x2080, virt:0x7fa9ffffdec0, socket_id:0, flags:0
Zone 1: name:<MP_log_history>, phys:0x17fd73d40, len:0x28a0c0, virt:0x7fa9ffd73d40, socket_id:0, flags:0
Zone 2: name:<rte_eth_dev_data>, phys:0x17fd43380, len:0x2f700, virt:0x7fa9ffd43380, socket_id:0, flags:0
2016-04-22T13:03:08Z|00002|ovs_numa|INFO|Discovered 8 CPU cores on NUMA node 0
2016-04-22T13:03:08Z|00003|ovs_numa|INFO|Discovered 8 CPU cores on NUMA node 1
2016-04-22T13:03:08Z|00004|ovs_numa|INFO|Discovered 2 NUMA nodes and 16 CPU cores
2016-04-22T13:03:08Z|00005|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock: connecting...
2016-04-22T13:03:08Z|00006|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock: connected
2016-04-22T13:03:08Z|00007|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports recirculation
2016-04-22T13:03:08Z|00008|ofproto_dpif|INFO|netdev@ovs-netdev: MPLS label stack length probed as 3
2016-04-22T13:03:08Z|00009|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports unique flow ids
2016-04-22T13:03:08Z|00010|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath does not support ct_state
2016-04-22T13:03:08Z|00011|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath does not support ct_zone
2016-04-22T13:03:08Z|00012|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath does not support ct_mark
2016-04-22T13:03:08Z|00013|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath does not support ct_label
VHOST_CONFIG: socket created, fd:32
VHOST_CONFIG: bind to /usr/local/var/run/openvswitch/vhost-user2
2016-04-22T13:03:08Z|00014|dpdk|INFO|Socket /usr/local/var/run/openvswitch/vhost-user2 created for vhost-user port vhost-user2
2016-04-22T13:03:08Z|00015|dpif_netdev|INFO|Created 1 pmd threads on numa node 0
2016-04-22T13:03:08Z|00016|bridge|INFO|bridge br0: added interface vhost-user2 on port 1
2016-04-22T13:03:08Z|00017|bridge|INFO|bridge br0: added interface br0 on port 65534
PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fa9da4940c0 hw_ring=0x7fa9da49c100 dma_addr=0x15a49c100
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fa9da47ff40 hw_ring=0x7fa9da483f80 dma_addr=0x15a483f80
2016-04-22T13:03:08Z|00001|dpif_netdev(pmd12)|INFO|Core 0 processing port 'vhost-user2'
PMD: eth_igb_start(): <<
2016-04-22T13:03:08Z|00018|dpdk|INFO|Port 0: a0:36:9f:a1:dc:f9
PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fa9da4940c0 hw_ring=0x7fa9da49c100 dma_addr=0x15a49c100
PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fa9da467ec0 hw_ring=0x7fa9da46ff00 dma_addr=0x15a46ff00
PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fa9da44fe40 hw_ring=0x7fa9da457e80 dma_addr=0x15a457e80
PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fa9da437dc0 hw_ring=0x7fa9da43fe00 dma_addr=0x15a43fe00
PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fa9da41fd40 hw_ring=0x7fa9da427d80 dma_addr=0x15a427d80
PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fa9da407cc0 hw_ring=0x7fa9da40fd00 dma_addr=0x15a40fd00
PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fa9da3efc40 hw_ring=0x7fa9da3f7c80 dma_addr=0x15a3f7c80
PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fa9da3d7bc0 hw_ring=0x7fa9da3dfc00 dma_addr=0x15a3dfc00
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fa9da47ff40 hw_ring=0x7fa9da483f80 dma_addr=0x15a483f80
PMD: eth_igb_start(): <<
2016-04-22T13:03:08Z|00019|dpdk|INFO|Port 0: a0:36:9f:a1:dc:f9
2016-04-22T13:03:08Z|00002|dpif_netdev(pmd12)|INFO|Core 0 processing port 'vhost-user2'
2016-04-22T13:03:08Z|00003|dpif_netdev(pmd12)|INFO|Core 0 processing port 'dpdk0'
2016-04-22T13:03:08Z|00020|bridge|INFO|bridge br0: added interface dpdk0 on port 2
VHOST_CONFIG: socket created, fd:43
VHOST_CONFIG: bind to /usr/local/var/run/openvswitch/vhost-user1
2016-04-22T13:03:08Z|00021|dpdk|INFO|Socket /usr/local/var/run/openvswitch/vhost-user1 created for vhost-user port vhost-user1
2016-04-22T13:03:08Z|00004|dpif_netdev(pmd12)|INFO|Core 0 processing port 'vhost-user2'
2016-04-22T13:03:08Z|00005|dpif_netdev(pmd12)|INFO|Core 0 processing port 'dpdk0'
2016-04-22T13:03:08Z|00006|dpif_netdev(pmd12)|INFO|Core 0 processing port 'vhost-user1'
2016-04-22T13:03:08Z|00022|bridge|INFO|bridge br0: added interface vhost-user1 on port 3
2016-04-22T13:03:08Z|00023|bridge|INFO|bridge br0: using datapath ID 0000a0369fa1dcf9
2016-04-22T13:03:08Z|00024|connmgr|INFO|br0: added service controller "punix:/usr/local/var/run/openvswitch/br0.mgmt"
2016-04-22T13:03:08Z|00025|dpif_netdev|INFO|Created 1 pmd threads on numa node 0
2016-04-22T13:03:08Z|00026|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.5.0
2016-04-22T13:03:08Z|00001|dpif_netdev(pmd20)|INFO|Core 0 processing port 'vhost-user2'
2016-04-22T13:03:08Z|00002|dpif_netdev(pmd20)|INFO|Core 0 processing port 'dpdk0'
2016-04-22T13:03:08Z|00003|dpif_netdev(pmd20)|INFO|Core 0 processing port 'vhost-user1'
2016-04-22T13:03:15Z|00027|memory|INFO|17240 kB peak resident set size after 10.2 seconds
2016-04-22T13:03:15Z|00028|memory|INFO|handlers:5 ports:4 revalidators:3 rules:5



10. create bridge and ports


$OVS_DIR/utilities/ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
$OVS_DIR/utilities/ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk


ovs-vsctl add-port br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
ovs-vsctl add-port br0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser
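(A quick sanity check that the ports were created with the right type, e.g.:)


ovs-vsctl show
ovs-vsctl get Interface vhost-user1 type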


11. run qemu


[root@localhost ~]# qemu-system-x86_64 -enable-kvm -name ubuntutest -smp 4 -cpu host -m 1024 -hda /home/kvm/dpdknode1.qcow2 -vnc :19 -net user,hostfwd=tcp::10022-:22 -net nic -chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user1 -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1 -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on -numa node,memdev=mem -mem-prealloc
qemu-system-x86_64: -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce: chardev "char1" went up


[root@localhost ~]# qemu-system-x86_64 --version
QEMU emulator version 2.4.1 (qemu-2.4.1-8.fc23), Copyright (c) 2003-2008 Fabrice Bellard
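(Note that the "-net user ... -net nic" pair above also creates a separate user-mode NIC in the guest, so the vhost-user interface is the virtio-net device; something like this inside the guest should show which is which:)


lspci | grep -i virtio    # the vhost-user NIC shows up as a virtio network device
ip -o link                # pick the interface with MAC 52:54:00:00:00:01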



12. configure the IP of br0 on the host and eth0 in the VM


 br0: <BROADCAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN qlen 500
    link/ether a0:36:9f:a1:dc:f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.201.2/24 brd 192.168.201.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::a236:9fff:fea1:dcf9/64 scope link 
       valid_lft forever preferred_lft forever



eth0 in the VM is 192.168.201.3/24
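(The addresses were assigned with plain iproute2, roughly:)


# on the host
ip addr add 192.168.201.2/24 dev br0
ip link set br0 up

# inside the VM
ip addr add 192.168.201.3/24 dev eth0
ip link set eth0 up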


13. ping br0 from the VM


I cannot ping. A tcpdump on br0 shows:


21:11:00.564650 ARP, Request who-has 192.168.201.2 tell 192.168.201.3, length 28
21:11:00.564659 ARP, Reply 192.168.201.2 is-at a0:36:9f:a1:dc:f9, length 28



br0 receives the ARP request and replies, but the VM never gets the reply.


14. ovs-ofctl dump


[root@localhost ~]# ovs-ofctl show br0
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000a0369fa1dcf9
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
 1(vhost-user2): addr:00:00:00:00:00:00
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
 2(dpdk0): addr:a0:36:9f:a1:dc:f9
     config:     0
     state:      0
     current:    1GB-FD
     speed: 1000 Mbps now, 0 Mbps max
 3(vhost-user1): addr:00:00:00:00:00:00
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br0): addr:a0:36:9f:a1:dc:f9
     config:     0
     state:      0
     current:    10MB-FD COPPER
     speed: 10 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
[root@localhost ~]# ovs-ofctl dump-ports br0
OFPST_PORT reply (xid=0x2): 4 ports
  port  1: rx pkts=0, bytes=0, drop=?, errs=0, frame=?, over=?, crc=?
           tx pkts=0, bytes=0, drop=202, errs=?, coll=?
  port LOCAL: rx pkts=127, bytes=5646, drop=0, errs=0, frame=0, over=0, crc=0
           tx pkts=194, bytes=18000, drop=0, errs=0, coll=0
  port  2: rx pkts=67, bytes=12354, drop=0, errs=0, frame=?, over=?, crc=?
           tx pkts=135, bytes=8436, drop=0, errs=0, coll=?
  port  3: rx pkts=127, bytes=5646, drop=?, errs=0, frame=?, over=?, crc=?
           tx pkts=0, bytes=0, drop=194, errs=?, coll=?




So again, the vhost-user1 port (port 3) receives packets from the VM, so rx is not zero.


But it cannot send packets to the VM: tx is zero and all 194 packets are dropped.


Have you seen this problem? How can I fix it?
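(For completeness, other things I can check and post if useful, e.g. the OpenFlow flows and the vhost-user interface state:)


ovs-ofctl dump-flows br0              # should only contain the default NORMAL flow
ovs-vsctl list Interface vhost-user1  # statistics / status columns
ovs-appctl dpif/show                  # ports as seen by the userspace datapath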


Thanks


Yours


Liu Chao