[ovs-discuss] VHOST-VIRTIO interface via OVS is not working.

Basawaraj E N bnadagatti at altiostar.com
Tue Jun 11 09:19:04 UTC 2019


Hi,

I'm trying to use a vhost-user/virtio interface in a container via an OVS setup (setup details are attached to this mail). I'm unable to ping through the virtio interface via OVS.
It would be great if someone could help me bring up the virtio interface via OVS with the configuration details below.
Please let us know if you need more information/logs beyond what is provided below.
Your input would be much appreciated!

Thanks,
Basu


*         What you did that made the problem appear.



Problem statement: OVS with a vhost-user/virtio port is not passing traffic when configured with a Docker container.

Configuration/setup details are attached to this mail.

I'm trying to ping from the testmachine (173.16.7.1) to the KNI DPDK application (173.16.7.3) via the OVS/DPDK switch, but the ping is not working.


*         What you expected to happen.
- Ping should work from the testmachine (173.16.7.1) to the KNI DPDK application (173.16.7.3) via the OVS/DPDK switch through the vhost-user/virtio interface.
- The vhost-user/virtio port counters should increment at the OVS level, but they do not (logs given below).


*         What actually happened.
Ping from the testmachine (173.16.7.1) to the KNI DPDK application (173.16.7.3) via OVS/DPDK is not working.

Following are the observations of the issue:
                - When I ping 173.16.7.3 from the testmachine, I see the ARP request for 173.16.7.3 reaching br0, but ARP is not getting resolved (logs given below).
                - The vhost-user/virtio interface link is up at the OVS level when the KNI application is running in the Docker session (logs given below).
                - If I ping bridge br0 (173.16.7.2) from the testmachine (173.16.7.1), it works.
                - I don't see any virtio port counters incrementing (logs given below).

Logs and console output at host machine, container and testmachine:
=========================================================

1.       Following are the OVS bridge "br0" and IP details on the host machine.
-----------------------------------------------------------------------------------------------

[root at kontronbng4 ~]#  ovs-vswitchd --version
ovs-vswitchd (Open vSwitch) 2.11.1
DPDK 18.11.0
[root at kontronbng4 ~]#

Note: No additional patches have been applied to the OVS code.

[root at kontronbng4 ~]# cat /proc/version
Linux version 3.10.0-957.5.1.rt56.916.altiostar.v36.el7.x86_64 (root at scmbng1) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC) ) #1 SMP PREEMPT RT Tue Feb 26 12:50:27 UTC 2019
[root at kontronbng4 ~]#

[root at kontronbng4 ~]# cat /etc/centos-release
CentOS Linux release 7.6.1810 (Core)
[root at kontronbng4 ~]#

OVS switch configuration details:

#ovs-ctl start --system-id=1

#ovs-vsctl --no-wait init
#ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
#ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x4
#ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0x4


#ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
#ovs-vsctl add-port br0 dpdk-p1 -- set Interface dpdk-p1 type=dpdk options:dpdk-devargs=0000:03:00.0 ofport_request=1
#ovs-vsctl add-port br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser ofport_request=3

#ovs-ctl restart --system-id=1
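
As a side note (a suggestion on my part, not something the logs above show is required): in OVS 2.11 the server-mode `dpdkvhostuser` port type is deprecated in favour of `dpdkvhostuserclient`, where OVS connects to a socket created by the DPDK application. A hedged sketch of the equivalent client-mode configuration, reusing the same port name and socket path as above:

```shell
# Hypothetical client-mode variant of the vhost-user port configured above.
# Here OVS acts as the vhost-user client; the virtio_user vdev inside the
# container must then create the socket (note the ",server=1" suffix).
ovs-vsctl add-port br0 vhost-user1 -- set Interface vhost-user1 \
    type=dpdkvhostuserclient \
    options:vhost-server-path=/usr/local/var/run/openvswitch/vhost-user1 \
    ofport_request=3

# Matching virtio_user vdev argument inside the container (DPDK 18.11):
#   --vdev=virtio_user1,path=/var/run/usvhost,server=1
```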


[root at kontronbng4 ~]# ls -ltrh /usr/local/var/run/openvswitch/vhost-user1
srwxrwxrwx 1 root root 0 Jun 10 11:36 /usr/local/var/run/openvswitch/vhost-user1
[root at kontronbng4 ~]#



[root at kontronbng4 ~]# ovs-vsctl show
5fb2886d-f121-4efb-a0b8-4366930bc19d
    Bridge "br0"
        Port "dpdk-p1"
            Interface "dpdk-p1"
                type: dpdk
                options: {dpdk-devargs="0000:03:00.0"}
        Port "br0"
            Interface "br0"
                type: internal
        Port "vhost-user1"
            Interface "vhost-user1"
                type: dpdkvhostuser
    ovs_version: "2.11.1"
[root at kontronbng4 ~]#


[root at kontronbng4 ~]# ovs-ofctl show br0
OFPT_FEATURES_REPLY (xid=0x2): dpid:000000a0a5c2f0c4
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
1(dpdk-p1): addr:00:a0:a5:c2:f0:c4
     config:     0
     state:      0
     current:    10GB-FD AUTO_NEG
     speed: 10000 Mbps now, 0 Mbps max
3(vhost-user1): addr:00:00:00:00:00:00
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
LOCAL(br0): addr:00:a0:a5:c2:f0:c4
     config:     0
     state:      0
     current:    10MB-FD COPPER
     speed: 10 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
[root at kontronbng4 ~]#
[root at kontronbng4 ~]#


[root at kontronbng4 ~]# ovs-ofctl dump-ports br0
OFPST_PORT reply (xid=0x2): 3 ports
  port LOCAL: rx pkts=824457782, bytes=60966870742, drop=41523478, errs=0, frame=0, over=0, crc=0
          tx pkts=1772922817, bytes=2679720008163, drop=0, errs=0, coll=0
  port  "dpdk-p1": rx pkts=359, bytes=28000, drop=0, errs=0, frame=?, over=?, crc=?
           tx pkts=177, bytes=16738, drop=0, errs=0, coll=?
  port  "vhost-user1": rx pkts=0, bytes=0, drop=0, errs=0, frame=?, over=?, crc=?
           tx pkts=0, bytes=0, drop=172, errs=?, coll=?
[root at kontronbng4 ~]#
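
The 172 TX drops on "vhost-user1" with zero RX/TX packets suggest OVS is handing packets to a vhost device whose virtqueues were never fully set up. A few standard OVS diagnostic commands that may help narrow this down (the log path assumes a default from-source install under /usr/local; adjust for your build):

```shell
# Check whether the vhost-user interface has negotiated with virtio_user:
ovs-vsctl get Interface vhost-user1 status

# Confirm a PMD thread is actually polling the vhost-user rx queue:
ovs-appctl dpif-netdev/pmd-rxq-show

# Watch vswitchd for vhost connection/negotiation messages:
tail -f /usr/local/var/log/openvswitch/ovs-vswitchd.log | grep -i vhost
```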



4: ovs-netdev: <BROADCAST,MULTICAST,PROMISC> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 2e:e1:fb:4d:2c:48 brd ff:ff:ff:ff:ff:ff
5: br0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:a0:a5:c2:f0:c4 brd ff:ff:ff:ff:ff:ff
    inet 173.16.7.2/24 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::2a0:a5ff:fec2:f0c4/64 scope link
       valid_lft forever preferred_lft forever



[root at kontronbng4 ~]# ovs-ofctl dump-flows br0
 cookie=0x0, duration=4382.938s, table=0, n_packets=798, n_bytes=62667, priority=0 actions=NORMAL


[root at kontronbng4 ~]# ovs-dpctl show
2019-06-10T12:51:41Z|00001|dpif_netlink|INFO|The kernel module does not support meters.
[root at kontronbng4 ~]#
[root at kontronbng4 ~]#


[root at kontronbng4 ~]# tcpdump -i br0    (while pinging 173.16.7.3)
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br0, link-type EN10MB (Ethernet), capture size 262144 bytes
11:51:32.951631 ARP, Request who-has 173-16-7-3.client.mchsi.com tell 173-16-7-1.client.mchsi.com, length 46
11:51:33.954595 ARP, Request who-has 173-16-7-3.client.mchsi.com tell 173-16-7-1.client.mchsi.com, length 46
11:51:34.956608 ARP, Request who-has 173-16-7-3.client.mchsi.com tell 173-16-7-1.client.mchsi.com, length 46
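
Since the ARP request reaches br0 but gets no reply, it can also help to trace how the datapath would forward that packet. A hedged sketch using `ofproto/trace` (the addresses are taken from the logs in this mail; in_port=1 is dpdk-p1 per the `ovs-ofctl show` output above):

```shell
# Trace an ARP request for 173.16.7.3 arriving on dpdk-p1 (ofport 1):
ovs-appctl ofproto/trace br0 \
    "in_port=1,arp,arp_op=1,arp_spa=173.16.7.1,arp_tpa=173.16.7.3"
```

With only the priority=0 NORMAL flow installed, the trace should show the ARP request being flooded; if vhost-user1 is missing from the output ports, the vhost device itself is the problem rather than the flow table.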



2.       Following are the IP and DPDK KNI details from the Docker container session.
---------------------------------------------------------------------------------------

-Launch container:
[root at kontronbng4 ~]# docker run -t -i  --privileged  -v /usr/local/var/run/openvswitch/vhost-user1:/var/run/usvhost  -v /dev/hugepages/:/dev/hugepages/  -v   /lib/modules/3.10.0-957.5.1.rt56.916.altiostar.v36.el7.x86_64/build/:/lib/modules/3.10.0-957.5.1.rt56.916.altiostar.v36.el7.x86_64/build/  -v  /usr/src/docker/dpdk/dpdk-18.11/:/usr/src/docker/dpdk/dpdk-18.11/  --name container2 centos-7_1_iperf bash
[root at b936e5d85214 /]#


-DPDK version :    dpdk-18.11



-Running KNI DPDK app:
[root at b936e5d85214 build]# ./kni -l 0-4 -n 4 --no-pci --vdev=virtio_user1,path=/var/run/usvhost --file-prefix=container -- -p 0x1 -m --config="(0,2,3,4)"
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/container/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
update_memory_region(): Too many memory regions
APP: Initialising port 0 ...
KNI: pci: 00:00:00          00:00

Checking link status
done
Port0 Link Up - speed 10000Mbps - full-duplex
APP: ========================
APP: KNI Running
APP: kill -SIGUSR1 21
APP:     Show KNI Statistics.
APP: kill -SIGUSR2 21
APP:     Zero KNI Statistics.
APP: ========================
APP: Lcore 1 has nothing to do
APP: Lcore 2 is reading from port 0
APP: Lcore 3 is writing to port 0
APP: Lcore 4 has nothing to do
APP: Lcore 0 has nothing to do
APP: Configure network interface of 0 up
APP: vEth0_0 NIC Link is Up 10000 Mbps (Fixed) Full Duplex.
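
The `update_memory_region(): Too many memory regions` line in the EAL output above may be significant: the vhost backend can only map a limited number of guest memory regions, and DPDK 18.11's dynamic memory model can exceed that limit when hugepage memory is split into many segments, which would leave the virtqueues unusable even though the link reports up. A possible mitigation (an assumption on my part, not verified on this setup) is to make EAL use a single memory file per segment list:

```shell
# Hypothetical re-run of the KNI app with --single-file-segments, an EAL
# option (DPDK 18.05+) that reduces the number of memory regions shared
# with the vhost backend; all other arguments as in the original command.
./kni -l 0-4 -n 4 --no-pci \
    --vdev=virtio_user1,path=/var/run/usvhost \
    --file-prefix=container \
    --single-file-segments \
    -- -p 0x1 -m --config="(0,2,3,4)"
```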


[root at b936e5d85214 /]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: vEth0_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 22:3c:1a:4e:a4:9b brd ff:ff:ff:ff:ff:ff
    inet 173.16.7.3/24 scope global vEth0_0
       valid_lft forever preferred_lft forever
    inet6 fe80::203c:1aff:fe4e:a49b/64 scope link
       valid_lft forever preferred_lft forever
31: eth0 at if32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:2/64 scope link
       valid_lft forever preferred_lft forever
[root at b936e5d85214 /]#





3.       Following are the IP details at the testmachine.
-------------------------------------------------------

189: enp3s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 14:02:ec:70:ae:6d brd ff:ff:ff:ff:ff:ff
    inet 173.16.7.1/24 scope global enp3s0f1
       valid_lft forever preferred_lft forever
[root at testbng4 bnadagatti]#



-------------- next part --------------
A non-text attachment was scrubbed...
Name: image002.jpg
Type: image/jpeg
Size: 16201 bytes
Desc: image002.jpg
URL: <http://mail.openvswitch.org/pipermail/ovs-discuss/attachments/20190611/bb7f8d36/attachment-0002.jpg>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: setup_details_vhost-virtio.jpg
Type: image/jpeg
Size: 32468 bytes
Desc: setup_details_vhost-virtio.jpg
URL: <http://mail.openvswitch.org/pipermail/ovs-discuss/attachments/20190611/bb7f8d36/attachment-0003.jpg>

