[ovs-discuss] Behavior of netdev (dpdk) bridges with non-dpdkvhostuser ports

Géza Gémes geza.gemes at gmail.com
Thu Oct 13 17:28:50 UTC 2016


Hi,

Sorry for cross-posting, but I feel this might be an interesting topic for
the dev list as well.

I've recreated my setup with a qemu VM instead of an lxc container, and the
situation is the same.
Summary of my setup:
OVS compiled with DPDK (from ubuntu-cloud-archive/newton), deployed on a
libvirt VM with 3 network connections to a shared network:
[image: Inline image 3]
I've set up two bridges: dpdk-br0 with the DPDK (netdev) datapath and
non-dpdk-br1 with the kernel datapath, each having an interface to the
host-provided network. Only the NORMAL action is present on both bridges; no
flows were defined. If the qemu VM is connected to the kernel-datapath
bridge, all traffic passes between the host and the internal VM. If it is
connected to the DPDK bridge, UDP and ICMP traffic goes through, but TCP
traffic does not.

Could you please explain this behavior?
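One way to narrow down a TCP-only failure like this is a checksum check on the guest-facing port: a userspace datapath does not complete checksums that a virtual NIC left for TX offload. This is a debugging sketch, not a confirmed diagnosis, and veth-dpdk is the guest-facing port name from the lxc variant of this setup.

```shell
# Have tcpdump verify checksums on SSH traffic at the guest-facing port;
# "incorrect" cksum values would explain TCP failing while ICMP, whose
# checksum the kernel always computes in software, still works.
tcpdump -i veth-dpdk -vv -n 'tcp port 22'

# If checksums show up as incorrect, disabling TX checksum offload on the
# veth endpoint is one thing to try:
ethtool -K veth-dpdk tx off
```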

Thank you in advance!

Cheers,

Geza

2016-10-11 21:18 GMT+02:00 Geza Gemes <geza.gemes at gmail.com>:

> On 10/11/2016 04:35 PM, Chandran, Sugesh wrote:
>
>>
>> /Regards/
>>
>> /_Sugesh/
>>
>> *From:*Geza Gemes [mailto:geza.gemes at gmail.com]
>> *Sent:* Tuesday, October 11, 2016 2:34 PM
>> *To:* Chandran, Sugesh <sugesh.chandran at intel.com>;
>> discuss at openvswitch.org
>> *Subject:* Re: [ovs-discuss] Behavior of netdev (dpdk) bridges with
>> non-dpdkvhostuser ports
>>
>> On 10/11/2016 10:45 AM, Chandran, Sugesh wrote:
>>
>>     Hi Geza,
>>
>>     /Regards/
>>
>>     /_Sugesh/
>>
>>     *From:*discuss [mailto:discuss-bounces at openvswitch.org] *On Behalf
>>     Of *Geza Gemes
>>     *Sent:* Sunday, October 9, 2016 5:30 PM
>>     *To:* discuss at openvswitch.org
>>     *Subject:* [ovs-discuss] Behavior of netdev (dpdk) bridges with
>>     non-dpdkvhostuser ports
>>
>>     Hi,
>>
>>     I've created a libvirt/KVM VM with Ubuntu 16.04 for experimenting
>>     with OVS 2.6 and DPDK 16.07 (from the Ubuntu Cloud Archive, Newton).
>>     It has a number of NICs, of which I've kept one for the kernel
>>     datapath (ens16) and one for the DPDK datapath (dpdk0).
>>
>>     The bridge setup without virtual NICs:
>>
>>     #ovs-vsctl show
>>
>>     ef015869-3f47-45d4-af20-644f75208a92
>>
>>         Bridge "dpdk-br0"
>>
>>             Port "dpdk-br0"
>>
>>                 Interface "dpdk-br0"
>>
>>                     type: internal
>>
>>             Port "dpdk0"
>>
>>                 Interface "dpdk0"
>>
>>                     type: dpdk
>>
>>         Bridge "non-dpdk-br1"
>>
>>             Port "ens16"
>>
>>                 Interface "ens16"
>>
>>             Port "non-dpdk-br1"
>>
>>                 Interface "non-dpdk-br1"
>>
>>                     type: internal
>>
>>         ovs_version: "2.6.0"
>>
>>     I've created an lxc container (I'm using the lxc daily ppa) with
>>     the following config:
>>
>>     # grep -v ^# /var/lib/lxc/ovsdpdkbr/config
>>
>>     lxc.include = /usr/share/lxc/config/ubuntu.common.conf
>>
>>     lxc.rootfs = /var/lib/lxc/ovsdpdkbr/rootfs
>>
>>     lxc.rootfs.backend = dir
>>
>>     lxc.utsname = ovsdpdkbr
>>
>>     lxc.arch = amd64
>>
>>     lxc.network.type = veth
>>
>>     lxc.network.link = dpdk-br0
>>
>>     lxc.network.flags = up
>>
>>     lxc.network.hwaddr = 00:16:3e:0c:c3:69
>>
>>     lxc.network.veth.pair = veth-dpdk
>>
>>     I started the lxc container, expecting it to have no connectivity,
>>     as lxc is not aware of the bridge being a netdev (userspace) bridge:
>>
>>     # ovs-vsctl show
>>
>>     ef015869-3f47-45d4-af20-644f75208a92
>>
>>         Bridge "dpdk-br0"
>>
>>             Port "dpdk-br0"
>>
>>                 Interface "dpdk-br0"
>>
>>                     type: internal
>>
>>             Port veth-dpdk
>>
>>                 Interface veth-dpdk
>>
>>             Port "dpdk0"
>>
>>                 Interface "dpdk0"
>>
>>                     type: dpdk
>>
>>         Bridge "non-dpdk-br1"
>>
>>             Port "ens16"
>>
>>                 Interface "ens16"
>>
>>             Port "non-dpdk-br1"
>>
>>                 Interface "non-dpdk-br1"
>>
>>                     type: internal
>>
>>         ovs_version: "2.6.0"
>>
>>     The strange thing is that it got an IP address on eth0:
>>
>>     $ ifconfig eth0
>>
>>     eth0      Link encap:Ethernet  HWaddr 00:16:3e:0c:c3:69
>>
>>               inet addr:192.168.122.215  Bcast:192.168.122.255
>>      Mask:255.255.255.0
>>
>>               inet6 addr: fe80::216:3eff:fe0c:c369/64 Scope:Link
>>
>>               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>
>>               RX packets:17 errors:0 dropped:0 overruns:0 frame:0
>>
>>               TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
>>
>>               collisions:0 txqueuelen:1000
>>
>>               RX bytes:2768 (2.7 KB)  TX bytes:1374 (1.3 KB)
>>
>>     and it is able to ping the host:
>>
>>     ubuntu@ovsdpdkbr:~$ ping -c 1 192.168.122.1
>>
>>     PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
>>
>>     64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.290 ms
>>
>>     --- 192.168.122.1 ping statistics ---
>>
>>     1 packets transmitted, 1 received, 0% packet loss, time 0ms
>>
>>     rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms
>>
>>     And the host is also able to ping the container:
>>
>>     $ ping -c 1 192.168.122.215
>>
>>     PING 192.168.122.215 (192.168.122.215) 56(84) bytes of data.
>>
>>     64 bytes from 192.168.122.215: icmp_seq=1 ttl=64 time=0.265 ms
>>
>>     --- 192.168.122.215 ping statistics ---
>>
>>     1 packets transmitted, 1 received, 0% packet loss, time 0ms
>>
>>     rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms
>>
>>     But while sshd listens in the container:
>>
>>     root at ovsdpdkbr:~# netstat -tunap
>>
>>     Active Internet connections (servers and established)
>>
>>     Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
>>
>>     tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      179/sshd
>>
>>     tcp6       0      0 :::22                   :::*                    LISTEN      179/sshd
>>
>>     udp        0      0 0.0.0.0:68              0.0.0.0:*                           147/dhclient
>>
>>     I cannot connect to it from the host:
>>
>>     ssh -v -v -v ubuntu@192.168.122.215
>>
>>     OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
>>
>>     debug1: Reading configuration data /home/egzagme/.ssh/config
>>
>>     debug1: /home/egzagme/.ssh/config line 1: Applying options for
>>     192.168.*.*
>>
>>     debug1: Reading configuration data /etc/ssh/ssh_config
>>
>>     debug1: /etc/ssh/ssh_config line 19: Applying options for *
>>
>>     debug2: ssh_connect: needpriv 0
>>
>>     debug1: Connecting to 192.168.122.215 [192.168.122.215] port 22.
>>
>>     Looks like ICMP and UDP packets go through somehow, but TCP does not.
>>
>>     */[Sugesh] Did you configure any static rules, or is it just the
>>     NORMAL action in OVS?/*
>>
>>     */Can you please confirm which rules are installed in the OVS
>>     datapath?/*
>>
>>     */Have you tried ssh to the container on a port other than 22?/*
>>
>>     */Also, are there any iptables rules present on the host?/*
>>
>>
>>     Could someone please explain the observed behavior?
>>
>>     Thank you in advance!
>>
>>     Cheers,
>>
>>     Geza
>>
>> Hi Sugesh,
>>
>> The packets are subject to the NORMAL action only:
>>
>> # ovs-ofctl show dpdk-br0
>> OFPT_FEATURES_REPLY (xid=0x2): dpid:00008e3a9f11d64e
>> n_tables:254, n_buffers:256
>> capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
>> actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src
>> mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
>>  1(dpdk0): addr:52:54:00:03:64:b1
>>      config:     0
>>      state:      0
>>      current:    10GB-FD
>>      advertised: FIBER
>>      supported:  1GB-HD 1GB-FD 10GB-FD COPPER FIBER AUTO_PAUSE
>>      peer:       10MB-FD 100MB-HD 100MB-FD 10GB-FD COPPER
>>      speed: 10000 Mbps now, 10000 Mbps max
>>  LOCAL(dpdk-br0): addr:8e:3a:9f:11:d6:4e
>>      config:     PORT_DOWN
>>      state:      LINK_DOWN
>>      current:    10MB-FD COPPER
>>      speed: 10 Mbps now, 0 Mbps max
>> OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
>>
>> I have no restrictive iptables rules set up on the host:
>>
>> # iptables -L
>> Chain INPUT (policy ACCEPT)
>> target     prot opt source               destination
>> ACCEPT     tcp  --  anywhere anywhere             tcp dpt:domain
>> ACCEPT     udp  --  anywhere anywhere             udp dpt:domain
>> ACCEPT     tcp  --  anywhere anywhere             tcp dpt:bootps
>> ACCEPT     udp  --  anywhere anywhere             udp dpt:bootps
>>
>> Chain FORWARD (policy ACCEPT)
>> target     prot opt source               destination
>> ACCEPT     all  --  anywhere             anywhere
>> ACCEPT     all  --  anywhere             anywhere
>>
>> Chain OUTPUT (policy ACCEPT)
>> target     prot opt source               destination
>>
>> On the other hand, I have another bridge with the kernel datapath, and
>> there connectivity of lxc containers works as expected. My question is how
>> communication between a bridge with the dpdk datapath and an lxc container,
>> one not having a dpdkvhostuser port and not backed by hugepages, is supposed
>>
>> */[Sugesh] The ovs-vswitchd main thread takes care of packet forwarding
>> for the kernel interfaces. The PMD threads are not involved in this packet
>> handling, so no hugepages are involved./*
>>
>> */I am not very familiar with the lxc OVS network setup. OVS definitely
>> forwards packets based on the configured rules once they land on either a
>> kernel- or a DPDK-managed interface./*
>>
>> */You could verify what's happening in the ovs-dpdk datapath using the
>> following commands:/*
>>
>> */ovs-appctl dpctl/show -s netdev@ovs-netdev (watch this to see the
>> port stats)/*
>>
>> */ovs-appctl dpctl/dump-flows netdev@ovs-netdev/*
>>
>> */Please have a look at the route table and the ARP entries on the host
>> for more debugging./*
>>
>> */In the above setup, /*192.168.122.1*/ is the IP address assigned to the
>> kernel interface ens16? So any packet destined for that subnet will be
>> forwarded to that interface rather than to OVS? Is this assumption correct?/*
>>
>> to work?
>>
>> Thank you!
>>
>> Cheers,
>>
>> Geza
>>
>> Hi,
>
> My network setup looks like: [image: network diagram attachment]
>
>
> Where the host network with the dhcp server listening at 192.168.122.1 has
> 192.168.122.0/24
>
> OVS commands:
>
> # ovs-appctl dpctl/show –s netdev at ovs-netdev
> ovs-vswitchd: opening datapath –s failed (No such device)
> netdev at ovs-netdev:
>     lookups: hit:376 missed:21 lost:0
>     flows: 4
>     port 0: ovs-netdev (tap)
>     port 1: dpdk0 (dpdk: configured_rx_queues=1, configured_tx_queues=1,
> mtu=1500, requested_rx_queues=1, requested_tx_queues=3)
>     port 2: dpdk-br0 (tap)
>     port 3: veth-dpdk
> ovs-appctl: ovs-vswitchd: server returned an error
>
> which suggests some kind of error (note: the "–s" carried over from the
> email is an en-dash, which ovs-appctl appears to treat as a datapath name
> rather than the "-s" flag). The ARP table in the VM contains the host. I've
> also tried with ens16 removed from the kernel datapath bridge, but with no
> difference.
>
> Tomorrow I'll retry the setup with a qemu VM instead of an lxc container.
>
> Cheers,
>
> Geza
>
>
>
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 64802 bytes
Desc: not available
URL: <http://openvswitch.org/pipermail/ovs-discuss/attachments/20161013/f9d9c7a4/attachment-0002.png>

