[ovs-discuss] Behavior of netdev (dpdk) bridges with non-dpdkvhostuser ports
Chandran, Sugesh
sugesh.chandran at intel.com
Tue Oct 11 08:45:58 UTC 2016
Hi Geza,
Regards
_Sugesh
From: discuss [mailto:discuss-bounces at openvswitch.org] On Behalf Of Geza Gemes
Sent: Sunday, October 9, 2016 5:30 PM
To: discuss at openvswitch.org
Subject: [ovs-discuss] Behavior of netdev (dpdk) bridges with non-dpdkvhostuser ports
Hi,
I've created a libvirt/KVM VM running Ubuntu 16.04 for experimenting with OVS 2.6 and DPDK 16.07 (from the Ubuntu cloud archive, newton). The VM has a number of NICs, out of which I've kept one for a kernel datapath (ens16) and one for a DPDK datapath (dpdk0).
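For reference, this is roughly how the bridges were created (reconstructed from memory, so treat the exact commands as a sketch rather than a transcript):

# ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
# ovs-vsctl add-br dpdk-br0 -- set bridge dpdk-br0 datapath_type=netdev
# ovs-vsctl add-port dpdk-br0 dpdk0 -- set Interface dpdk0 type=dpdk
# ovs-vsctl add-br non-dpdk-br1
# ovs-vsctl add-port non-dpdk-br1 ens16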
The bridge setup without virtual NICs:
# ovs-vsctl show
ef015869-3f47-45d4-af20-644f75208a92
    Bridge "dpdk-br0"
        Port "dpdk-br0"
            Interface "dpdk-br0"
                type: internal
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
    Bridge "non-dpdk-br1"
        Port "ens16"
            Interface "ens16"
        Port "non-dpdk-br1"
            Interface "non-dpdk-br1"
                type: internal
    ovs_version: "2.6.0"
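For completeness, the datapath types can be double-checked; dpdk-br0 should report netdev, while non-dpdk-br1 leaves the column empty and therefore falls back to the kernel (system) datapath:

# ovs-vsctl get bridge dpdk-br0 datapath_type
netdev
# ovs-vsctl get bridge non-dpdk-br1 datapath_type
""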
I've created an LXC container (I'm using the LXC daily PPA) with the following config:
# grep -v ^# /var/lib/lxc/ovsdpdkbr/config
lxc.include = /usr/share/lxc/config/ubuntu.common.conf
lxc.rootfs = /var/lib/lxc/ovsdpdkbr/rootfs
lxc.rootfs.backend = dir
lxc.utsname = ovsdpdkbr
lxc.arch = amd64
lxc.network.type = veth
lxc.network.link = dpdk-br0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:0c:c3:69
lxc.network.veth.pair = veth-dpdk
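My understanding is that when lxc.network.link points at an OVS bridge, LXC attaches the host side of the veth pair to that bridge for me, i.e. something equivalent to the following (a hypothetical reconstruction; the container-side name is made up):

# ip link add veth-dpdk type veth peer name veth-dpdk-peer
# ovs-vsctl add-port dpdk-br0 veth-dpdk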
I started the LXC container and expected it to have no connectivity, as LXC is not aware that the bridge uses the netdev (userspace) datapath:
# ovs-vsctl show
ef015869-3f47-45d4-af20-644f75208a92
    Bridge "dpdk-br0"
        Port "dpdk-br0"
            Interface "dpdk-br0"
                type: internal
        Port veth-dpdk
            Interface veth-dpdk
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
    Bridge "non-dpdk-br1"
        Port "ens16"
            Interface "ens16"
        Port "non-dpdk-br1"
            Interface "non-dpdk-br1"
                type: internal
    ovs_version: "2.6.0"
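So OVS did accept the veth as a plain system port on the netdev bridge. That can be confirmed by checking that the Interface record got a valid ofport and no error (the ofport value below is what I would expect, not verified output; -1 would indicate the port failed to attach):

# ovs-vsctl --columns=name,ofport,error list Interface veth-dpdk
name                : "veth-dpdk"
ofport              : 2
error               : []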
The strange thing is that it did get an IP address on eth0:
$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:16:3e:0c:c3:69
          inet addr:192.168.122.215  Bcast:192.168.122.255  Mask:255.255.255.0
          inet6 addr: fe80::216:3eff:fe0c:c369/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:17 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2768 (2.7 KB)  TX bytes:1374 (1.3 KB)
and is able to ping the host:
ubuntu at ovsdpdkbr:~$ ping -c 1 192.168.122.1
PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.290 ms
--- 192.168.122.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms
And the host is also able to ping the container:
$ ping -c 1 192.168.122.215
PING 192.168.122.215 (192.168.122.215) 56(84) bytes of data.
64 bytes from 192.168.122.215: icmp_seq=1 ttl=64 time=0.265 ms
--- 192.168.122.215 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms
But while sshd listens in the container:
root at ovsdpdkbr:~# netstat -tunap
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      179/sshd
tcp6       0      0 :::22                   :::*                    LISTEN      179/sshd
udp        0      0 0.0.0.0:68              0.0.0.0:*                           147/dhclient
I cannot connect to it from the host:
$ ssh -v -v -v ubuntu at 192.168.122.215
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /home/egzagme/.ssh/config
debug1: /home/egzagme/.ssh/config line 1: Applying options for 192.168.*.*
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to 192.168.122.215 [192.168.122.215] port 22.
Looks like ICMP and UDP packets go through somehow, but TCP does not.
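My plan for narrowing this down (not yet run, so the commands below are only a sketch) is to capture the TCP SYNs on the host side of the veth and let tcpdump verify their checksums, and, inside the container, to try disabling TX checksum offload in case offloaded checksums interact badly with the userspace datapath:

On the host:
# tcpdump -i veth-dpdk -nn -vv 'tcp port 22'
Inside the container (as an experiment only):
# ethtool -K eth0 tx off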
[Sugesh] Did you configure any static flow rules, or is it just the default NORMAL action in OVS?
Can you please confirm what rules are being installed in the OVS datapath?
Have you tried ssh to the container on a port other than 22?
Also, are there any iptables rules present on the host?
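For example (assuming the bridge and port names above), something like:

# ovs-ofctl dump-flows dpdk-br0
# ovs-appctl dpctl/dump-flows
# iptables -L -n -v

and for the port test, a listener on a different port inside the container (e.g. nc -l 2222) that you try to reach from the host (nc 192.168.122.215 2222).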
Could someone please explain the observed behavior?
Thank you in advance!
Cheers,
Geza