[ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

Siva Teja ARETI siva_teja.areti at nuagenetworks.net
Tue Nov 6 19:09:23 UTC 2018


Answers inline.

Siva Teja.

On Tue, Nov 6, 2018 at 1:56 PM Flavio Leitner <fbl at sysclose.org> wrote:

> On Tue, Nov 06, 2018 at 11:51:49AM -0500, Siva Teja ARETI wrote:
> > Hi Greg,
> >
> > Thanks for looking into this.
> >
> > I have two VMs in my setup each with two interfaces. Trying to setup the
> > VXLAN tunnels across these interfaces which are in different subnets. A
> > docker container is attached to ovs bridge using ovs-docker utility on
> each
> > VM and doing a ping from one container to another.
>
> Do you see any interesting related messages in 'dmesg' output or in
> ovs-vswitchd.log?
>

I could not find any interesting messages in dmesg or in ovs-vswitchd.log
output.


> If I recall correctly, the "ip l" should show the vxlan dev named
> vxlan_sys_<port>
>

Yes. I can see the dev on both of my VMs:

[root@vm1 ~]# ifconfig vxlan_sys_4789
vxlan_sys_4789: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 65000
        inet6 fe80::2a:28ff:fed2:d4f6  prefixlen 64  scopeid 0x20<link>
        ether 02:2a:28:d2:d4:f6  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 48  bytes 1680 (1.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
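[Editor's note: for anyone reproducing this, the tunnel device can be inspected in more detail. The commands below are a diagnostic sketch, not from the original thread; they assume the OVS-created device name vxlan_sys_4789 shown above and must run on the affected host.]

```shell
# Show vxlan-specific metadata for the OVS tunnel device
# (dstport, flow-based/external mode, etc.)
ip -d link show vxlan_sys_4789

# Confirm the kernel has a listening VXLAN UDP socket on the
# configured dst_port
ss -lun 'sport = :4789'
```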



> fbl
>
> >
> > *VM1 details:*
> >
> > [root@vm1 ~]# ip a
> > .......
> > 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
> >     link/ether 52:54:00:b8:05:be brd ff:ff:ff:ff:ff:ff
> >     inet 30.30.0.59/24 brd 30.30.0.255 scope global dynamic eth1
> >        valid_lft 3002sec preferred_lft 3002sec
> >     inet6 fe80::5054:ff:feb8:5be/64 scope link
> >        valid_lft forever preferred_lft forever
> > 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
> >     link/ether 52:54:00:f0:64:37 brd ff:ff:ff:ff:ff:ff
> >     inet 20.20.0.183/24 brd 20.20.0.255 scope global dynamic eth2
> >        valid_lft 3248sec preferred_lft 3248sec
> >     inet6 fe80::5054:ff:fef0:6437/64 scope link
> >        valid_lft forever preferred_lft forever
> > .......
> > [root@vm1 ~]# ovs-vsctl show
> > ff70c814-d1b0-4018-aee8-8b635187afee
> >     Bridge "testbr0"
> >         Port "gre0"
> >             Interface "gre0"
> >                 type: gre
> >                 options: {local_ip="20.20.0.183", remote_ip="30.30.0.193"}
> >         Port "testbr0"
> >             Interface "testbr0"
> >                 type: internal
> >         Port "2cfb62a9b0f04_l"
> >             Interface "2cfb62a9b0f04_l"
> >     ovs_version: "2.9.2"
> > [root@vm1 ~]# ip rule list
> > 0:      from all lookup local
> > 32765:  from 20.20.0.183 lookup siva
> > 32766:  from all lookup main
> > 32767:  from all lookup default
> > [root@vm1 ~]# ip route show table siva
> > default dev eth2 scope link src 20.20.0.183
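[Editor's note: the rule and table above are the poster's pre-existing source-based routing setup. For reference, such a setup is typically created along these lines — a configuration sketch using the addresses and table name from the output above; the table ID 100 is an arbitrary assumed value.]

```shell
# Map the custom table name "siva" to a numeric table ID (100 assumed here)
echo "100 siva" >> /etc/iproute2/rt_tables

# Send traffic sourced from 20.20.0.183 through table "siva" ...
ip rule add from 20.20.0.183 lookup siva priority 32765

# ... whose only route forces that traffic out eth2
ip route add default dev eth2 src 20.20.0.183 table siva
```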
> > [root@vm1 ~]# ######################### A docker container is attached to ovs bridge using ovs-docker utility
> > [root@vm1 ~]# docker ps
> > CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
> > be4ab434db99        busybox             "sh"                5 days ago          Up 5 days                               admiring_euclid
> > [root@vm1 ~]# nsenter -n -t `docker inspect be4 --format={{.State.Pid}}` -- ip a
> > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
> >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> >     inet 127.0.0.1/8 scope host lo
> >        valid_lft forever preferred_lft forever
> >     inet6 ::1/128 scope host
> >        valid_lft forever preferred_lft forever
> > 2: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN qlen 1
> >     link/gre 0.0.0.0 brd 0.0.0.0
> > 3: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN qlen 1000
> >     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> > 9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
> >     link/ether 22:98:41:0f:e8:50 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> >     inet 70.70.0.10/24 scope global eth0
> >        valid_lft forever preferred_lft forever
> >     inet6 fe80::2098:41ff:fe0f:e850/64 scope link
> >        valid_lft forever preferred_lft forever
> >
> >
> > *VM2 details:*
> >
> > [root@vm2 ~]# ip a
> > ........
> > 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
> >     link/ether 52:54:00:79:ef:92 brd ff:ff:ff:ff:ff:ff
> >     inet 30.30.0.193/24 brd 30.30.0.255 scope global dynamic eth1
> >        valid_lft 2406sec preferred_lft 2406sec
> >     inet6 fe80::5054:ff:fe79:ef92/64 scope link
> >        valid_lft forever preferred_lft forever
> > 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
> >     link/ether 52:54:00:05:93:7c brd ff:ff:ff:ff:ff:ff
> >     inet 20.20.0.64/24 brd 20.20.0.255 scope global dynamic eth2
> >        valid_lft 2775sec preferred_lft 2775sec
> >     inet6 fe80::5054:ff:fe05:937c/64 scope link
> >        valid_lft forever preferred_lft forever
> > .......
> > [root@vm2 ~]# ovs-vsctl show
> > b85514db-3f29-4f7a-9001-37d70adfca34
> >     Bridge "testbr0"
> >         Port "gre0"
> >             Interface "gre0"
> >                 type: gre
> >                 options: {local_ip="30.30.0.193", remote_ip="20.20.0.183"}
> >         Port "a0769422cfc04_l"
> >             Interface "a0769422cfc04_l"
> >         Port "testbr0"
> >             Interface "testbr0"
> >                 type: internal
> >     ovs_version: "2.9.2"
> > [root@vm2 ~]# ip rule list
> > 0:      from all lookup local
> > 32766:  from all lookup main
> > 32767:  from all lookup default
> > [root@vm2 ~]# ######################### A docker container is attached to ovs bridge using ovs-docker utility
> > [root@vm2 ~]# docker ps
> > CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
> > 86214f0d99e8        busybox:latest      "sh"                5 days ago          Up 5 days                               peaceful_snyder
> > [root@vm2 ~]# nsenter -n -t `docker inspect 862 --format={{.State.Pid}}` -- ip a
> > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
> >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> >     inet 127.0.0.1/8 scope host lo
> >        valid_lft forever preferred_lft forever
> >     inet6 ::1/128 scope host
> >        valid_lft forever preferred_lft forever
> > 2: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN qlen 1
> >     link/gre 0.0.0.0 brd 0.0.0.0
> > 3: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN qlen 1000
> >     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> > 9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
> >     link/ether ae:ac:14:7a:40:5f brd ff:ff:ff:ff:ff:ff link-netnsid 0
> >     inet 70.70.0.20/24 scope global eth0
> >        valid_lft forever preferred_lft forever
> >     inet6 fe80::acac:14ff:fe7a:405f/64 scope link
> >        valid_lft forever preferred_lft forever
> >
> > With this configuration, if I do a ping from the docker container on VM1 to the docker container on VM2, it works.
> >
> > [root@vm1 ~]# nsenter -n -t `docker inspect be4 --format={{.State.Pid}}` -- ping 70.70.0.20
> > PING 70.70.0.20 (70.70.0.20) 56(84) bytes of data.
> > 64 bytes from 70.70.0.20: icmp_seq=1 ttl=64 time=0.831 ms
> > 64 bytes from 70.70.0.20: icmp_seq=2 ttl=64 time=0.933 ms
> > 64 bytes from 70.70.0.20: icmp_seq=3 ttl=64 time=0.564 ms
> > ^C
> > --- 70.70.0.20 ping statistics ---
> > 3 packets transmitted, 3 received, 0% packet loss, time 2001ms
> > rtt min/avg/max/mdev = 0.564/0.776/0.933/0.155 ms
> >
> > And the traffic is as expected on VM2.
> >
> > [root@vm2 ~]# tcpdump -n -i any host 20.20.0.183
> > tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> > listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
> > 16:37:32.262553 IP 20.20.0.183 > 30.30.0.193: GREv0, length 102: IP
> > 70.70.0.10 > 70.70.0.20: ICMP echo request, id 28158, seq 1, length 64
> > 16:37:32.262835 IP 30.30.0.193 > 20.20.0.183: GREv0, length 102: IP
> > 70.70.0.20 > 70.70.0.10: ICMP echo reply, id 28158, seq 1, length 64
> > 16:37:33.263211 IP 20.20.0.183 > 30.30.0.193: GREv0, length 102: IP
> > 70.70.0.10 > 70.70.0.20: ICMP echo request, id 28158, seq 2, length 64
> > 16:37:33.263374 IP 30.30.0.193 > 20.20.0.183: GREv0, length 102: IP
> > 70.70.0.20 > 70.70.0.10: ICMP echo reply, id 28158, seq 2, length 64
> > 16:37:34.264159 IP 20.20.0.183 > 30.30.0.193: GREv0, length 102: IP
> > 70.70.0.10 > 70.70.0.20: ICMP echo request, id 28158, seq 3, length 64
> > 16:37:34.264252 IP 30.30.0.193 > 20.20.0.183: GREv0, length 102: IP
> > 70.70.0.20 > 70.70.0.10: ICMP echo reply, id 28158, seq 3, length 64
> >
> > But when I change the tunnel type to vxlan, ping fails.
> >
> > [root@vm1 ~]# ovs-vsctl del-port testbr0 gre0
> > [root@vm1 ~]# ovs-vsctl add-port testbr0 vxlan0 -- set interface vxlan0 type=vxlan options:local_ip=20.20.0.183 options:remote_ip=30.30.0.193 options:dst_port=4789
> > [root@vm1 ~]# ovs-vsctl show
> > ff70c814-d1b0-4018-aee8-8b635187afee
> >     Bridge "testbr0"
> >         Port "testbr0"
> >             Interface "testbr0"
> >                 type: internal
> >         Port "vxlan0"
> >             Interface "vxlan0"
> >                 type: vxlan
> >                 options: {dst_port="4789", local_ip="20.20.0.183", remote_ip="30.30.0.193"}
> >         Port "2cfb62a9b0f04_l"
> >             Interface "2cfb62a9b0f04_l"
> >     ovs_version: "2.9.2"
> >
> > [root@vm2 ~]# ovs-vsctl del-port testbr0 gre0
> > [root@vm2 ~]# ovs-vsctl add-port testbr0 vxlan0 -- set interface vxlan0 type=vxlan options:local_ip=30.30.0.193 options:remote_ip=20.20.0.183 options:dst_port=4789
> > [root@vm2 ~]# ovs-vsctl show
> > b85514db-3f29-4f7a-9001-37d70adfca34
> >     Bridge "testbr0"
> >         Port "a0769422cfc04_l"
> >             Interface "a0769422cfc04_l"
> >         Port "vxlan0"
> >             Interface "vxlan0"
> >                 type: vxlan
> >                 options: {dst_port="4789", local_ip="30.30.0.193", remote_ip="20.20.0.183"}
> >         Port "testbr0"
> >             Interface "testbr0"
> >                 type: internal
> >     ovs_version: "2.9.2"
> >
> > Ping fails with this setup:
> >
> > [root@vm1 ~]# nsenter -n -t `docker inspect be4 --format={{.State.Pid}}` -- ping 70.70.0.20
> > PING 70.70.0.20 (70.70.0.20) 56(84) bytes of data.
> > ^C
> > --- 70.70.0.20 ping statistics ---
> > 6 packets transmitted, 0 received, 100% packet loss, time 4999ms
> >
> > The expected traffic is not seen on VM2:
> >
> > [root@vm2 ~]# tcpdump -n -i any host 20.20.0.183
> > tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> > listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
> > ^C
> > 0 packets captured
> > 0 packets received by filter
> > 0 packets dropped by kernel
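[Editor's note: the capture filter `host 20.20.0.183` matches only the outer tunnel IPs. When debugging VXLAN specifically, it can also help to capture by the tunnel's UDP port, since that shows any VXLAN traffic regardless of which local address the kernel chose as the source. A diagnostic sketch, not from the original thread:]

```shell
# Capture all VXLAN traffic by its UDP port on every interface;
# add -v to have tcpdump decode the inner Ethernet frame as well.
tcpdump -n -i any udp port 4789
```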
> >
> > Kindly let me know if you need more information.
> >
> > Siva Teja.
> >
> > On Tue, Nov 6, 2018 at 10:49 AM Gregory Rose <gvrose8192 at gmail.com> wrote:
> >
> > >
> > > On 11/5/2018 6:10 PM, Siva Teja ARETI wrote:
> > >
> > > Hi,
> > >
> > > I am trying to use the local_ip option for a VXLAN tunnel using ovs, but it does not seem to work. The same setup works when I use a GRE tunnel. I also found a previous discussion from another user who tried the exact same approach. Here is the link to the discussion:
> > >
> > > https://www.mail-archive.com/ovs-discuss@openvswitch.org/msg03643.html
> > >
> > > I am unable to find any working resolution at the end of this discussion. Could you please help?
> > >
> > >
> > > I looked into that but was never able to set up a configuration like the one in that discussion and could not repro the bug.
> > >
> > > Please provide some details on your usage, configuration and steps to repro and I can look into it.
> > >
> > > Thanks,
> > >
> > > - Greg
> > >
> > >
> > > I am using ovs 2.9.2
> > >
> > > [root@localhost ~]# ovs-vsctl --version
> > > ovs-vsctl (Open vSwitch) 2.9.2
> > > DB Schema 7.15.1
> > >
> > > Thanks,
> > > Siva Teja.
> > >
> > >
> > > _______________________________________________
> > > discuss mailing list
> > > discuss at openvswitch.org
> > > https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
> > >
> > >
> > >
>
>
>
> --
> Flavio
>
>