[ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS
Gregory Rose
gvrose8192 at gmail.com
Tue Nov 6 22:42:49 UTC 2018
On 11/6/2018 8:51 AM, Siva Teja ARETI wrote:
> Hi Greg,
>
> Thanks for looking into this.
>
> I have two VMs in my setup, each with two interfaces. I am trying to
> set up VXLAN tunnels across these interfaces, which are in different
> subnets. A docker container is attached to the ovs bridge using the
> ovs-docker utility on each VM, and I am pinging from one container to
> the other.
Hi Siva,
In reading through the documentation and looking at your configuration,
I noticed that while you use the local_ip option, remote_ip is not set
to flow. If the local_ip option is specified, then remote_ip must equal
flow.
From the documentation (man ovs-vswitchd.conf.db):

    options : local_ip: optional string
           Optional. The tunnel destination IP that received packets must
           match. Default is to match all addresses. If specified, may be
           one of:

           ·      An IPv4/IPv6 address (not a DNS name), e.g. 192.168.12.3.

           ·      The word flow. The tunnel accepts packets sent to any of
                  the local IP addresses of the system running OVS. To
                  process only packets sent to a specific IP address, the
                  flow entries may match on the tun_dst or tun_ipv6_dst
                  field. When sending packets to a local_ip=flow tunnel,
                  the flow actions may explicitly set the tun_src or
                  tun_ipv6_src field to the desired IP address, e.g. with a
                  set_field action. However, while routing the tunneled
                  packet out, the local system may override the specified
                  address with the local IP address configured for the
                  outgoing system interface.

                  This option is valid only for tunnels also configured
                  with the remote_ip=flow option.
Please try using the remote_ip=flow option and then configuring the
proper flow and action.
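For example, on VM1 something along these lines should work. This is only
a sketch: the addresses and port names are taken from your ovs-vsctl
output above, but the flow priority, the table layout, and the use of the
container port name as in_port are my assumptions.

```shell
# Recreate the tunnel port with remote_ip=flow so that local_ip is
# honored (addresses from the VM1 output; details are illustrative):
ovs-vsctl del-port testbr0 vxlan0
ovs-vsctl add-port testbr0 vxlan0 -- set interface vxlan0 \
    type=vxlan options:local_ip=20.20.0.183 \
    options:remote_ip=flow options:dst_port=4789

# Steer traffic from the container port into the tunnel, choosing the
# remote endpoint per flow by setting tun_dst with a set_field action:
ovs-ofctl add-flow testbr0 \
    "priority=100,in_port=2cfb62a9b0f04_l,actions=set_field:30.30.0.193->tun_dst,output:vxlan0"
```

A mirror-image flow would be needed on VM2 pointing tun_dst at
20.20.0.183.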
Thanks,
- Greg
>
> *VM1 details:*
>
> [root@vm1 ~]# ip a
> .......
> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> state UP qlen 1000
> link/ether 52:54:00:b8:05:be brd ff:ff:ff:ff:ff:ff
> inet 30.30.0.59/24 brd 30.30.0.255 scope global dynamic eth1
> valid_lft 3002sec preferred_lft 3002sec
> inet6 fe80::5054:ff:feb8:5be/64 scope link
> valid_lft forever preferred_lft forever
> 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> state UP qlen 1000
> link/ether 52:54:00:f0:64:37 brd ff:ff:ff:ff:ff:ff
> inet 20.20.0.183/24 brd 20.20.0.255 scope global dynamic eth2
> valid_lft 3248sec preferred_lft 3248sec
> inet6 fe80::5054:ff:fef0:6437/64 scope link
> valid_lft forever preferred_lft forever
> .......
> [root@vm1 ~]# ovs-vsctl show
> ff70c814-d1b0-4018-aee8-8b635187afee
> Bridge "testbr0"
> Port "gre0"
> Interface "gre0"
> type: gre
> options: {local_ip="20.20.0.183", remote_ip="30.30.0.193"}
> Port "testbr0"
> Interface "testbr0"
> type: internal
> Port "2cfb62a9b0f04_l"
> Interface "2cfb62a9b0f04_l"
> ovs_version: "2.9.2"
> [root@vm1 ~]# ip rule list
> 0: from all lookup local
> 32765: from 20.20.0.183 lookup siva
> 32766: from all lookup main
> 32767: from all lookup default
> [root@vm1 ~]# ip route show table siva
> default dev eth2 scope link src 20.20.0.183
> [root@vm1 ~]# ######################### A docker container is attached
> to ovs bridge using ovs-docker utility
> [root@vm1 ~]# docker ps
> CONTAINER ID IMAGE COMMAND CREATED
> STATUS PORTS NAMES
> be4ab434db99 busybox "sh" 5 days ago
> Up 5 days admiring_euclid
> [root@vm1 ~]# nsenter -n -t `docker inspect be4
> --format={{.State.Pid}}` -- ip a
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host
> valid_lft forever preferred_lft forever
> 2: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN qlen 1
> link/gre 0.0.0.0 brd 0.0.0.0
> 3: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN
> qlen 1000
> link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
> state UP qlen 1000
> link/ether 22:98:41:0f:e8:50 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> inet 70.70.0.10/24 scope global eth0
> valid_lft forever preferred_lft forever
> inet6 fe80::2098:41ff:fe0f:e850/64 scope link
> valid_lft forever preferred_lft forever
>
>
> *VM2 details:*
>
> [root@vm2 ~]# ip a
> ........
> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> state UP qlen 1000
> link/ether 52:54:00:79:ef:92 brd ff:ff:ff:ff:ff:ff
> inet 30.30.0.193/24 brd 30.30.0.255 scope global dynamic eth1
> valid_lft 2406sec preferred_lft 2406sec
> inet6 fe80::5054:ff:fe79:ef92/64 scope link
> valid_lft forever preferred_lft forever
> 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> state UP qlen 1000
> link/ether 52:54:00:05:93:7c brd ff:ff:ff:ff:ff:ff
> inet 20.20.0.64/24 brd 20.20.0.255 scope global dynamic eth2
> valid_lft 2775sec preferred_lft 2775sec
> inet6 fe80::5054:ff:fe05:937c/64 scope link
> valid_lft forever preferred_lft forever
> .......
> [root@vm2 ~]# ovs-vsctl show
> b85514db-3f29-4f7a-9001-37d70adfca34
> Bridge "testbr0"
> Port "gre0"
> Interface "gre0"
> type: gre
> options: {local_ip="30.30.0.193", remote_ip="20.20.0.183"}
> Port "a0769422cfc04_l"
> Interface "a0769422cfc04_l"
> Port "testbr0"
> Interface "testbr0"
> type: internal
> ovs_version: "2.9.2"
> [root@vm2 ~]# ip rule list
> 0: from all lookup local
> 32766: from all lookup main
> 32767: from all lookup default
> [root@vm2 ~]# ######################### A docker container is attached
> to ovs bridge using ovs-docker utility
> [root@vm2 ~]# docker ps
> CONTAINER ID IMAGE COMMAND CREATED
> STATUS PORTS NAMES
> 86214f0d99e8 busybox:latest "sh" 5 days ago Up 5 days
> peaceful_snyder
> [root@vm2 ~]# nsenter -n -t `docker inspect 862
> --format={{.State.Pid}}` -- ip a
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host
> valid_lft forever preferred_lft forever
> 2: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN qlen 1
> link/gre 0.0.0.0 brd 0.0.0.0
> 3: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN
> qlen 1000
> link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
> state UP qlen 1000
> link/ether ae:ac:14:7a:40:5f brd ff:ff:ff:ff:ff:ff link-netnsid 0
> inet 70.70.0.20/24 scope global eth0
> valid_lft forever preferred_lft forever
> inet6 fe80::acac:14ff:fe7a:405f/64 scope link
> valid_lft forever preferred_lft forever
>
> With this configuration, if I do a ping from the docker container on
> VM1 to the docker container on VM2, it works.
>
> [root@vm1 ~]# nsenter -n -t `docker inspect be4
> --format={{.State.Pid}}` -- ping 70.70.0.20
> PING 70.70.0.20 (70.70.0.20) 56(84) bytes of data.
> 64 bytes from 70.70.0.20: icmp_seq=1 ttl=64 time=0.831 ms
> 64 bytes from 70.70.0.20: icmp_seq=2 ttl=64 time=0.933 ms
> 64 bytes from 70.70.0.20: icmp_seq=3 ttl=64 time=0.564 ms
> ^C
> --- 70.70.0.20 ping statistics ---
> 3 packets transmitted, 3 received, 0% packet loss, time 2001ms
> rtt min/avg/max/mdev = 0.564/0.776/0.933/0.155 ms
>
> And the traffic is as expected on VM2.
>
> [root@vm2 ~]# tcpdump -n -i any host 20.20.0.183
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on any, link-type LINUX_SLL (Linux cooked), capture size
> 262144 bytes
> 16:37:32.262553 IP 20.20.0.183 > 30.30.0.193: GREv0, length 102: IP 70.70.0.10 > 70.70.0.20: ICMP echo request, id 28158, seq 1, length 64
> 16:37:32.262835 IP 30.30.0.193 > 20.20.0.183: GREv0, length 102: IP 70.70.0.20 > 70.70.0.10: ICMP echo reply, id 28158, seq 1, length 64
> 16:37:33.263211 IP 20.20.0.183 > 30.30.0.193: GREv0, length 102: IP 70.70.0.10 > 70.70.0.20: ICMP echo request, id 28158, seq 2, length 64
> 16:37:33.263374 IP 30.30.0.193 > 20.20.0.183: GREv0, length 102: IP 70.70.0.20 > 70.70.0.10: ICMP echo reply, id 28158, seq 2, length 64
> 16:37:34.264159 IP 20.20.0.183 > 30.30.0.193: GREv0, length 102: IP 70.70.0.10 > 70.70.0.20: ICMP echo request, id 28158, seq 3, length 64
> 16:37:34.264252 IP 30.30.0.193 > 20.20.0.183: GREv0, length 102: IP 70.70.0.20 > 70.70.0.10: ICMP echo reply, id 28158, seq 3, length 64
>
> But when I change the tunnel type to vxlan, ping fails.
>
> [root@vm1 ~]# ovs-vsctl del-port testbr0 gre0
> [root@vm1 ~]# ovs-vsctl add-port testbr0 vxlan0 -- set interface
> vxlan0 type=vxlan options:local_ip=20.20.0.183
> options:remote_ip=30.30.0.193 options:dst_port=4789
> [root@vm1 ~]# ovs-vsctl show
> ff70c814-d1b0-4018-aee8-8b635187afee
> Bridge "testbr0"
> Port "testbr0"
> Interface "testbr0"
> type: internal
> Port "vxlan0"
> Interface "vxlan0"
> type: vxlan
> options: {dst_port="4789", local_ip="20.20.0.183",
> remote_ip="30.30.0.193"}
> Port "2cfb62a9b0f04_l"
> Interface "2cfb62a9b0f04_l"
> ovs_version: "2.9.2"
>
> [root@vm2 ~]# ovs-vsctl del-port testbr0 gre0
> [root@vm2 ~]# ovs-vsctl add-port testbr0 vxlan0 -- set interface
> vxlan0 type=vxlan options:local_ip=30.30.0.193
> options:remote_ip=20.20.0.183 options:dst_port=4789
> [root@vm2 ~]# ovs-vsctl show
> b85514db-3f29-4f7a-9001-37d70adfca34
> Bridge "testbr0"
> Port "a0769422cfc04_l"
> Interface "a0769422cfc04_l"
> Port "vxlan0"
> Interface "vxlan0"
> type: vxlan
> options: {dst_port="4789", local_ip="30.30.0.193",
> remote_ip="20.20.0.183"}
> Port "testbr0"
> Interface "testbr0"
> type: internal
> ovs_version: "2.9.2"
>
> Ping fails with this setup.
>
> [root@vm1 ~]# nsenter -n -t `docker inspect be4
> --format={{.State.Pid}}` -- ping 70.70.0.20
> PING 70.70.0.20 (70.70.0.20) 56(84) bytes of data.
> ^C
> --- 70.70.0.20 ping statistics ---
> 6 packets transmitted, 0 received, 100% packet loss, time 4999ms
>
> The expected traffic is not seen on VM2.
>
> [root@vm2 ~]# tcpdump -n -i any host 20.20.0.183
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on any, link-type LINUX_SLL (Linux cooked), capture size
> 262144 bytes
> ^C
> 0 packets captured
> 0 packets received by filter
> 0 packets dropped by kernel
>
> Kindly let me know if you need more information.
>
> Siva Teja.
>
> On Tue, Nov 6, 2018 at 10:49 AM Gregory Rose <gvrose8192 at gmail.com> wrote:
>
>
> On 11/5/2018 6:10 PM, Siva Teja ARETI wrote:
>> Hi,
>>
>> I am trying to use the local_ip option for a VXLAN tunnel in OVS,
>> but it does not seem to work. The same setup works when I use a GRE
>> tunnel. I also found a previous discussion from another user who
>> tried the exact same approach. Here is the link to the discussion:
>>
>> https://www.mail-archive.com/ovs-discuss@openvswitch.org/msg03643.html
>>
>> I am unable to find any working resolution at the end of this
>> discussion. Could you please help?
>
> I looked into that but was never able to set up a configuration like
> the one in that discussion, so I could not reproduce the bug.
>
> Please provide some details on your usage, configuration, and steps to
> reproduce, and I can look into it.
>
> Thanks,
>
> - Greg
>
>>
>> I am using ovs 2.9.2
>>
>> [root@localhost ~]# ovs-vsctl --version
>> ovs-vsctl (Open vSwitch) 2.9.2
>> DB Schema 7.15.1
>>
>> Thanks,
>> Siva Teja.
>>
>>
>> _______________________________________________
>> discuss mailing list
>> discuss at openvswitch.org
>> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>