[ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS
Gregory Rose
gvrose8192 at gmail.com
Mon Dec 17 23:03:05 UTC 2018
On 12/17/2018 11:50 AM, Siva Teja ARETI wrote:
> Thanks Greg for explaining the correct way to do.
>
> Siva Teja.
Glad to help, and your case also drove me to finally pin down how
the local_ip option can work. TBH I'd never really taken the time
to understand a working setup, so I was able to increase my own
knowledge here as well.
So thank you for that!
- Greg
>
> On Fri, Nov 30, 2018 at 12:55 PM Gregory Rose
> <gvrose8192 at gmail.com> wrote:
>
>
>
> On 11/28/2018 3:15 PM, Siva Teja ARETI wrote:
>> Hi Greg,
>>
>> Please find the answers inline below.
>>
>> On Tue, Nov 27, 2018 at 1:35 PM Gregory Rose
>> <gvrose8192 at gmail.com> wrote:
>>
>> Siva,
>>
>> You have a routing issue.
>>
>> See, inter alia:
>> https://github.com/OpenNebula/one/issues/2161
>> http://wwwaem.brocade.com/content/html/en/brocade-validated-design/brocade-vcs-fabric-ip-storage-bvd/GUID-CB5BFC4D-B2BE-4E9C-BA91-7E7E9BD35FCC.html
>> http://blog.arunsriraman.com/2017/02/how-to-setting-up-gre-or-vxlan-tunnel.html
>>
>> For this to work you must be able to ping from the local IP
>> to the remote IP *through* the remote IP address. As we have
>> seen, that doesn't work.
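A quick way to sanity-check that requirement, using the endpoint addresses from the ping test further down in this thread (20.20.0.183 local, 30.30.0.193 remote), is to ask the kernel which route and source address it would pick for the remote endpoint, then ping with the tunnel's local IP forced as the source. A sketch, not something to run verbatim:

```shell
# Which route and source address would the kernel pick to reach
# the remote VTEP? The output should name the expected device/src.
ip route get 30.30.0.193

# Force the tunnel's local_ip as the ICMP source address. If this
# ping fails, a vxlan/gre tunnel configured with the same
# local_ip/remote_ip pair will fail for the same reason.
ping -c 3 -I 20.20.0.183 30.30.0.193
```

These commands only prove anything on the host that actually carries those addresses and routes.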
>>
>>
>> Did you mean being able to ping using the remote interface? I am
>> able to get this to work when I connect the two bridges using a
>> veth pair.
>>
>> [root at vm1 ~]# ping 30.30.0.193 -I eth2
>> PING 30.30.0.193 (30.30.0.193) from 20.20.0.183 eth2: 56(84) bytes of data.
>> 64 bytes from 30.30.0.193: icmp_seq=1 ttl=64 time=0.655 ms
>> 64 bytes from 30.30.0.193: icmp_seq=2 ttl=64 time=0.574 ms
>> 64 bytes from 30.30.0.193: icmp_seq=3 ttl=64 time=0.600 ms
>> 64 bytes from 30.30.0.193: icmp_seq=4 ttl=64 time=0.604 ms
>> 64 bytes from 30.30.0.193: icmp_seq=5 ttl=64 time=0.607 ms
>> 64 bytes from 30.30.0.193: icmp_seq=6 ttl=64 time=0.620 ms
>> 64 bytes from 30.30.0.193: icmp_seq=7 ttl=64 time=0.466 ms
>> 64 bytes from 30.30.0.193: icmp_seq=8 ttl=64 time=0.623 ms
>> ^C
>> --- 30.30.0.193 ping statistics ---
>> 8 packets transmitted, 8 received, 0% packet loss, time 7000ms
>> rtt min/avg/max/mdev = 0.466/0.593/0.655/0.059 ms
>> Even with this routing setup, the local_ip option with VXLAN
>> tunnels does not seem to work, while GRE tunnels do.
>
> So what you did there with the veth pair is not routing, it's
> bridging.
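To illustrate the distinction (with illustrative bridge and address names, not ones from the setup in this thread): a veth pair enslaved to two bridges merges them into a single L2 broadcast domain, while routing keeps two subnets distinct and forwards between them at L3:

```shell
# Bridging: a veth pair between two Linux bridges fuses them into
# one L2 segment; no routing table lookup is involved.
ip link add veth-a type veth peer name veth-b
ip link set veth-a master br-a up
ip link set veth-b master br-b up

# Routing: the two subnets stay separate, and traffic crosses via
# a next-hop gateway - which is what tunnel endpoint lookup needs.
ip route add 30.30.0.0/24 via 20.20.0.1 dev eth2
```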
>
>>
>> As an aside, why do you have two bridges to the same VMs? Your
>> configuration makes it impossible to set a route because you have
>> two sets of IP addresses and routes all on two bridges going into
>> the same VMs. In that configuration the local_ip option makes no
>> sense. You don't need it - you're already bridged.
>>
>>
>> I was trying to mimic a use case with two hypervisors, where each
>> hypervisor is connected to two different underlay networks, so I
>> used Linux bridges when imitating the topology with VMs. Please
>> advise if this is not the right approach.
>
> I don't see how that can work - there does not seem to be enough
> isolation. The VMs are still connected to
> a single hypervisor and they're all bridged, not routed.
>
>>
>> I understand that you have seen the GRE configuration work, and
>> I'm not sure why, because it has the same requirement for the
>> local_ip to be routable through the remote_ip. And again, there
>> is no point to the local_ip option because the IP addresses do
>> not need to be routed to reach each other.
>>
>> In any case, I'm going to set up a valid configuration and then
>> verify whether or not the local_ip option works. I'll report back
>> when I'm done.
>>
>>
>> I will look out for your conclusions.
>>
>
> So I have gotten both GRE and VXLAN to work with the local_ip option.
>
> Below is my setup for VXLAN. The setup for GRE is identical except
> that it uses gre tunneling instead of vxlan tunneling. The notable
> pieces are the local_ip and remote_ip options on the tunnel ports
> and the matching addresses and routes on br0.
> With this setup I can do this:
>
> From Machine B to Machine A:
> # ip netns exec ns0 ping 10.1.1.1
> PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
> 64 bytes from 10.1.1.1: icmp_seq=1 ttl=64 time=0.966 ms
> 64 bytes from 10.1.1.1: icmp_seq=2 ttl=64 time=0.128 ms
> 64 bytes from 10.1.1.1: icmp_seq=3 ttl=64 time=0.116 ms
> 64 bytes from 10.1.1.1: icmp_seq=4 ttl=64 time=0.113 ms
> 64 bytes from 10.1.1.1: icmp_seq=5 ttl=64 time=0.155 ms
> 64 bytes from 10.1.1.1: icmp_seq=6 ttl=64 time=0.124 ms
> 64 bytes from 10.1.1.1: icmp_seq=7 ttl=64 time=0.133 ms
>
> As you can see, the VXLAN tunnel with the local_ip option works
> fine when the base configuration is done correctly. I think a lot
> of the confusion in this case has been between bridging and
> routing. They are really separate concepts.
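For reference, the core of the setup below can be sketched as a handful of commands (Machine A side; the bring-up order and any steps not visible in the dumps below are my assumptions):

```shell
# Overlay bridge that carries the tunnel port.
ovs-vsctl add-br br-test
ovs-vsctl add-port br-test vxlan0 -- set interface vxlan0 \
    type=vxlan options:key=100 \
    options:local_ip=201.20.20.1 options:remote_ip=200.0.0.2

# Underlay: local_ip lives on br0, and the remote subnet is reached
# through a route rather than by bridging the two machines together.
ip addr add 201.20.20.1/24 dev br0
ip route add 200.0.0.0/24 via 201.20.20.1 dev br0
```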
>
> I hope this helps.
>
> Thanks,
>
> - Greg
>
> Setup follows:
>
> Machine A:
> # ovs-vsctl show
> e4490ab5-ba93-4291-8a4f-c6f71292310b
> Bridge br-test
> Port "vxlan0"
> Interface "vxlan0"
> type: vxlan
> options: {key="100", local_ip="201.20.20.1", remote_ip="200.0.0.2"}
> Port "p1"
> Interface "p1"
> Port br-test
> Interface br-test
> type: internal
> Bridge "br0"
> Port "br0-peer"
> Interface "br0-peer"
> type: patch
> options: {peer="br1-peer"}
> Port "em2"
> Interface "em2"
> Port "br0"
> Interface "br0"
> type: internal
> Bridge "br1"
> Port "br1-peer"
> Interface "br1-peer"
> type: patch
> options: {peer="br0-peer"}
> Port "br1"
> Interface "br1"
> type: internal
> Port br-test-patch
> Interface br-test-patch
> type: patch
> options: {peer="br1-patch"}
> ovs_version: "2.10.90"
>
> # ip addr show
> 5: em2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master
> ovs-system state UP group default qlen 1000
> link/ether 24:6e:96:4a:f2:90 brd ff:ff:ff:ff:ff:ff
> 12: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
> state UNKNOWN group default qlen 1000
> link/ether 24:6e:96:4a:f2:90 brd ff:ff:ff:ff:ff:ff
> inet 201.20.20.1/24 scope global br0
> valid_lft forever preferred_lft forever
> inet6 fd01:1:3:1500:266e:96ff:fe4a:f290/64 scope global
> mngtmpaddr dynamic
> valid_lft forever preferred_lft forever
> inet6 fe80::266e:96ff:fe4a:f290/64 scope link
> valid_lft forever preferred_lft forever
> 14: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
> state UNKNOWN group default qlen 1000
> link/ether 6a:f6:c5:75:3f:44 brd ff:ff:ff:ff:ff:ff
> inet 201.20.20.9/24 scope global br1
> valid_lft forever preferred_lft forever
> inet6 fd01:1:3:1500:68f6:c5ff:fe75:3f44/64 scope global
> mngtmpaddr dynamic
> valid_lft forever preferred_lft forever
> inet6 fe80::68f6:c5ff:fe75:3f44/64 scope link
> valid_lft forever preferred_lft forever
> 18: p1 at if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> noqueue master ovs-system state UP group default qlen 1000
> link/ether c2:00:b3:6c:d4:08 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> inet6 fe80::c000:b3ff:fe6c:d408/64 scope link
> valid_lft forever preferred_lft forever
> 23: br-test: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
> group default qlen 1000
> link/ether 9a:61:c4:03:30:46 brd ff:ff:ff:ff:ff:ff
> 25: vxlan_sys_4789: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65470
> qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
> link/ether 2e:a5:e4:4c:38:0f brd ff:ff:ff:ff:ff:ff
> inet6 fe80::2ca5:e4ff:fe4c:380f/64 scope link
> valid_lft forever preferred_lft forever
>
> # ip route show
> default via 10.172.211.253 dev em1 proto dhcp metric 100
> 10.172.208.0/22 dev em1 proto kernel scope link src 10.172.208.214 metric 100
> 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
> 200.0.0.0/24 via 201.20.20.1 dev br0
> 201.20.20.0/24 dev br0 proto kernel scope link src 201.20.20.1
> 201.20.20.0/24 dev br1 proto kernel scope link src 201.20.20.9
>
> # ip netns exec ns0 ip addr show
> 19: v1 at if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> noqueue state UP group default qlen 1000
> link/ether 16:14:b4:4e:06:8a brd ff:ff:ff:ff:ff:ff link-netnsid 0
> inet 10.1.1.1/24 scope global v1
> valid_lft forever preferred_lft forever
> inet6 fe80::1414:b4ff:fe4e:68a/64 scope link
> valid_lft forever preferred_lft forever
>
> Machine B:
> # ovs-vsctl show
> 021ce205-1cb1-441e-af92-f0316fe68f80
> Bridge "br1"
> Port "br1-peer"
> Interface "br1-peer"
> type: patch
> options: {peer="br0-peer"}
> Port "br1"
> Interface "br1"
> type: internal
> Port br-test-patch
> Interface br-test-patch
> type: patch
> options: {peer="br1-patch"}
> Bridge "br0"
> Port "em2"
> Interface "em2"
> Port "br0-peer"
> Interface "br0-peer"
> type: patch
> options: {peer="br1-peer"}
> Port "br0"
> Interface "br0"
> type: internal
> Bridge br-test
> Port "vxlan0"
> Interface "vxlan0"
> type: vxlan
> options: {key="100", local_ip="200.0.0.2", remote_ip="201.20.20.1"}
> Port br-test
> Interface br-test
> type: internal
> Port "p1"
> Interface "p1"
> ovs_version: "2.10.90"
>
> # ip addr show
> 5: em2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master
> ovs-system state UP group default qlen 1000
> link/ether 24:6e:96:4a:ec:b8 brd ff:ff:ff:ff:ff:ff
> 12: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
> state UNKNOWN group default qlen 1000
> link/ether 24:6e:96:4a:ec:b8 brd ff:ff:ff:ff:ff:ff
> inet 200.0.0.2/24 scope global br0
> valid_lft forever preferred_lft forever
> inet6 fd01:1:3:1500:266e:96ff:fe4a:ecb8/64 scope global
> mngtmpaddr dynamic
> valid_lft forever preferred_lft forever
> inet6 fe80::266e:96ff:fe4a:ecb8/64 scope link
> valid_lft forever preferred_lft forever
> 14: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
> state UNKNOWN group default qlen 1000
> link/ether 7a:fd:5c:43:fc:48 brd ff:ff:ff:ff:ff:ff
> inet 200.0.0.9/24 scope global br1
> valid_lft forever preferred_lft forever
> inet6 fd01:1:3:1500:78fd:5cff:fe43:fc48/64 scope global
> mngtmpaddr dynamic
> valid_lft forever preferred_lft forever
> inet6 fe80::78fd:5cff:fe43:fc48/64 scope link
> valid_lft forever preferred_lft forever
> 18: p1 at if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> noqueue master ovs-system state UP group default qlen 1000
> link/ether 92:c3:d0:65:82:0d brd ff:ff:ff:ff:ff:ff link-netnsid 0
> inet6 fe80::90c3:d0ff:fe65:820d/64 scope link
> valid_lft forever preferred_lft forever
> 23: br-test: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
> group default qlen 1000
> link/ether 5a:fc:3c:e9:1d:44 brd ff:ff:ff:ff:ff:ff
> 25: vxlan_sys_4789: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65470
> qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
> link/ether de:dd:e8:9a:88:a3 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::dcdd:e8ff:fe9a:88a3/64 scope link
> valid_lft forever preferred_lft forever
>
> # ip route show
> default via 10.172.211.253 dev em1 proto dhcp metric 100
> 10.172.208.0/22 dev em1 proto kernel scope link src 10.172.208.215 metric 100
> 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
> 200.0.0.0/24 dev br0 proto kernel scope link src 200.0.0.2
> 200.0.0.0/24 dev br1 proto kernel scope link src 200.0.0.9
> 201.20.20.0/24 via 200.0.0.2 dev br0
>
> # ip netns exec ns0 ip addr show
> 19: v1 at if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> noqueue state UP group default qlen 1000
> link/ether 6e:bd:8e:8c:e9:45 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> inet 10.1.1.2/24 scope global v1
> valid_lft forever preferred_lft forever
> inet6 fe80::6cbd:8eff:fe8c:e945/64 scope link
>
>
>
>