[ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

Gregory Rose gvrose8192 at gmail.com
Tue Nov 20 17:25:41 UTC 2018


On 11/19/2018 6:30 PM, Siva Teja ARETI wrote:
>
>
> On Mon, Nov 19, 2018 at 7:17 PM Gregory Rose <gvrose8192 at gmail.com 
> <mailto:gvrose8192 at gmail.com>> wrote:
>
>
>     Hi Siva,
>
>     One more request - I need to see the underlying network
>     configuration of the hypervisor running the two VMs.
>     Are both VMs on the same machine?  If so, then just the network
>     configuration of the base machine running the VMs; otherwise, the
>     network configuration of each base machine running its respective
>     VM.
>
>     This is turning into quite the investigation and I apologize that
>     it is taking so long.  Please bear with me if you can and we'll
>     see if we can't get this problem solved.  I've seen some puzzling
>     bugs before and this one is turning out to be one of the best.
>     Or worst.... depends on your outlook.  :)
>
>     Thanks for all your help so far!
>
>     - Greg
>
>
> Hi Greg,
>
> Both VMs run on the same hypervisor in my setup. I created the VMs and
> virtual networks using virsh commands. The virsh XML definitions for
> the networks look like this:
>
> [user at hyp1 ] virsh net-dumpxml route1
> <network connections='2'>
>   <name>route1</name>
>   <uuid>2c935aaf-ebde-5b76-a903-4fccb115ff75</uuid>
>   <forward mode='route'/>
>   <bridge name='testbr1' stp='on' delay='0'/>
>   <mac address='42:54:00:84:4e:04'/>
>   <ip address='20.20.0.1' netmask='255.255.255.0'>
>     <dhcp>
>       <range start='20.20.0.2' end='20.20.0.254'/>
>     </dhcp>
>   </ip>
> </network>
>
> [user at hyp1 ] virsh net-dumpxml route2
> <network connections='2'>
>   <name>route2</name>
>   <uuid>2c935baf-ebde-5b76-a903-4fccb115ff75</uuid>
>   <forward mode='route'/>
>   <bridge name='testbr2' stp='on' delay='0'/>
>   <mac address='42:54:10:84:4e:04'/>
>   <ip address='30.30.0.1' netmask='255.255.255.0'>
>     <dhcp>
>       <range start='30.30.0.2' end='30.30.0.254'/>
>     </dhcp>
>   </ip>
> </network>
>
> Each VM is connected to both networks.
>
> Here is some network configuration from the hypervisor.
>
> [user at hyp-1] ip a
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
> group default qlen 1
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
>     inet6 ::1/128 scope host
>        valid_lft forever preferred_lft forever
> 2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc pfifo_fast 
> state UP group default qlen 1000
>     link/ether <mac> brd ff:ff:ff:ff:ff:ff
>     inet A.B.C.D/24 brd X.Y.Z.W scope global dynamic enp5s0
>        valid_lft 318349sec preferred_lft 318349sec
> 3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue 
> state UP group default qlen 1000
>     link/ether fe:54:00:0a:d3:70 brd ff:ff:ff:ff:ff:ff
>     inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
>        valid_lft forever preferred_lft forever
> 4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state 
> DOWN group default qlen 1000
>     link/ether 52:54:00:94:4e:04 brd ff:ff:ff:ff:ff:ff
> 11: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue 
> state UP group default
>     link/ether 02:42:89:28:db:a5 brd ff:ff:ff:ff:ff:ff
>     inet 172.17.0.1/16 scope global docker0
>        valid_lft forever preferred_lft forever
>     inet6 fe80::42:89ff:fe28:dba5/64 scope link
>        valid_lft forever preferred_lft forever
> 96: vboxnet0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc 
> pfifo_fast state DOWN group default qlen 1000
>     link/ether 0a:00:27:00:00:00 brd ff:ff:ff:ff:ff:ff
>     inet 192.168.99.1/24 brd 192.168.99.255 scope global vboxnet0
>        valid_lft forever preferred_lft forever
>     inet6 fe80::800:27ff:fe00:0/64 scope link
>        valid_lft forever preferred_lft forever
> 193: testbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc 
> noqueue state DOWN group default qlen 1000
>     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
>     inet 10.10.0.1/24 brd 10.10.0.255 scope global testbr0
>        valid_lft forever preferred_lft forever
> 194: testbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast 
> state DOWN group default qlen 1000
>     link/ether 42:54:00:94:4e:04 brd ff:ff:ff:ff:ff:ff
> 227: testbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue 
> state UP group default qlen 1000
>     link/ether fe:54:00:05:93:7c brd ff:ff:ff:ff:ff:ff
>     inet 20.20.0.1/24 brd 20.20.0.255 scope global testbr1
>        valid_lft forever preferred_lft forever
> 228: testbr1-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast 
> state DOWN group default qlen 1000
>     link/ether 42:54:00:84:4e:04 brd ff:ff:ff:ff:ff:ff
> 229: testbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue 
> state UP group default qlen 1000
>     link/ether fe:54:00:79:ef:92 brd ff:ff:ff:ff:ff:ff
>     inet 30.30.0.1/24 brd 30.30.0.255 scope global testbr2
>        valid_lft forever preferred_lft forever
> 230: testbr2-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast 
> state DOWN group default qlen 1000
>     link/ether 42:54:10:84:4e:04 brd ff:ff:ff:ff:ff:ff
> 231: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc 
> pfifo_fast master virbr0 state UNKNOWN group default qlen 1000
>     link/ether fe:54:00:0a:d3:70 brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::fc54:ff:fe0a:d370/64 scope link
>        valid_lft forever preferred_lft forever
> 232: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc 
> pfifo_fast master testbr2 state UNKNOWN group default qlen 1000
>     link/ether fe:54:00:b8:05:be brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::fc54:ff:feb8:5be/64 scope link
>        valid_lft forever preferred_lft forever
> 233: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc 
> pfifo_fast master testbr1 state UNKNOWN group default qlen 1000
>     link/ether fe:54:00:f0:64:37 brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::fc54:ff:fef0:6437/64 scope link
>        valid_lft forever preferred_lft forever
> 234: vnet3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc 
> pfifo_fast master virbr0 state UNKNOWN group default qlen 1000
>     link/ether fe:54:00:56:cb:89 brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::fc54:ff:fe56:cb89/64 scope link
>        valid_lft forever preferred_lft forever
> 235: vnet4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc 
> pfifo_fast master testbr2 state UNKNOWN group default qlen 1000
>     link/ether fe:54:00:79:ef:92 brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::fc54:ff:fe79:ef92/64 scope link
>        valid_lft forever preferred_lft forever
> 236: vnet5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc 
> pfifo_fast master testbr1 state UNKNOWN group default qlen 1000
>     link/ether fe:54:00:05:93:7c brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::fc54:ff:fe05:937c/64 scope link
>        valid_lft forever preferred_lft forever
> [user at hyp-1] ip route
> default via A.B.C.D dev enp5s0  proto static  metric 100
> 10.10.0.0/24 dev testbr0  proto kernel  scope link  src 10.10.0.1 linkdown
> 20.20.0.0/24 dev testbr1  proto kernel  scope link  src 20.20.0.1
> 30.30.0.0/24 dev testbr2  proto kernel  scope link  src 30.30.0.1
> A.B.C.D via P.Q.R.S dev enp5s0  proto dhcp  metric 100
> X.Y.Z.W dev enp5s0  proto kernel  scope link  src A.B.C.D  metric 100
> 172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1
> 192.168.99.0/24 dev vboxnet0  proto kernel  scope link  src 192.168.99.1 linkdown
> 192.168.122.0/24 dev virbr0  proto kernel  scope link  src 192.168.122.1
>
> I am not completely sure what you meant by network configuration. 
> Kindly let me know if you are looking for something more specific.
>
> There is one strange behavior that I observed on the VMs: I am able to
> ping across networks if I pass an IP address to ping's -I option, but
> it does not work if I pass the interface name directly.
>
> [root at vm2 ~]# ip addr  show eth1
> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast 
> state UP qlen 1000
>     link/ether 52:54:00:79:ef:92 brd ff:ff:ff:ff:ff:ff
>     inet 30.30.0.193/24 brd 30.30.0.255 scope global dynamic eth1
>        valid_lft 2728sec preferred_lft 2728sec
>     inet6 fe80::5054:ff:fe79:ef92/64 scope link
>        valid_lft forever preferred_lft forever
> [root at vm2 ~]# ping 20.20.0.183 -I eth1
> PING 20.20.0.183 (20.20.0.183) from 30.30.0.193 eth1: 56(84) bytes of 
> data.
> ^C
> --- 20.20.0.183 ping statistics ---
> 7 packets transmitted, 0 received, 100% packet loss, time 6000ms
>
> [root at vm2 ~]# ping 20.20.0.183 -I 30.30.0.193
> PING 20.20.0.183 (20.20.0.183) from 30.30.0.193 : 56(84) bytes of data.
> 64 bytes from 20.20.0.183: icmp_seq=1 ttl=64 time=0.766 ms
> 64 bytes from 20.20.0.183: icmp_seq=2 ttl=64 time=0.561 ms
> 64 bytes from 20.20.0.183: icmp_seq=3 ttl=64 time=0.605 ms
> 64 bytes from 20.20.0.183: icmp_seq=4 ttl=64 time=0.537 ms
> 64 bytes from 20.20.0.183: icmp_seq=5 ttl=64 time=0.607 ms
> 64 bytes from 20.20.0.183: icmp_seq=6 ttl=64 time=0.618 ms
> 64 bytes from 20.20.0.183: icmp_seq=7 ttl=64 time=0.624 ms
> ^C
> --- 20.20.0.183 ping statistics ---
> 7 packets transmitted, 7 received, 0% packet loss, time 6000ms
> rtt min/avg/max/mdev = 0.537/0.616/0.766/0.075 ms
> [root at vm2 ~]#
>
> I don't know the reason behind this yet and will need to dig into it
> when I get some time. I am just mentioning it in case it makes any
> difference in your setup.
>
> Siva Teja.

Siva,

Thanks for that information - it will help.  I'll have to spend some 
time analyzing it; this is a complex setup.

As for the ping - that is to be expected.  The IP addresses belong to 
the system, so any IP address on the system can be pinged.  That's why 
I mentioned adding the '-I' option to specify the interface: that way 
you force the ping through a specific interface, which helps in 
understanding the routing setup on a machine.
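The two forms of '-I' behave quite differently at the socket level, which may explain the asymmetry Siva saw.  A minimal sketch of the difference using Python sockets on loopback (the `eth1` name in the trailing comment is just the interface from Siva's example; iputils ping uses SO_BINDTODEVICE when '-I' is given an interface name):

```python
import socket

# 'ping -I <address>' pins only the *source address*; the kernel still
# consults the routing table to choose the outgoing interface.  Binding
# a socket to a local address behaves the same way:
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("127.0.0.1", 0))        # pin the source address, not the device
s.connect(("127.0.0.1", 9))     # routing still picks the interface
local_addr = s.getsockname()[0]
print(local_addr)               # 127.0.0.1
s.close()

# 'ping -I <interface>' instead sets SO_BINDTODEVICE, which forces
# traffic in and out through that one device; if the destination has no
# route or neighbor entry on that link, the probe gets no reply:
#
#   s.setsockopt(socket.SOL_SOCKET, socket.SO_BINDTODEVICE, b"eth1")
#
# (left commented out: SO_BINDTODEVICE requires CAP_NET_RAW/root)
```

So pinging a remote network with '-I <address>' can succeed via normal routing, while '-I <interface>' fails if that network is not reachable on the named link.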

Thanks,

- Greg