<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <br>
    <br>
    <div class="moz-cite-prefix">On 11/28/2018 3:15 PM, Siva Teja ARETI
      wrote:<br>
    </div>
    <blockquote type="cite"
cite="mid:CAD6T32ZoB7Gnkqa_VRBzRQ_RXPNsJwxYZBJwm0fsDHPC9Ygn_Q@mail.gmail.com">
      <meta http-equiv="content-type" content="text/html; charset=UTF-8">
      <div dir="ltr">
        <div dir="ltr">Hi Greg,
          <div><br>
          </div>
          <div>Please find the answers inline below.</div>
          <br>
          <div class="gmail_quote">
            <div dir="ltr">On Tue, Nov 27, 2018 at 1:35 PM Gregory Rose
              &lt;<a href="mailto:gvrose8192@gmail.com"
                moz-do-not-send="true">gvrose8192@gmail.com</a>&gt;
              wrote:<br>
            </div>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
              0.8ex;border-left:1px solid
              rgb(204,204,204);padding-left:1ex">
              <div bgcolor="#FFFFFF">
                <p>Siva,</p>
                <p>You have a routing issue.</p>
                See interalia<br>
                <a
                  class="gmail-m_-8070919725180450531moz-txt-link-freetext"
                  href="https://github.com/OpenNebula/one/issues/2161"
                  target="_blank" moz-do-not-send="true">https://github.com/OpenNebula/one/issues/2161</a><br>
                <a
                  class="gmail-m_-8070919725180450531moz-txt-link-freetext"
href="http://wwwaem.brocade.com/content/html/en/brocade-validated-design/brocade-vcs-fabric-ip-storage-bvd/GUID-CB5BFC4D-B2BE-4E9C-BA91-7E7E9BD35FCC.html"
                  target="_blank" moz-do-not-send="true">http://wwwaem.brocade.com/content/html/en/brocade-validated-design/brocade-vcs-fabric-ip-storage-bvd/GUID-CB5BFC4D-B2BE-4E9C-BA91-7E7E9BD35FCC.html</a><br>
                <a
                  class="gmail-m_-8070919725180450531moz-txt-link-freetext"
href="http://blog.arunsriraman.com/2017/02/how-to-setting-up-gre-or-vxlan-tunnel.html"
                  target="_blank" moz-do-not-send="true">http://blog.arunsriraman.com/2017/02/how-to-setting-up-gre-or-vxlan-tunnel.html</a><br>
                <br>
                For this to work you must be able to ping from the local
                IP to the remote IP *through* the remote IP address. As
                we have seen, that doesn't work.<br>
              </div>
            </blockquote>
            <div><br>
            </div>
            <div>Did you mean being able to ping using the remote
              interface? I am able to get this to work when I connect
              the two bridges using a veth pair.</div>
            <div><br>
            </div>
            <div>[root@vm1 ~]# ping 30.30.0.193 -I eth2</div>
            <div>PING 30.30.0.193 (30.30.0.193) from 20.20.0.183 eth2:
              56(84) bytes of data.</div>
            <div>64 bytes from 30.30.0.193: icmp_seq=1 ttl=64
              time=0.655 ms</div>
            <div>64 bytes from 30.30.0.193: icmp_seq=2 ttl=64
              time=0.574 ms</div>
            <div>64 bytes from 30.30.0.193: icmp_seq=3 ttl=64
              time=0.600 ms</div>
            <div>64 bytes from 30.30.0.193: icmp_seq=4 ttl=64
              time=0.604 ms</div>
            <div>64 bytes from 30.30.0.193: icmp_seq=5 ttl=64
              time=0.607 ms</div>
            <div>64 bytes from 30.30.0.193: icmp_seq=6 ttl=64
              time=0.620 ms</div>
            <div>64 bytes from 30.30.0.193: icmp_seq=7 ttl=64
              time=0.466 ms</div>
            <div>64 bytes from 30.30.0.193: icmp_seq=8 ttl=64
              time=0.623 ms</div>
            <div>^C</div>
            <div>--- 30.30.0.193 ping statistics ---</div>
            <div>8 packets transmitted, 8 received, 0% packet loss, time
              7000ms</div>
            <div>rtt min/avg/max/mdev = 0.466/0.593/0.655/0.059 ms</div>
            <div> </div>
            <div>Even with this routing setup, the local_ip option does
              not seem to work with vxlan tunnels, while GRE tunnels do
              work.</div>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
    So what you did there with the veth pair is not routing, it's
    bridging.<br>
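For illustration, connecting two Linux bridges with a veth pair looks something like the sketch below (the bridge and interface names here are assumptions, not taken from your setup). It merges the two bridges into a single L2 broadcast domain, so no routing decision is ever made:<br>

```shell
# Hypothetical sketch: join two existing Linux bridges (br20 and br30
# are assumed names) with a veth pair.  This makes them one L2 segment.
ip link add veth-a type veth peer name veth-b
ip link set veth-a master br20
ip link set veth-b master br30
ip link set veth-a up
ip link set veth-b up
# Routing, by contrast, forwards at L3 via an explicit route, e.g.:
# ip route add 30.30.0.0/24 via 20.20.0.1 dev br20
```

With the veth pair in place the ping succeeds because the frames are switched, not routed - which is why it doesn't exercise the local_ip path.<br>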
    <br>
    <blockquote type="cite"
cite="mid:CAD6T32ZoB7Gnkqa_VRBzRQ_RXPNsJwxYZBJwm0fsDHPC9Ygn_Q@mail.gmail.com">
      <div dir="ltr">
        <div dir="ltr">
          <div class="gmail_quote">
            <div><br>
            </div>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
              0.8ex;border-left:1px solid
              rgb(204,204,204);padding-left:1ex">
              <div bgcolor="#FFFFFF"> As an aside, why do you have two
                bridges to the same VMs?  Your configuration makes it
                impossible to<br>
                set a route because  you have two sets of IP addresses
                and routes all on two bridges going into the same<br>
                VMs.  In that configuration the local ip option makes 
                no sense.  You don't need it - you're already bridged.<br>
              </div>
            </blockquote>
            <div><br>
            </div>
            <div>I was trying to mimic a use case with two hypervisors,
              where each hypervisor is connected to two different
              underlay networks. So I used Linux bridges when imitating
              the topology with VMs. Please advise if this is not the
              right approach.</div>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
    I don't see how that can work - there does not seem to be enough
    isolation.  The VMs are still connected to<br>
    a single hypervisor and they're all bridged, not routed.<br>
    <br>
    <blockquote type="cite"
cite="mid:CAD6T32ZoB7Gnkqa_VRBzRQ_RXPNsJwxYZBJwm0fsDHPC9Ygn_Q@mail.gmail.com">
      <div dir="ltr">
        <div dir="ltr">
          <div class="gmail_quote">
            <div><br>
            </div>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
              0.8ex;border-left:1px solid
              rgb(204,204,204);padding-left:1ex">
              <div bgcolor="#FFFFFF"> I understand that you have seen
                the gre configuration work and I'm not sure why because
                it has the same<br>
                requirements for the local ip to be routable through the
                remote ip.  And again, there is no point to the<br>
                local ip option because the ip addresses do not need to
                be routed to reach each other.<br>
                <br>
                In any case, I'm going to set up a valid configuration
                and then make sure that the local ip option does work<br>
                or not.  I'll report back when I'm done.<br>
                <br>
              </div>
            </blockquote>
            <div><br>
            </div>
            <div>I will look out for your conclusions.<br>
              <br>
            </div>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
    So I have gotten both gre and vxlan to work with the local_ip
    option.<br>
    <br>
    Below is my setup for vxlan. The one for gre is identical except
    that it uses gre tunneling instead of vxlan tunneling.<br>
    I've highlighted the notable configurations and IP addresses in
    red.  With this setup I can do this:<br>
    <br>
    From Machine B to Machine A:<br>
    <font size="-1" face="Courier New, Courier, monospace"># ip netns
      exec ns0 ping 10.1.1.1<br>
      PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.<br>
      64 bytes from 10.1.1.1: icmp_seq=1 ttl=64 time=0.966 ms<br>
      64 bytes from 10.1.1.1: icmp_seq=2 ttl=64 time=0.128 ms<br>
      64 bytes from 10.1.1.1: icmp_seq=3 ttl=64 time=0.116 ms<br>
      64 bytes from 10.1.1.1: icmp_seq=4 ttl=64 time=0.113 ms<br>
      64 bytes from 10.1.1.1: icmp_seq=5 ttl=64 time=0.155 ms<br>
      64 bytes from 10.1.1.1: icmp_seq=6 ttl=64 time=0.124 ms<br>
      64 bytes from 10.1.1.1: icmp_seq=7 ttl=64 time=0.133 ms<br>
    </font><br>
    As you can see, the vxlan tunnel with the local_ip option works fine
    when the base configuration is done<br>
    correctly.  I think a lot of the confusion in this case has been
    between bridging and routing.  They are<br>
    really separate concepts.<br>
    <br>
    I hope this helps.<br>
    <br>
    Thanks,<br>
    <br>
    - Greg<br>
    <br>
    Setup follows:<br>
    <br>
    Machine A:<br>
    <font size="-1" face="Courier New, Courier, monospace"># ovs-vsctl
      show<br>
      e4490ab5-ba93-4291-8a4f-c6f71292310b<br>
          Bridge br-test<br>
      <b><font color="#ff0000">        Port "vxlan0"<br>
                      Interface "vxlan0"<br>
                          type: vxlan<br>
        </font></b><b><font color="#ff0000">                options:
          {key="100", local_ip="201.20.20.1", remote_ip="200.0.0.2"}</font></b><br>
              Port "p1"<br>
                  Interface "p1"<br>
              Port br-test<br>
                  Interface br-test<br>
                      type: internal<br>
          Bridge "br0"<br>
              Port "br0-peer"<br>
                  Interface "br0-peer"<br>
                      type: patch<br>
                      options: {peer="br1-peer"}<br>
              Port "em2"<br>
                  Interface "em2"<br>
              Port "br0"<br>
                  Interface "br0"<br>
                      type: internal<br>
          Bridge "br1"<br>
              Port "br1-peer"<br>
                  Interface "br1-peer"<br>
                      type: patch<br>
                      options: {peer="br0-peer"}<br>
              Port "br1"<br>
                  Interface "br1"<br>
                      type: internal<br>
              Port br-test-patch<br>
                  Interface br-test-patch<br>
                      type: patch<br>
                      options: {peer="br1-patch"}<br>
          ovs_version: "2.10.90"</font><br>
    <br>
    <font size="-1" face="Courier New, Courier, monospace"># ip addr
      show<br>
      5: em2: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc mq
      master ovs-system state UP group default qlen 1000<br>
          link/ether 24:6e:96:4a:f2:90 brd ff:ff:ff:ff:ff:ff<br>
      12: br0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc
      noqueue state UNKNOWN group default qlen 1000<br>
          link/ether 24:6e:96:4a:f2:90 brd ff:ff:ff:ff:ff:ff<br>
      <b><font color="#ff0000">    inet 201.20.20.1/24 scope global br0</font></b><br>
             valid_lft forever preferred_lft forever<br>
          inet6 fd01:1:3:1500:266e:96ff:fe4a:f290/64 scope global
      mngtmpaddr dynamic<br>
             valid_lft forever preferred_lft forever<br>
          inet6 fe80::266e:96ff:fe4a:f290/64 scope link<br>
             valid_lft forever preferred_lft forever<br>
      14: br1: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc
      noqueue state UNKNOWN group default qlen 1000<br>
          link/ether 6a:f6:c5:75:3f:44 brd ff:ff:ff:ff:ff:ff<br>
          inet 201.20.20.9/24 scope global br1<br>
             valid_lft forever preferred_lft forever<br>
          inet6 fd01:1:3:1500:68f6:c5ff:fe75:3f44/64 scope global
      mngtmpaddr dynamic<br>
             valid_lft forever preferred_lft forever<br>
          inet6 fe80::68f6:c5ff:fe75:3f44/64 scope link<br>
             valid_lft forever preferred_lft forever<br>
      18: p1@if19: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500
      qdisc noqueue master ovs-system state UP group default qlen 1000<br>
          link/ether c2:00:b3:6c:d4:08 brd ff:ff:ff:ff:ff:ff
      link-netnsid 0<br>
          inet6 fe80::c000:b3ff:fe6c:d408/64 scope link<br>
             valid_lft forever preferred_lft forever<br>
      23: br-test: &lt;BROADCAST,MULTICAST&gt; mtu 1500 qdisc noop state
      DOWN group default qlen 1000<br>
          link/ether 9a:61:c4:03:30:46 brd ff:ff:ff:ff:ff:ff<br>
      25: vxlan_sys_4789: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu
      65470 qdisc noqueue master ovs-system state UNKNOWN group default
      qlen 1000<br>
          link/ether 2e:a5:e4:4c:38:0f brd ff:ff:ff:ff:ff:ff<br>
          inet6 fe80::2ca5:e4ff:fe4c:380f/64 scope link<br>
             valid_lft forever preferred_lft forever<br>
    </font><br>
    <font size="-1" face="Courier New, Courier, monospace"># ip route
      show<br>
      default via 10.172.211.253 dev em1 proto dhcp metric 100<br>
      10.172.208.0/22 dev em1 proto kernel scope link src 10.172.208.214
      metric 100<br>
      192.168.122.0/24 dev virbr0 proto kernel scope link src
      192.168.122.1<br>
      <b><font color="#ff0000">200.0.0.0/24 via 201.20.20.1 dev br0</font></b><br>
      201.20.20.0/24 dev br0 proto kernel scope link src 201.20.20.1<br>
      201.20.20.0/24 dev br1 proto kernel scope link src 201.20.20.9<br>
    </font><br>
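The highlighted static route above is the piece that makes the remote tunnel endpoint reachable through the local one; it can be installed with, for example:<br>

```shell
# Sketch: add the static route highlighted in the "ip route show"
# output above (Machine A), sending the remote tunnel subnet out br0.
ip route add 200.0.0.0/24 via 201.20.20.1 dev br0
```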
    <font size="-1" face="Courier New, Courier, monospace"># ip netns
      exec ns0 ip addr show<br>
      19: v1@if18: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500
      qdisc noqueue state UP group default qlen 1000<br>
          link/ether 16:14:b4:4e:06:8a brd ff:ff:ff:ff:ff:ff
      link-netnsid 0<br>
      <b><font color="#ff0000">    inet 10.1.1.1/24 scope global v1</font></b><br>
             valid_lft forever preferred_lft forever<br>
          inet6 fe80::1414:b4ff:fe4e:68a/64 scope link<br>
             valid_lft forever preferred_lft forever<br>
    </font><br>
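The namespace setup itself is not shown above; one possible way to build it is sketched below. The names ns0, v1, p1, and br-test match the output, but the commands themselves are my assumption about how such a namespace would be wired up:<br>

```shell
# Hypothetical sketch: create namespace ns0 and a veth pair v1/p1,
# with v1 inside the namespace and p1 plugged into br-test on OVS.
ip netns add ns0
ip link add p1 type veth peer name v1
ip link set v1 netns ns0
ip netns exec ns0 ip addr add 10.1.1.1/24 dev v1
ip netns exec ns0 ip link set v1 up
ovs-vsctl add-port br-test p1
ip link set p1 up
```

Machine B's namespace is built the same way with 10.1.1.2/24, per its output below.<br>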
    Machine B:<br>
    <font size="-1" face="Courier New, Courier, monospace"># ovs-vsctl
      show<br>
      021ce205-1cb1-441e-af92-f0316fe68f80<br>
          Bridge "br1"<br>
              Port "br1-peer"<br>
                  Interface "br1-peer"<br>
                      type: patch<br>
                      options: {peer="br0-peer"}<br>
              Port "br1"<br>
                  Interface "br1"<br>
                      type: internal<br>
              Port br-test-patch<br>
                  Interface br-test-patch<br>
                      type: patch<br>
                      options: {peer="br1-patch"}<br>
          Bridge "br0"<br>
              Port "em2"<br>
                  Interface "em2"<br>
              Port "br0-peer"<br>
                  Interface "br0-peer"<br>
                      type: patch<br>
                      options: {peer="br1-peer"}<br>
              Port "br0"<br>
                  Interface "br0"<br>
                      type: internal<br>
          Bridge br-test<br>
      <b><font color="#ff0000">        Port "vxlan0"<br>
                      Interface "vxlan0"<br>
                          type: vxlan<br>
                          options: {key="100", local_ip="200.0.0.2",
          remote_ip="201.20.20.1"}<br>
        </font></b>        Port br-test<br>
                  Interface br-test<br>
                      type: internal<br>
              Port "p1"<br>
                  Interface "p1"<br>
          ovs_version: "2.10.90"<br>
      <br>
      # ip addr show<br>
      5: em2: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc mq
      master ovs-system state UP group default qlen 1000<br>
          link/ether 24:6e:96:4a:ec:b8 brd ff:ff:ff:ff:ff:ff<br>
      12: br0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc
      noqueue state UNKNOWN group default qlen 1000<br>
          link/ether 24:6e:96:4a:ec:b8 brd ff:ff:ff:ff:ff:ff<br>
      <b><font color="#ff0000">    inet 200.0.0.2/24 scope global br0</font></b><br>
             valid_lft forever preferred_lft forever<br>
          inet6 fd01:1:3:1500:266e:96ff:fe4a:ecb8/64 scope global
      mngtmpaddr dynamic<br>
             valid_lft forever preferred_lft forever<br>
          inet6 fe80::266e:96ff:fe4a:ecb8/64 scope link<br>
             valid_lft forever preferred_lft forever<br>
      14: br1: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc
      noqueue state UNKNOWN group default qlen 1000<br>
          link/ether 7a:fd:5c:43:fc:48 brd ff:ff:ff:ff:ff:ff<br>
          inet 200.0.0.9/24 scope global br1<br>
             valid_lft forever preferred_lft forever<br>
          inet6 fd01:1:3:1500:78fd:5cff:fe43:fc48/64 scope global
      mngtmpaddr dynamic<br>
             valid_lft forever preferred_lft forever<br>
          inet6 fe80::78fd:5cff:fe43:fc48/64 scope link<br>
             valid_lft forever preferred_lft forever<br>
      18: p1@if19: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500
      qdisc noqueue master ovs-system state UP group default qlen 1000<br>
          link/ether 92:c3:d0:65:82:0d brd ff:ff:ff:ff:ff:ff
      link-netnsid 0<br>
          inet6 fe80::90c3:d0ff:fe65:820d/64 scope link<br>
             valid_lft forever preferred_lft forever<br>
      23: br-test: &lt;BROADCAST,MULTICAST&gt; mtu 1500 qdisc noop state
      DOWN group default qlen 1000<br>
          link/ether 5a:fc:3c:e9:1d:44 brd ff:ff:ff:ff:ff:ff<br>
      25: vxlan_sys_4789: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu
      65470 qdisc noqueue master ovs-system state UNKNOWN group default
      qlen 1000<br>
          link/ether de:dd:e8:9a:88:a3 brd ff:ff:ff:ff:ff:ff<br>
          inet6 fe80::dcdd:e8ff:fe9a:88a3/64 scope link<br>
             valid_lft forever preferred_lft forever<br>
      <br>
    </font><font size="-1" face="Courier New, Courier, monospace"># ip
      route show<br>
      default via 10.172.211.253 dev em1 proto dhcp metric 100<br>
      10.172.208.0/22 dev em1 proto kernel scope link src 10.172.208.215
      metric 100<br>
      192.168.122.0/24 dev virbr0 proto kernel scope link src
      192.168.122.1<br>
      200.0.0.0/24 dev br0 proto kernel scope link src 200.0.0.2<br>
      200.0.0.0/24 dev br1 proto kernel scope link src 200.0.0.9<br>
      <b><font color="#ff0000">201.20.20.0/24 via 200.0.0.2 dev br0</font></b><br>
    </font><br>
    <font size="-1" face="Courier New, Courier, monospace"># ip netns
      exec ns0 ip addr show<br>
      19: v1@if18: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500
      qdisc noqueue state UP group default qlen 1000<br>
          link/ether 6e:bd:8e:8c:e9:45 brd ff:ff:ff:ff:ff:ff
      link-netnsid 0<br>
      <b><font color="#ff0000">    inet 10.1.1.2/24 scope global v1</font></b><br>
             valid_lft forever preferred_lft forever<br>
          inet6 fe80::6cbd:8eff:fe8c:e945/64 scope link<br>
      <br>
    </font><br>
    <br>
    <br>
  </body>
</html>