[ovs-discuss] [ovn] Is ovn-interconnection gateway required to be a dedicated node?

Vladislav Odintsov odivlad at gmail.com
Tue Jul 13 18:07:37 UTC 2021


Hi,

Maybe somebody from the dev list can take a look and answer?
Thanks in advance.

Regards,
Vladislav Odintsov

> On 7 Jul 2021, at 19:32, Vladislav Odintsov <odivlad at gmail.com> wrote:
> 
> Hi all,
> 
> I’ve tried to set up OVN interconnection with only two ovn-controller nodes (one in each AZ) and failed. Both nodes are configured with is-interconn="true". See the chassis output below:
> 
> # ovn-sbctl list chassis
> _uuid               : a760ba28-e432-4c0a-93d1-51ae00a0cbb5
> encaps              : [2bc5e889-e7b7-466a-ae74-0c153e018965, 2ea0c375-ae7e-4ee6-a758-c006e54c3706]
> external_ids        : {datapath-type="", iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", is-interconn="true", ovn-bridge-mappings="", ovn-chassis-mac-mappings="", ovn-cms-options="", ovn-monitor-all="false"}
> hostname            : dev2.local
> name                : dev2
> nb_cfg              : 0
> other_config        : {datapath-type="", iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", is-interconn="true", ovn-bridge-mappings="", ovn-chassis-mac-mappings="", ovn-cms-options="", ovn-monitor-all="false"}
> transport_zones     : []
> vtep_logical_switches: []
> 
> _uuid               : 8995190d-e8a0-44af-a256-5a0762c4e1ab
> encaps              : [35c4459f-9888-49b2-96b8-cc496d293e83, 895e3c6e-81b9-4aca-92a0-d0c6df07c4a5]
> external_ids        : {is-remote="true"}
> hostname            : dev.local
> name                : dev1
> nb_cfg              : 0
> other_config        : {is-remote="true"}
> transport_zones     : []
> vtep_logical_switches: []
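For reference, a sketch of how the is-interconn flag shown in the chassis output above is typically set, assuming the standard external_ids key documented for ovn-controller (this is my guess at the setup commands, not taken from the original mail):

```shell
# Mark this chassis as an interconnection gateway; ovn-controller reads
# the flag from the local Open_vSwitch table and republishes it in the
# Chassis record's external_ids/other_config (as seen above).
ovs-vsctl set open_vswitch . external_ids:ovn-is-interconn=true
```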
> 
> 
> Then I created the same logical topology as in the ovn-ic tutorial (https://docs.ovn.org/en/latest/tutorials/ovn-interconnection.html) and enabled route advertisement and learning.
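Route advertisement and learning can be enabled per AZ roughly as follows, using the NB_Global options named in the interconnection tutorial (a sketch of the presumed configuration, not copied from the original mail):

```shell
# Run against each AZ's northbound database: advertise local routes to
# the interconnection databases and learn routes from remote AZs.
ovn-nbctl set NB_Global . options:ic-route-adv=true options:ic-route-learn=true
```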
> 
> Now, when I run a ping from a VM in AZ1 to a VM in AZ2, I see a drop in ovs-dpctl:
> 
> # ovs-dpctl dump-flows | grep drop
> recirc_id(0),tunnel(tun_id=0x10002ff0002,src=192.168.0.13,dst=192.168.0.7,flags(-df+csum+key)),in_port(5),ct_state(-new-est-rel-rpl-inv-trk),ct_label(0/0x1),eth(src=0a:00:3b:ef:7e:e1,dst=00:00:00:00:00:00/01:00:00:00:00:00),eth_type(0x0800),ipv4(frag=no), packets:8712, bytes:853776, used:0.705s, actions:drop
> 
> ovn-detrace for this flow:
> 
> ovs-appctl ofproto/trace "recirc_id(0),tunnel(ttl=64,tun_id=0x10002ff0002,src=192.168.0.13,dst=192.168.0.7,flags(-df+csum+key)),in_port(5),ct_state(-new-est-rel-rpl-inv-trk),ct_label(0/0x1),eth(src=0a:00:3b:ef:7e:e1,dst=00:00:00:00:00:00/01:00:00:00:00:00),eth_type(0x0800),ipv4(frag=no)" | ovn-detrace
> Flow: ip,tun_id=0x10002ff0002,tun_src=192.168.0.13,tun_dst=192.168.0.7,tun_ipv6_src=::,tun_ipv6_dst=::,tun_gbp_id=0,tun_gbp_flags=0,tun_tos=0,tun_ttl=64,tun_erspan_ver=0,tun_flags=csum|key,in_port=1,vlan_tci=0x0000,dl_src=0a:00:3b:ef:7e:e1,dl_dst=00:00:00:00:00:00,nw_src=0.0.0.0,nw_dst=0.0.0.0,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=0
> 
> bridge("br-int")
> ----------------
> 0. in_port=1, priority 100
> move:NXM_NX_TUN_ID[40..54]->NXM_NX_REG14[0..14]
> -> NXM_NX_REG14[0..14] is now 0x1
> move:NXM_NX_TUN_ID[24..39]->NXM_NX_REG15[0..15]
> -> NXM_NX_REG15[0..15] is now 0x2
> move:NXM_NX_TUN_ID[0..23]->OXM_OF_METADATA[0..23]
> -> OXM_OF_METADATA[0..23] is now 0xff0002
> resubmit(,33)
> 33. reg15=0x2,metadata=0xff0002, priority 100
> set_field:0xe->reg11
> set_field:0xd->reg12
> resubmit(,34)
> 34. priority 0
> set_field:0->reg0
> set_field:0->reg1
> set_field:0->reg2
> set_field:0->reg3
> set_field:0->reg4
> set_field:0->reg5
> set_field:0->reg6
> set_field:0->reg7
> set_field:0->reg8
> set_field:0->reg9
> resubmit(,40)
> 40. metadata=0xff0002, priority 0, cookie 0xe26da94c
> resubmit(,41)
>   *  Logical datapath: "vpc-DDB30485-global" (b73400bb-28e2-4335-a7cf-dc35a4879841) [egress]
>   *  Logical flow: table=0 (ls_out_pre_lb), priority=0, match=(1), actions=(next;)
> 41. metadata=0xff0002, priority 0, cookie 0xe995fbea
> resubmit(,42)
>   *  Logical datapath: "vpc-DDB30485-global" (b73400bb-28e2-4335-a7cf-dc35a4879841) [egress]
>   *  Logical flow: table=1 (ls_out_pre_acl), priority=0, match=(1), actions=(next;)
> 42. metadata=0xff0002, priority 0, cookie 0x4b593f2c
> resubmit(,43)
>   *  Logical datapath: "vpc-DDB30485-global" (b73400bb-28e2-4335-a7cf-dc35a4879841) [egress]
>   *  Logical flow: table=2 (ls_out_pre_stateful), priority=0, match=(1), actions=(next;)
> 43. metadata=0xff0002, priority 0, cookie 0xae9416df
> resubmit(,44)
>   *  Logical datapath: "vpc-DDB30485-global" (b73400bb-28e2-4335-a7cf-dc35a4879841) [egress]
>   *  Logical flow: table=3 (ls_out_lb), priority=0, match=(1), actions=(next;)
> 44. metadata=0xff0002, priority 0, cookie 0x7d3b6a9
> resubmit(,45)
>   *  Logical datapath: "vpc-DDB30485-global" (b73400bb-28e2-4335-a7cf-dc35a4879841) [egress]
>   *  Logical flow: table=4 (ls_out_acl), priority=0, match=(1), actions=(next;)
> 45. metadata=0xff0002, priority 0, cookie 0xf9cf4d3a
> resubmit(,46)
>   *  Logical datapath: "vpc-DDB30485-global" (b73400bb-28e2-4335-a7cf-dc35a4879841) [egress]
>   *  Logical flow: table=5 (ls_out_qos_mark), priority=0, match=(1), actions=(next;)
> 46. metadata=0xff0002, priority 0, cookie 0x659b144
> resubmit(,47)
>   *  Logical datapath: "vpc-DDB30485-global" (b73400bb-28e2-4335-a7cf-dc35a4879841) [egress]
>   *  Logical flow: table=6 (ls_out_qos_meter), priority=0, match=(1), actions=(next;)
> 47. metadata=0xff0002, priority 0, cookie 0x8379d99a
> resubmit(,48)
>   *  Logical datapath: "vpc-DDB30485-global" (b73400bb-28e2-4335-a7cf-dc35a4879841) [egress]
>   *  Logical flow: table=7 (ls_out_stateful), priority=0, match=(1), actions=(next;)
> 48. metadata=0xff0002, priority 0, cookie 0x39bc111f
> resubmit(,49)
>   *  Logical datapath: "vpc-DDB30485-global" (b73400bb-28e2-4335-a7cf-dc35a4879841) [egress]
>   *  Logical flow: table=8 (ls_out_port_sec_ip), priority=0, match=(1), actions=(next;)
> 49. No match.
> drop
> 
> Final flow: ip,reg11=0xe,reg12=0xd,reg14=0x1,reg15=0x2,tun_id=0x10002ff0002,tun_src=192.168.0.13,tun_dst=192.168.0.7,tun_ipv6_src=::,tun_ipv6_dst=::,tun_gbp_id=0,tun_gbp_flags=0,tun_tos=0,tun_ttl=64,tun_erspan_ver=0,tun_flags=csum|key,metadata=0xff0002,in_port=1,vlan_tci=0x0000,dl_src=0a:00:3b:ef:7e:e1,dl_dst=00:00:00:00:00:00,nw_src=0.0.0.0,nw_dst=0.0.0.0,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=0
> Megaflow: recirc_id=0,ct_state=-new-est-rel-rpl-inv-trk,ct_label=0/0x1,eth,ip,tun_id=0x10002ff0002,tun_src=192.168.0.13,tun_dst=192.168.0.7,tun_tos=0,tun_flags=-df+csum+key,in_port=1,dl_src=0a:00:3b:ef:7e:e1,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00,nw_frag=no
> Datapath actions: drop
> 
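Since the trace shows the packet being dropped after the output port-security stage of the transit switch's egress pipeline ("No match." in table 49), one thing worth checking is how the transit-switch ports are bound in each AZ's southbound database. A hypothetical check (the port name is taken from the ovn-ic-sbctl output below; exact record contents will differ per deployment):

```shell
# Transit-switch ports for the peer AZ should appear with type=remote,
# bound to the local interconnection gateway chassis.
ovn-sbctl find Port_Binding type=remote

# Inspect the binding of the local AZ's transit-switch port.
ovn-sbctl list Port_Binding vpc-DDB30485-rtb-3BEF7EE1-az2
```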
> Is it somehow possible to run the ovn-ic gateway role on the same node as a hypervisor?
> If yes, what could be wrong in this setup?
> 
> ovn-ic-sbctl show output:
> 
> # ovn-ic-sbctl --no-leader show
> availability-zone az1
>     gateway dev1
>         hostname: dev.local
>         type: vxlan
>             ip: 192.168.0.13
>         type: stt
>             ip: 192.168.0.13
>         port vpc-DDB30485-rtb-3BEF7EE1-az1
>             transit switch: vpc-DDB30485-global
>             address: ["0a:00:3b:ef:7e:e1 169.254.8.2/22"]
> availability-zone az2
>     gateway dev2
>         hostname: dev2.local
>         type: vxlan
>             ip: 192.168.0.7
>         type: stt
>             ip: 192.168.0.7
>         port vpc-DDB30485-rtb-3BEF7EE1-az2
>             transit switch: vpc-DDB30485-global
>             address: ["0a:01:3b:ef:7e:e1 169.254.8.100/22"]
> 
> 
> Thanks for help in advance.
> 
> Regards,
> Vladislav Odintsov
> 
