[ovs-discuss] [OVN] egress ACLs on Port Groups seem broken

Daniel Alvarez Sanchez dalvarez at redhat.com
Tue Jun 19 20:37:24 UTC 2018


Sorry, the problem seems to be that this logical flow is not added in the
Port Groups case for some reason (I had been looking at the wrong lflows
log earlier):

_uuid               : 5a1bce6c-e4ed-4a1f-8150-cb855bbac037
actions             : "reg0[0] = 1; next;"
external_ids        : {source="ovn-northd.c:2931", stage-name=ls_in_pre_acl}
logical_datapath    : 0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf
match               : ip
pipeline            : ingress
priority            : 100


Apparently, this code is not getting triggered for the Port Group case:
https://github.com/openvswitch/ovs/blob/master/ovn/northd/ovn-northd.c#L2930
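
For reference, the check that gates that flow is has_stateful_acl(), which
as I read it on master looks roughly like this (paraphrasing from memory,
so treat it as a sketch rather than the exact code):

static bool
has_stateful_acl(struct ovn_datapath *od)
{
    /* Walks only the Logical_Switch's own acls column; ACLs applied
     * through a Port_Group are never visited here. */
    for (size_t i = 0; i < od->nbs->n_acls; i++) {
        struct nbrec_acl *acl = od->nbs->acls[i];
        if (!strcmp(acl->action, "allow-related")) {
            return true;
        }
    }

    return false;
}

If that is right, for ACLs attached via Port Groups this returns false, so
build_pre_acls() never emits the priority-100 hint flow shown above.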




On Tue, Jun 19, 2018 at 10:09 PM, Daniel Alvarez Sanchez <
dalvarez at redhat.com> wrote:

> Hi folks,
>
> Sorry for not being clear enough. In the tcpdump we can see the SYN
> packets being sent by port1 and then retransmitted, as the response to
> that SYN apparently never reaches its destination. This is confirmed by
> the DP flows:
>
> $ sudo ovs-dpctl dump-flows
>
> recirc_id(0),in_port(3),eth(src=fa:16:3e:78:a2:cf,dst=fa:16:3e:bf:6f:51),eth_type(0x0800),ipv4(src=10.0.0.6,dst=168.0.0.0/252.0.0.0,proto=6,frag=no), packets:4, bytes:296, used:0.514s, flags:S, actions:4
>
> recirc_id(0),in_port(4),eth(src=fa:16:3e:bf:6f:51,dst=fa:16:3e:78:a2:cf),eth_type(0x0800),ipv4(src=128.0.0.0/128.0.0.0,dst=10.0.0.0/255.255.255.192,proto=6,frag=no),tcp(dst=32768/0x8000), packets:7, bytes:518, used:0.514s, flags:S., actions:drop
>
>
> $ sudo ovs-appctl ofproto/trace br-int in_port=20,tcp,dl_src=fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=169.254.169.254,tcp_dst=80 | ovn-detrace
>
> Flow: tcp,in_port=20,vlan_tci=0x0000,dl_src=fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=169.254.169.254,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0
>
> bridge("br-int")
> ----------------
> 0. in_port=20, priority 100
> set_field:0x8->reg13
> set_field:0x5->reg11
> set_field:0x1->reg12
> set_field:0x1->metadata
> set_field:0x4->reg14
> resubmit(,8)
> 8. reg14=0x4,metadata=0x1,dl_src=fa:16:3e:78:a2:cf, priority 50, cookie
> 0xe299b701
> resubmit(,9)
> 9. ip,reg14=0x4,metadata=0x1,dl_src=fa:16:3e:78:a2:cf,nw_src=10.0.0.6,
> priority 90, cookie 0x6581e351
> resubmit(,10)
>         * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
>         * Logical flow: table=1 (ls_in_port_sec_ip), priority=90, match=(inport == "8ea9d963-7e55-49a6-8be7-cc294278180a" && eth.src == fa:16:3e:78:a2:cf && ip4.src == {10.0.0.6}), actions=(next;)
> 10. metadata=0x1, priority 0, cookie 0x1c3ddeef
> resubmit(,11)
>         * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
>         * Logical flow: table=2 (ls_in_port_sec_nd), priority=0,
> match=(1), actions=(next;)
>
> ...
>
> 47. metadata=0x1, priority 0, cookie 0xf35c5784
> resubmit(,48)
>         * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [egress]
>         * Logical flow: table=7 (ls_out_stateful), priority=0, match=(1),
> actions=(next;)
> 48. metadata=0x1, priority 0, cookie 0x9546c56e
> resubmit(,49)
>         * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [egress]
>         * Logical flow: table=8 (ls_out_port_sec_ip), priority=0,
> match=(1), actions=(next;)
> 49. reg15=0x1,metadata=0x1, priority 50, cookie 0x58af7841
> resubmit(,64)
>         * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [egress]
>         * Logical flow: table=9 (ls_out_port_sec_l2), priority=50, match=(outport == "74db766c-2600-40f1-9ffa-255dc147d8a5"), actions=(output;)
> 64. priority 0
> resubmit(,65)
> 65. reg15=0x1,metadata=0x1, priority 100
> output:21
>
> Final flow: tcp,reg11=0x5,reg12=0x1,reg13=0x9,reg14=0x4,reg15=0x1,metadata=0x1,in_port=20,vlan_tci=0x0000,dl_src=fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=169.254.169.254,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0
> Megaflow: recirc_id=0,eth,tcp,in_port=20,vlan_tci=0x0000/0x1000,dl_src=fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=168.0.0.0/6,nw_frag=no
> Datapath actions: 4
>
>
>
> At this point I would've expected the connection to be in conntrack (but,
> if I'm not mistaken, this is not supported in ovn-trace :?), so the return
> packet would be dropped:
>
> $ sudo ovs-appctl ofproto/trace br-int in_port=21,tcp,dl_dst=fa:16:3e:78:a2:cf,dl_src=fa:16:3e:bf:6f:51,nw_dst=10.0.0.6,nw_src=169.254.169.254,tcp_dst=80 | ovn-detrace
> Flow: tcp,in_port=21,vlan_tci=0x0000,dl_src=fa:16:3e:bf:6f:51,dl_dst=fa:16:3e:78:a2:cf,nw_src=169.254.169.254,nw_dst=10.0.0.6,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0
>
> bridge("br-int")
> ----------------
> 0. in_port=21, priority 100
> set_field:0x9->reg13
> set_field:0x5->reg11
> set_field:0x1->reg12
> set_field:0x1->metadata
> set_field:0x1->reg14
> resubmit(,8)
> 8. reg14=0x1,metadata=0x1, priority 50, cookie 0x4017bca3
> resubmit(,9)
> 9. metadata=0x1, priority 0, cookie 0x5f2a07c6
> resubmit(,10)
>         * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
>         * Logical flow: table=1 (ls_in_port_sec_ip), priority=0,
> match=(1), actions=(next;)
> 10. metadata=0x1, priority 0, cookie 0x1c3ddeef
> resubmit(,11)
>         * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
>         * Logical flow: table=2 (ls_in_port_sec_nd), priority=0,
> match=(1), actions=(next;)
> ...
> 44. ip,reg15=0x4,metadata=0x1, priority 2001, cookie 0x3a87f6e9
> drop
>         * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [egress]
>         * Logical flow: table=4 (ls_out_acl), priority=2001,
> match=(outport == @neutron_pg_drop && ip), actions=(/* drop */)
>
> Final flow: tcp,reg11=0x5,reg12=0x1,reg13=0x8,reg14=0x1,reg15=0x4,metadata=0x1,in_port=21,vlan_tci=0x0000,dl_src=fa:16:3e:bf:6f:51,dl_dst=fa:16:3e:78:a2:cf,nw_src=169.254.169.254,nw_dst=10.0.0.6,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0
> Megaflow: recirc_id=0,eth,tcp,in_port=21,vlan_tci=0x0000/0x1000,dl_src=fa:16:3e:bf:6f:51,dl_dst=fa:16:3e:78:a2:cf,nw_src=128.0.0.0/1,nw_dst=10.0.0.0/26,nw_frag=no,tp_dst=0x40/0xffc0
> Datapath actions: drop
>
>
> After attempting a real connection I'm not seeing the connection in
> conntrack:
> $ sudo conntrack -L | grep "10.0.0.6"
> conntrack v1.4.4 (conntrack-tools): 134 flow entries have been shown.
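>
> The datapath conntrack can also be dumped directly through OVS as a
> cross-check (a variant I'd try, assuming this build has the
> dpctl/dump-conntrack command):
>
> $ sudo ovs-appctl dpctl/dump-conntrack | grep 10.0.0.6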
>
> As for the working case, I can see it when I do a curl to 169.254.169.254.
> $ sudo conntrack -L | grep "10.0.0.6"
> conntrack v1.4.4 (conntrack-tools): 73 flow entries have been shown.
> tcp      6 116 TIME_WAIT src=10.0.0.6 dst=169.254.169.254 sport=34566 dport=80 src=169.254.169.254 dst=10.0.0.6 sport=80 dport=34566 [ASSURED] mark=0 zone=9 use=1
> tcp      6 116 TIME_WAIT src=10.0.0.6 dst=169.254.169.254 sport=34566 dport=80 src=169.254.169.254 dst=10.0.0.6 sport=80 dport=34566 [ASSURED] mark=0 zone=8 use=1
>
>
> In the working case I can see the ct action:
>
> 13. ip,reg0=0x1/0x1,metadata=0x1, priority 100, cookie 0x181f32a1
> ct(table=14,zone=NXM_NX_REG13[0..15])
> drop
> -> A clone of the packet is forked to recirculate. The forked pipeline
> will be resumed at table 14.
> -> Sets the packet to an untracked state, and clears all the conntrack
> fields.
>         * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
>         * Logical flow: table=5 (ls_in_pre_stateful), priority=100,
> match=(reg0[0] == 1), actions=(ct_next;)
>
> Final flow: tcp,reg0=0x1,reg11=0x5,reg12=0x1,reg13=0x8,reg14=0x4,metadata=0x1,in_port=20,vlan_tci=0x0000,dl_src=fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=169.254.169.254,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=80,tcp_flags=0
> Megaflow: recirc_id=0,eth,tcp,in_port=20,vlan_tci=0x0000/0x1000,dl_src=fa:16:3e:78:a2:cf,nw_src=10.0.0.6,nw_dst=168.0.0.0/6,nw_frag=no,tcp_flags=0
> Datapath actions: ct(zone=8),recirc(0x6a)
>
> ===============================================================================
> recirc(0x6a) - resume conntrack with default ct_state=trk|new (use --ct-next to customize)
> ===============================================================================
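>
> To trace past the recirculation as if the connection were already
> established, the hint above suggests passing a custom conntrack state
> (a sketch; the exact --ct-next syntax may differ between versions):
>
> $ sudo ovs-appctl ofproto/trace br-int in_port=20,tcp,dl_src=fa:16:3e:78:a2:cf,dl_dst=fa:16:3e:bf:6f:51,nw_src=10.0.0.6,nw_dst=169.254.169.254,tcp_dst=80 --ct-next trk,est | ovn-detrace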
>
>
> Basically, I can spot the following differences:
>
> Non port groups:
> 10. metadata=0x1, priority 0, cookie 0x1c3ddeef
> resubmit(,11)
>         * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
>         * Logical flow: table=2 (ls_in_port_sec_nd), priority=0,
> match=(1), actions=(next;)
> 11. ip,metadata=0x1, priority 100, cookie 0x5da9a3af
> load:0x1->NXM_NX_XXREG0[96]
> resubmit(,12)
>         * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
>         * Logical flow: table=3 (ls_in_pre_acl), priority=100, match=(ip),
> actions=(reg0[0] = 1; next;)
> 12. metadata=0x1, priority 0, cookie 0x2383c6f0
> resubmit(,13)
>         * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
>         * Logical flow: table=4 (ls_in_pre_lb), priority=0, match=(1),
> actions=(next;)
> 13. ip,reg0=0x1/0x1,metadata=0x1, priority 100, cookie 0x181f32a1
> ct(table=14,zone=NXM_NX_REG13[0..15])
> drop
> -> A clone of the packet is forked to recirculate. The forked pipeline
> will be resumed at table 14.
> -> Sets the packet to an untracked state, and clears all the conntrack
> fields.
>
>
> Port groups:
> 10. metadata=0x1, priority 0, cookie 0x1c3ddeef
> resubmit(,11)
>         * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
>         * Logical flow: table=2 (ls_in_port_sec_nd), priority=0,
> match=(1), actions=(next;)
> 11. metadata=0x1, priority 0, cookie 0x145522b1
> resubmit(,12)
>         * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
>         * Logical flow: table=3 (ls_in_pre_acl), priority=0, match=(1),
> actions=(next;)
> 12. metadata=0x1, priority 0, cookie 0x2383c6f0
> resubmit(,13)
>         * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
>         * Logical flow: table=4 (ls_in_pre_lb), priority=0, match=(1),
> actions=(next;)
> 13. metadata=0x1, priority 0, cookie 0xa61e321f
> resubmit(,14)
>         * Logical datapath: "neutron-9d5615df-a7ba-4649-82f9-961a76fe6f64"
> (0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf) [ingress]
>         * Logical flow: table=5 (ls_in_pre_stateful), priority=0,
> match=(1), actions=(next;)
>
>
> OpenFlow:
>
> Non port groups:
>
>  cookie=0x5da9a3af, duration=798.461s, table=11, n_packets=59, n_bytes=9014, idle_age=132, priority=100,ip,metadata=0x1 actions=load:0x1->NXM_NX_XXREG0[96],resubmit(,12)
>  cookie=0x5da9a3af, duration=798.461s, table=11, n_packets=0, n_bytes=0, idle_age=798, priority=100,ipv6,metadata=0x1 actions=load:0x1->NXM_NX_XXREG0[96],resubmit(,12)
>  cookie=0x145522b1, duration=234138.077s, table=11, n_packets=4687, n_bytes=455491, idle_age=135, hard_age=65534, priority=0,metadata=0x1 actions=resubmit(,12)
>
>
> Port groups:
> cookie=0x145522b1, duration=234247.781s, table=11, n_packets=4746, n_bytes=461470, idle_age=0, hard_age=65534, priority=0,metadata=0x1 actions=resubmit(,12)
>
>
> From the ovn-northd man page:
>
> Ingress Table 3: from-lport Pre-ACLs
>
> This table prepares flows for possible stateful ACL processing in ingress
> table ACLs. It contains a priority-0 flow that simply moves traffic to the
> next table. If stateful ACLs are used in the logical datapath, a
> priority-100 flow is added that sets a hint (with reg0[0] = 1; next;) for
> table Pre-stateful to send IP packets to the connection tracker before
> eventually advancing to ingress table ACLs. If special ports such as route
> ports or localnet ports can't use ct(), a priority-110 flow is added to
> skip over stateful ACLs.
>
>
> So, for some reason, in both cases I see this Logical_Flow:
>
> _uuid               : 5a1bce6c-e4ed-4a1f-8150-cb855bbac037
> actions             : "reg0[0] = 1; next;"
> external_ids        : {source="ovn-northd.c:2931", stage-name=ls_in_pre_acl}
> logical_datapath    : 0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf
> match               : ip
> pipeline            : ingress
> priority            : 100
>
>
> Which apparently is responsible for adding the hint and putting the packet
> into conntrack, but I can't see the corresponding physical flow in the
> Port Groups case.
>
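> One way to cross-check whether ovn-controller installed it is to look the
> flow up by cookie; if I remember correctly, the OpenFlow cookie is the
> first 32 bits of the Logical_Flow UUID, so for the lflow above (a sketch,
> not verified on this setup):
>
> $ sudo ovs-ofctl dump-flows br-int table=11 cookie=0x5a1bce6c/-1
>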
> I'm still investigating, but if the lflow is there, it must be something
> in ovn-controller.
> Thanks,
>
> Daniel
>
>
>
> On Tue, Jun 19, 2018 at 1:07 AM, Han Zhou <zhouhan at gmail.com> wrote:
>
>> On Mon, Jun 18, 2018 at 1:43 PM, Daniel Alvarez Sanchez <
>> dalvarez at redhat.com> wrote:
>> >
>> > Hi all,
>> >
>> > I'm writing the code to implement port groups in networking-ovn
>> > (the OpenStack integration project with OVN). I found out that when I
>> > boot a VM, it looks like the egress traffic (from the VM) is not
>> > working properly. The VM port belongs to 3 Port Groups:
>> >
>> > 1. Default drop port group with the following ACLs:
>> >
>> > _uuid               : 0b092bb2-e97b-463b-a678-8a28085e3d68
>> > action              : drop
>> > direction           : from-lport
>> > external_ids        : {}
>> > log                 : false
>> > match               : "inport == @neutron_pg_drop && ip"
>> > name                : []
>> > priority            : 1001
>> > severity            : []
>> >
>> > _uuid               : 849ee2e0-f86e-4715-a949-cb5d93437847
>> > action              : drop
>> > direction           : to-lport
>> > external_ids        : {}
>> > log                 : false
>> > match               : "outport == @neutron_pg_drop && ip"
>> > name                : []
>> > priority            : 1001
>> > severity            : []
>> >
>> >
>> > 2. Subnet port group to allow DHCP traffic on that subnet:
>> >
>> > _uuid               : 8360a415-b7e1-412b-95ff-15cc95059ef0
>> > action              : allow
>> > direction           : from-lport
>> > external_ids        : {}
>> > log                 : false
>> > match               : "inport == @pg_b1a572c6_2331_4cfb_a892_3d9d7b0af70c && ip4 && ip4.dst == {255.255.255.255, 10.0.0.0/26} && udp && udp.src == 68 && udp.dst == 67"
>> > name                : []
>> > priority            : 1002
>> > severity            : []
>> >
>> >
>> > 3. Security group port group with the following rules:
>> >
>> > 3.1 Allow ICMP traffic:
>> >
>> > _uuid               : d12a749f-0f75-4634-aa20-6116e1d5d26d
>> > action              : allow-related
>> > direction           : to-lport
>> > external_ids        : {"neutron:security_group_rule_id"="9675d6df-56a1-4640-9a0f-1f88e49ed2b5"}
>> > log                 : false
>> > match               : "outport == @pg_d237185f_733f_4a09_8832_bcee773722ef && ip4 && ip4.src == 0.0.0.0/0 && icmp4"
>> > name                : []
>> > priority            : 1002
>> > severity            : []
>> >
>> > 3.2 Allow SSH traffic:
>> >
>> > _uuid               : 05100729-816f-4a09-b15c-4759128019d4
>> > action              : allow-related
>> > direction           : to-lport
>> > external_ids        : {"neutron:security_group_rule_id"="2a48979f-8209-4fb7-b24b-fff8d82a2ae9"}
>> > log                 : false
>> > match               : "outport == @pg_d237185f_733f_4a09_8832_bcee773722ef && ip4 && ip4.src == 0.0.0.0/0 && tcp && tcp.dst == 22"
>> > name                : []
>> > priority            : 1002
>> > severity            : []
>> >
>> >
>> > 3.3 Allow IPv4/IPv6 traffic from this same port group
>> >
>> >
>> > _uuid               : b56ce66e-da6b-48be-a66e-77c8cfd6ab92
>> > action              : allow-related
>> > direction           : to-lport
>> > external_ids        : {"neutron:security_group_rule_id"="5b0a47ee-8114-4b13-8d5b-b16d31586b3b"}
>> > log                 : false
>> > match               : "outport == @pg_d237185f_733f_4a09_8832_bcee773722ef && ip6 && ip6.src == $pg_d237185f_733f_4a09_8832_bcee773722ef_ip6"
>> > name                : []
>> > priority            : 1002
>> > severity            : []
>> >
>> >
>> > _uuid               : 7b68f430-41b5-414d-a2ed-6c548be53dce
>> > action              : allow-related
>> > direction           : to-lport
>> > external_ids        : {"neutron:security_group_rule_id"="299bd9ca-89fb-4767-8ae9-a738e98603fb"}
>> > log                 : false
>> > match               : "outport == @pg_d237185f_733f_4a09_8832_bcee773722ef && ip4 && ip4.src == $pg_d237185f_733f_4a09_8832_bcee773722ef_ip4"
>> > name                : []
>> > priority            : 1002
>> > severity            : []
>> >
>> >
>> > 3.4 Allow all egress (VM point of view) IPv4 traffic
>> >
>> > _uuid               : c5fbf0b7-6461-4f27-802e-b0d743be59e5
>> > action              : allow-related
>> > direction           : from-lport
>> > external_ids        : {"neutron:security_group_rule_id"="a4ffe40a-f773-41d6-bc04-40500d158f51"}
>> > log                 : false
>> > match               : "inport == @pg_d237185f_733f_4a09_8832_bcee773722ef && ip4"
>> > name                : []
>> > priority            : 1002
>> > severity            : []
>> >
>> >
>> >
>> > So, I boot a VM using this port and I can verify that ICMP and SSH
>> > traffic work fine while the egress traffic doesn't. From the VM I curl
>> > an IP living in a network namespace and this is what I see with tcpdump
>> > there:
>> >
>> > On the VM:
>> > $ ip r get 169.254.169.254
>> > 169.254.169.254 via 10.0.0.1 dev eth0  src 10.0.0.6
>> > $ curl 169.254.169.254
>> >
>> > On the hypervisor (haproxy listening on 169.254.169.254:80):
>> >
>> > $ sudo ip net e ovnmeta-0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf tcpdump -i any port 80 -vvn
>> > tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
>> > 21:59:47.106883 IP (tos 0x0, ttl 64, id 61543, offset 0, flags [DF], proto TCP (6), length 60)
>> >     10.0.0.6.34553 > 169.254.169.254.http: Flags [S], cksum 0x851c (correct), seq 2571046510, win 14020, options [mss 1402,sackOK,TS val 22740490 ecr 0,nop,wscale 2], length 0
>> > 21:59:47.106935 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
>> >     169.254.169.254.http > 10.0.0.6.34553: Flags [S.], cksum 0x5e31 (incorrect -> 0x34c0), seq 3215869181, ack 2571046511, win 28960, options [mss 1460,sackOK,TS val 200017176 ecr 22740490,nop,wscale 7], length 0
>> > 21:59:48.105256 IP (tos 0x0, ttl 64, id 61544, offset 0, flags [DF], proto TCP (6), length 60)
>> >     10.0.0.6.34553 > 169.254.169.254.http: Flags [S], cksum 0x5e31 (incorrect -> 0x8422), seq 2571046510, win 14020, options [mss 1402,sackOK,TS val 22740740 ecr 0,nop,wscale 2], length 0
>> > 21:59:48.105315 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
>> >     169.254.169.254.http > 10.0.0.6.34553: Flags [S.], cksum 0x5e31 (incorrect -> 0x30da), seq 3215869181, ack 2571046511, win 28960, options [mss 1460,sackOK,TS val 200018174 ecr 22740490,nop,wscale 7], length 0
>> > 21:59:49.526158 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
>> >     169.254.169.254.http > 10.0.0.6.34553: Flags [S.], cksum 0x5e31 (incorrect -> 0x2b4d), seq 3215869181, ack 2571046511, win 28960, options [mss 1460,sackOK,TS val 200019595 ecr 22740490,nop,wscale 7], length 0
>> > 21:59:50.109732 IP (tos 0x0, ttl 64, id 61545, offset 0, flags [DF], proto TCP (6), length 60)
>> >     10.0.0.6.34553 > 169.254.169.254.http: Flags [S], cksum 0x5e31 (incorrect -> 0x822d), seq 2571046510, win 14020, options [mss 1402,sackOK,TS val 22741241 ecr 0,nop,wscale 2], length 0
>> > 21:59:50.109795 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
>> >     169.254.169.254.http > 10.0.0.6.34553: Flags [S.], cksum 0x5e31 (incorrect -> 0x2906), seq 3215869181, ack 2571046511, win 28960, options [mss 1460,sackOK,TS val 200020178 ecr 22740490,nop,wscale 7], length 0
>> > 21:59:52.146800 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
>> >     169.254.169.254.http > 10.0.0.6.34553: Flags [S.], cksum 0x5e31 (incorrect -> 0x2110), seq 3215869181, ack 2571046511, win 28960, options [mss 1460,sackOK,TS val 200022216 ecr 22740490,nop,wscale 7], length 0
>> >
>> >
>> > Logical_Flow entry in the SB database:
>> >
>> > _uuid               : 1797e859-8c8e-4ad5-8e83-bd5f3be6da24
>> > actions             : "next;"
>> > external_ids        : {source="ovn-northd.c:3186", stage-hint="c5fbf0b7", stage-name=ls_in_acl}
>> > logical_datapath    : 0cf12eb0-fdb3-4087-98b0-9c52cafd0bdf
>> > match               : "inport == @pg_d237185f_733f_4a09_8832_bcee773722ef && ip4"
>> > pipeline            : ingress
>> > priority            : 2002
>> > table_id            : 6
>> > hash                : 0
>> >
>> >
>> > ovn-sbctl lflow-list
>> >
>> >   table=6 (ls_in_acl          ), priority=2002 , match=(inport == @pg_b1a572c6_2331_4cfb_a892_3d9d7b0af70c && ip4 && ip4.dst == {255.255.255.255, 10.0.0.0/26} && udp && udp.src == 68 && udp.dst == 67), action=(next;)
>> >   table=6 (ls_in_acl          ), priority=2002 , match=(inport == @pg_d237185f_733f_4a09_8832_bcee773722ef && ip4), action=(next;)
>> >   table=6 (ls_in_acl          ), priority=2001 , match=(inport == @neutron_pg_drop && ip), action=(/* drop */)
>> >
>> >
>> > These are the OpenFlow rules installed in table 14:
>> >
>> >  cookie=0x0, duration=19223.716s, table=14, n_packets=0, n_bytes=0, idle_age=19223, priority=2002,udp,reg14=0x4,metadata=0x1,tp_src=68,tp_dst=67 actions=conjunction(2,1/2)
>> >  cookie=0x0, duration=19223.716s, table=14, n_packets=0, n_bytes=0, idle_age=19223, priority=2002,udp,metadata=0x1,nw_dst=255.255.255.255,tp_src=68,tp_dst=67 actions=conjunction(2,2/2)
>> >  cookie=0xd41e70c, duration=19223.844s, table=14, n_packets=0, n_bytes=0, idle_age=19223, priority=2001,ipv6,reg14=0x4,metadata=0x1 actions=drop
>> >  cookie=0xd41e70c, duration=19223.844s, table=14, n_packets=0, n_bytes=0, idle_age=19223, priority=2001,ip,reg14=0x4,metadata=0x1 actions=drop
>> >
>> >
>> > @Han, do you have any pointers as to why this could be failing? Is
>> > there something you want me to check in this setup?
>> >
>> >
>> Hi Daniel,
>>
>> Sorry, but I didn't see any failure in the tcpdump.
>> I see traffic in both directions. The VM 10.0.0.6 is sending requests to
>> 169.254.169.254:80, and they are being responded to. So what's getting
>> dropped?
>> If you suspect any packet drops, as mentioned by Ben you can do ovn-trace.
>> You can also do ovs-dpctl dump-flows | grep drop, and then use the dumped
>> flow to do ovs-appctl ofproto/trace "<the dp flow>" | ovn-detrace.
>> Finally, if you suspect a specific ACL is dropping the packets, you can
>> re-add that ACL manually with the --log option, and you should be able to
>> see the packets being dropped in ovn-controller.log.
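>> For example, something along these lines (hedged; check ovn-nbctl(8) for
>> the exact acl-add syntax, and <switch> is a placeholder):
>>
>> $ ovn-nbctl --log --severity=info acl-add <switch> to-lport 1001 'outport == @neutron_pg_drop && ip' drop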
>>
>> Thanks,
>> Han
>>
>>
>>
>