[ovs-dev] [PATCH RFC v2 0/8] Introduce connection tracking tc offload

Paul Blakey paulb at mellanox.com
Wed Jul 24 15:44:27 UTC 2019


On 7/19/2019 12:59 AM, Marcelo Ricardo Leitner wrote:
>
> Hi Paul,
>
> Sometimes I'm seeing ghosts^Wbogus values with nat:
>
> [root at localhost ~]# ovs-ofctl dump-flows br0
>   cookie=0x0, duration=788.595s, table=0, n_packets=3, n_bytes=180, priority=50,ct_state=-trk,tcp,in_port="ns2-veth-ab" actions=ct(table=0,zone=2)
>   cookie=0x0, duration=788.584s, table=0, n_packets=0, n_bytes=0, priority=50,ct_state=-trk,tcp,in_port="ns1-veth-ab" actions=ct(table=0,zone=2)
>   cookie=0x0, duration=788.589s, table=0, n_packets=3, n_bytes=180, priority=50,ct_state=+new+trk,tcp,in_port="ns2-veth-ab" actions=ct(commit,zone=2,nat(src=192.168.0.30)),output:"ns1-veth-ab"
>   cookie=0x0, duration=788.551s, table=0, n_packets=0, n_bytes=0, priority=50,ct_state=+est+trk,tcp,in_port="ns1-veth-ab" actions=ct(zone=2,nat),output:"ns2-veth-ab"
>   cookie=0x0, duration=788.546s, table=0, n_packets=0, n_bytes=0, priority=50,ct_state=+est+trk,tcp,in_port="ns2-veth-ab" actions=ct(zone=2,nat),output:"ns1-veth-ab"
>   cookie=0x0, duration=788.531s, table=0, n_packets=22, n_bytes=1672, priority=10 actions=NORMAL
>
> [root at localhost ~]# cat /proc/net/nf_conntrack
> ipv4     2 tcp      6 26 SYN_SENT src=192.168.0.2 dst=192.168.0.1 sport=41524 dport=5001 [UNREPLIED] src=192.168.0.1 dst=233.185.30.138 sport=5001 dport=41524 mark=0 secctx=system_u:object_r:unlabeled_t:s0 zone=2 use=2
>
> Note the 'dst=233.185.30.138' where it should have been 192.168.0.30.
> Interesting that it is always this address.
>
> Here, it worked:
> ipv4     2 tcp      6 58 CLOSE_WAIT src=192.168.0.2 dst=192.168.0.1 sport=41616 dport=5001 src=192.168.0.1 dst=192.168.0.30 sport=5001 dport=41616 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 zone=2 use=2
>
> I cannot reproduce this with direct tc tests and neither with
> hw_offload=false.
>
> I'm using:
> kernel: 192f0f8e9db7efe4ac98d47f5fa4334e43c1204d + recirc_id patches
> iproute2: 0f48f9f46ae83c042d36c1208b0f79966f92a951 + act_ct patches
> ovs: c99d14775f78cb38b2109add063f58201ba07652 + this series (including the fixup)


Hi Marcelo,

Thanks for the test, I reproduced it on my end.

The bug is that we supply a max range of 0.0.0.0 to act_ct when no max
range is specified in the OpenFlow rule.

act_ct then gets min range 192.168.0.30 and max range 0.0.0.0, and uses
that 0.0.0.0 as the max; the bogus IP you see is the result of conntrack's
internal NAT address-selection logic, which is why it depends on the
previous test.
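
Roughly, on the kernel side it ends up like this (an illustrative sketch
only, not the actual act_ct parse code; the helper name is made up, while
the TCA_CT_NAT_* attributes, nla_get_in_addr() and struct nf_nat_range2
are real names):

/* Illustrative sketch only: taking TCA_CT_NAT_IPV4_MAX verbatim turns the
 * all-zeros attribute from OVS into the upper bound of the NAT range. */
static void sketch_ct_nat_parse_ipv4(struct nlattr **tb,
                                     struct nf_nat_range2 *range)
{
    if (tb[TCA_CT_NAT_IPV4_MIN]) {
        range->flags |= NF_NAT_RANGE_MAP_IPS;
        range->min_addr.ip = nla_get_in_addr(tb[TCA_CT_NAT_IPV4_MIN]);

        /* OVS currently always emits the MAX attribute, so a flow with
         * only nat(src=192.168.0.30) yields min=192.168.0.30 and
         * max=0.0.0.0, and conntrack's address selection then produces
         * the bogus value. */
        if (tb[TCA_CT_NAT_IPV4_MAX])
            range->max_addr.ip = nla_get_in_addr(tb[TCA_CT_NAT_IPV4_MAX]);
    }
}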


This can be fixed in this RFC as below, or in act_ct, by interpreting an
all-zeros max the same as an unspecified one and using min instead.
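
Something like this in act_ct could do it (a sketch of the idea only; the
helper is hypothetical, while struct nf_nat_range2, NF_NAT_RANGE_MAP_IPS,
NFPROTO_* and ipv6_addr_any() are existing kernel names):

/* Hypothetical helper, not actual act_ct code: fall back to the min
 * address when userspace sent an all-zeros max. */
static void sketch_ct_nat_fixup_max(struct nf_nat_range2 *range, u8 family)
{
    if (!(range->flags & NF_NAT_RANGE_MAP_IPS))
        return;

    if (family == NFPROTO_IPV4) {
        if (!range->max_addr.ip)
            range->max_addr.ip = range->min_addr.ip;
    } else if (family == NFPROTO_IPV6) {
        if (ipv6_addr_any(&range->max_addr.in6))
            range->max_addr.in6 = range->min_addr.in6;
    }
}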


For the OVS side, please try the fix below.
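With either change, the conntrack entry should show dst=192.168.0.30 in the
reply tuple, like the working CLOSE_WAIT entry above.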

Thanks,

Paul.

-----

diff --git a/lib/tc.c b/lib/tc.c
index eacfe4a..c2658d1 100644
--- a/lib/tc.c
+++ b/lib/tc.c
@@ -1948,13 +1948,17 @@ nl_msg_put_act_ct(struct ofpbuf *request, struct tc_action *action)
                 if (action->ct.range.ip_family == AF_INET) {
                     nl_msg_put_be32(request, TCA_CT_NAT_IPV4_MIN,
                                     action->ct.range.min_addr.ipv4);
-                    nl_msg_put_be32(request, TCA_CT_NAT_IPV4_MAX,
-                                    action->ct.range.max_addr.ipv4);
+                    if (action->ct.range.max_addr.ipv4)
+                        nl_msg_put_be32(request, TCA_CT_NAT_IPV4_MAX,
+                                        action->ct.range.max_addr.ipv4);
                 } else if (action->ct.range.ip_family == AF_INET6) {
                     nl_msg_put_in6_addr(request, TCA_CT_NAT_IPV6_MIN,
                                         &action->ct.range.min_addr.ipv6);
-                    nl_msg_put_in6_addr(request, TCA_CT_NAT_IPV6_MAX,
-                                        &action->ct.range.max_addr.ipv6);
+                    if (!is_all_zeros(&action->ct.range.max_addr.ipv6,
+                                      sizeof(action->ct.range.max_addr.ipv6))) {
+                        nl_msg_put_in6_addr(request, TCA_CT_NAT_IPV6_MAX,
+                                            &action->ct.range.max_addr.ipv6);
+                    }
                 }
 
                 if (action->ct.range.min_port) {



