[ovs-discuss] [ovs-dev] ovn-controller is taking 100% CPU all the time in one deployment

Han Zhou zhouhan at gmail.com
Fri Aug 30 20:35:15 UTC 2019


On Fri, Aug 30, 2019 at 1:25 PM Numan Siddique <nusiddiq at redhat.com> wrote:
>
> Hi Han,
>
> I am thinking of this approach to solve this problem. I still need to test it.
> If you have any comments or concerns, do let me know.
>
>
> **************************************************
> diff --git a/northd/ovn-northd.c b/northd/ovn-northd.c
> index 9a2222282..a83b56362 100644
> --- a/northd/ovn-northd.c
> +++ b/northd/ovn-northd.c
> @@ -6552,6 +6552,41 @@ build_lrouter_flows(struct hmap *datapaths, struct hmap *ports,
>
>          }
>
> +        /* Handle GARP reply packets received on a distributed router gateway
> +         * port. GARP reply broadcast packets could be sent by external
> +         * switches. We don't want them to be handled by all the
> +         * ovn-controllers if they receive them. So add a priority-92 flow to
> +         * apply the put_arp action on the redirect chassis and drop them on
> +         * other chassis.
> +         * Note that we are already adding a priority-90 logical flow in the
> +         * table S_ROUTER_IN_IP_INPUT to apply the put_arp action if
> +         * arp.op == 2.
> +         * */
> +        if (op->od->l3dgw_port && op == op->od->l3dgw_port
> +                && op->od->l3redirect_port) {
> +            for (int i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
> +                ds_clear(&match);
> +                ds_put_format(&match,
> +                              "inport == %s && is_chassis_resident(%s) && "
> +                              "eth.bcast && arp.op == 2 && arp.spa == %s/%u",
> +                              op->json_key, op->od->l3redirect_port->json_key,
> +                              op->lrp_networks.ipv4_addrs[i].network_s,
> +                              op->lrp_networks.ipv4_addrs[i].plen);
> +                ovn_lflow_add(lflows, op->od, S_ROUTER_IN_IP_INPUT, 92,
> +                              ds_cstr(&match),
> +                              "put_arp(inport, arp.spa, arp.sha);");
> +                ds_clear(&match);
> +                ds_put_format(&match,
> +                              "inport == %s && !is_chassis_resident(%s) && "
> +                              "eth.bcast && arp.op == 2 && arp.spa == %s/%u",
> +                              op->json_key, op->od->l3redirect_port->json_key,
> +                              op->lrp_networks.ipv4_addrs[i].network_s,
> +                              op->lrp_networks.ipv4_addrs[i].plen);
> +                ovn_lflow_add(lflows, op->od, S_ROUTER_IN_IP_INPUT, 92,
> +                              ds_cstr(&match), "drop;");
> +            }
> +        }
> +
>          /* A set to hold all load-balancer vips that need ARP responses. */
>          struct sset all_ips = SSET_INITIALIZER(&all_ips);
>          int addr_family;
> *************************************************
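>
> For illustration only (the names below are made up), take a router with a
> distributed gateway port whose json_key is "lrp-ext", a chassisredirect
> port "cr-lrp-ext", and the subnet 172.24.4.1/24 on that port. The loop
> above would then add two priority-92 flows in S_ROUTER_IN_IP_INPUT,
> roughly:
>
>     match:  inport == "lrp-ext" && is_chassis_resident("cr-lrp-ext") &&
>             eth.bcast && arp.op == 2 && arp.spa == 172.24.4.0/24
>     action: put_arp(inport, arp.spa, arp.sha);
>
>     match:  inport == "lrp-ext" && !is_chassis_resident("cr-lrp-ext") &&
>             eth.bcast && arp.op == 2 && arp.spa == 172.24.4.0/24
>     action: drop;
>
> So GARP replies from the external network would only trigger put_arp on
> the chassis where "cr-lrp-ext" is resident and be dropped everywhere else.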
>
> If a physical switch sends GARP request packets, we have existing logical
> flows which handle them only on the gateway chassis.
>
> But if the physical switch sends GARP reply packets, then these packets
> are handled by the ovn-controllers on every chassis where bridge mappings
> are configured. I think it's good enough if the gateway chassis handles
> these packets.
>
> In the deployment where we are seeing this issue, the physical switch
> sends GARP reply packets.
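>
> (As a side note, one way to double-check where these packets get processed
> is to dump the router's logical flows, e.g. with a hypothetical router
> named lr0:
>
>     ovn-sbctl lflow-list lr0 | grep 'arp.op == 2'
>
> and look at which of the matching flows carry is_chassis_resident().)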
>
> Thanks
> Numan
>
>
Hi Numan,

I think both GARP request and reply should be handled on all chassis. It
should work not only for physical switches, but also for virtual workloads.
At least our current use cases rely on that.

Thanks,
Han