[ovs-discuss] How to restart raft cluster after a complete shutdown?

Matthew Booth mbooth at redhat.com
Wed Aug 26 08:21:36 UTC 2020


On Tue, 25 Aug 2020 at 17:45, Tony Liu <tonyliu0592 at hotmail.com> wrote:
>
> Start the first node to create the cluster.
> https://github.com/ovn-org/ovn/blob/master/utilities/ovn-ctl#L228
> https://github.com/openvswitch/ovs/blob/master/utilities/ovs-lib.in#L478
>
> Start the rest of the nodes to join the cluster.
> https://github.com/ovn-org/ovn/blob/master/utilities/ovn-ctl#L226
> https://github.com/openvswitch/ovs/blob/master/utilities/ovs-lib.in#L478
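
For context, what those two ovn-ctl code paths boil down to is
roughly the following ovsdb-tool invocations (NB database shown; file
paths and addresses are illustrative, not from my deployment):

  # First node: create a brand new single-node cluster from the schema
  ovsdb-tool create-cluster /etc/ovn/ovnnb_db.db \
      /usr/share/ovn/ovn-nb.ovsschema tcp:10.0.0.1:6643

  # Remaining nodes: create a local DB that joins the existing cluster
  ovsdb-tool join-cluster /etc/ovn/ovnnb_db.db OVN_Northbound \
      tcp:10.0.0.2:6643 tcp:10.0.0.1:6643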

Unfortunately this is precisely the problem: this doesn't work after
the cluster has already been created. The first node fails to come up
with:

2020-08-26T08:06:19Z|00003|reconnect|INFO|tcp:ovn-ovsdb-1.openstack.svc.cluster.local:6643: connecting...
2020-08-26T08:06:19Z|00004|reconnect|INFO|tcp:ovn-ovsdb-2.openstack.svc.cluster.local:6643: connecting...
2020-08-26T08:06:20Z|00005|reconnect|INFO|tcp:ovn-ovsdb-1.openstack.svc.cluster.local:6643: connection attempt timed out
2020-08-26T08:06:20Z|00006|reconnect|INFO|tcp:ovn-ovsdb-2.openstack.svc.cluster.local:6643: connection attempt timed out

This makes sense, because the first node can't come up without joining
a quorum, and it can't join a quorum because the other two nodes
aren't up.
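
As far as I can tell, the documented way out of this state (the
recovery I was hoping to avoid in my original mail below) is to
convert a surviving copy of the clustered DB back to a standalone DB
and bootstrap a new cluster from it, roughly like this (paths and
address are again illustrative):

  # Snapshot the clustered DB as a standalone DB
  ovsdb-tool cluster-to-standalone /tmp/ovnnb_standalone.db \
      /etc/ovn/ovnnb_db.db

  # Replace the old clustered DB with a fresh single-node cluster
  # seeded from that snapshot; the other nodes then re-join it
  mv /etc/ovn/ovnnb_db.db /etc/ovn/ovnnb_db.db.bak
  ovsdb-tool create-cluster /etc/ovn/ovnnb_db.db \
      /tmp/ovnnb_standalone.db tcp:10.0.0.1:6643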

I 'fixed' this by switching the StatefulSet's pod management policy
from OrderedReady to Parallel. This just means that all pods come up
simultaneously, rather than waiting for the first pod to become Ready
on its own, which can never happen here because that pod is waiting
for quorum. However, my bootstrapping mechanism relied on the
behaviour of OrderedReady, so I'm going to have to come up with a
solution for that.
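
For reference, the change is a one-line field in the StatefulSet
spec. Since podManagementPolicy is immutable on an existing
StatefulSet, this means recreating it. A sketch (names are
illustrative; the real manifest is the ovsdb.yaml linked in my
original mail):

  kubectl delete statefulset ovn-ovsdb
  # ovsdb.yaml now sets spec.podManagementPolicy: Parallel
  # instead of the default OrderedReady
  kubectl apply -f ovsdb.yaml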

Matt

>
> Tony
> > -----Original Message-----
> > From: discuss <ovs-discuss-bounces at openvswitch.org> On Behalf Of Matthew Booth
> > Sent: Tuesday, August 25, 2020 7:08 AM
> > To: ovs-discuss <ovs-discuss at openvswitch.org>
> > Subject: [ovs-discuss] How to restart raft cluster after a complete shutdown?
> >
> > I'm deploying ovsdb-server (and only ovsdb-server) in K8S as a
> > StatefulSet:
> >
> > https://github.com/openstack-k8s-operators/dev-tools/blob/master/ansible/files/ocp/ovn/ovsdb.yaml
> >
> > I'm going to replace this with an operator in due course, which may make
> > the following simpler. I'm not necessarily constrained to only things
> > which are easy to do in a StatefulSet.
> >
> > I've noticed an issue when I kill all 3 pods simultaneously: it is no
> > longer possible to start the cluster. The issue is presumably one of
> > quorum: when a node comes up it can't contact any other node to make
> > quorum, and therefore can't come up. All nodes are similarly affected,
> > so the cluster stays down. Ignoring Kubernetes, how is this situation
> > intended to be handled? Do I have to reduce it to a single-node
> > deployment, convert that to a new cluster and re-bootstrap it? This
> > wouldn't be
> > ideal. Is there any way, for example, I can bring up the first node
> > while asserting to that node that the other 2 are definitely down?
> >
> > Thanks,
> >
> > Matt
> > --
> > Matthew Booth
> > Red Hat OpenStack Engineer, Compute DFG
> >
> > Phone: +442070094448 (UK)
> >
> > _______________________________________________
> > discuss mailing list
> > discuss at openvswitch.org
> > https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>


-- 
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG

Phone: +442070094448 (UK)


