[ovs-dev] Re: [PATCH] ovsdb: fix data loss when OVSDB replication from itself

克赛号0181 dupf at dtdream.com
Tue Feb 7 01:31:22 UTC 2017


In K8S, Pacemaker is not usually used; we use keepalived to implement the VIP (the OVN service endpoint). When the master node fails, the VIP switches over before we notice the event, so the new master node will connect to itself for a short time (about 3-10 seconds).
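
To make the failover scenario concrete, below is a rough sketch of the kind of setup we use. The addresses, interface name, and VIP are made up for illustration, and the exact ovs-appctl target socket depends on how the OVN databases are started, but the keepalived keywords and the ovsdb-server replication commands are the standard ones:

    # keepalived.conf fragment on each OVN central node (illustrative values)
    vrrp_instance ovn_vip {
        state BACKUP
        interface eth0
        virtual_router_id 51
        priority 100
        virtual_ipaddress {
            192.168.0.100
        }
    }

    # Every node, including the one currently holding the VIP, points
    # OVSDB replication at the VIP rather than at a peer's real address:
    ovs-appctl -t ovsdb-server ovsdb-server/set-active-ovsdb-server tcp:192.168.0.100:6641
    ovs-appctl -t ovsdb-server ovsdb-server/connect-active-ovsdb-server

    # When the old master dies, keepalived moves 192.168.0.100 to the new
    # master before anything has a chance to run
    # ovsdb-server/disconnect-active-ovsdb-server there, so for a few
    # seconds the new master is replicating from itself through the VIP.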




------------------------------------------------------------------
From: Guru Shetty <guru at ovn.org>
Sent: Friday, February 3, 2017 00:50
To: 姜尚0387 <ligs at dtdream.com>
Cc: Andy Zhou <azhou at ovn.org>; ovs-dev <ovs-dev at openvswitch.org>; 克赛号0181 <dupf at dtdream.com>
Subject: Re: [ovs-dev] [PATCH] ovsdb: fix data loss when OVSDB replication from itself


On 31 January 2017 at 19:27, Guoshuai Li <ligs at dtdream.com> wrote:



This patch removes the timing dependency between IP migration and promoting the OVSDB service, which arises at master/slave switchover time and may not be controllable by the user.

If this dependency must be handled at a higher level, not every clustering program can handle it easily. For example, when building an OVSDB cluster with K8S it is difficult to handle this dependency.

Assume that I don't know anything about Kubernetes and OVSDB replication. Can you clearly explain how you plan to use it with Kubernetes and what it is that does not work right now?


Replication of OVSDB onto itself seems to be a configuration issue. I don't see why such a configuration is ever useful in practice, and it should probably be blocked at a higher level.

Is there something special about K8S where OVSDB is expected to replicate from itself? If so, please explain.





