[ovs-discuss] OVSDB Managers vs. ovsdb-server

Matan Rosenberg matan129 at gmail.com
Fri Nov 29 17:03:26 UTC 2019

Thanks for the reply!

I guess I thought there was a distinction between the ovsdb case and the
OpenFlow case, since the latter is held non-persistently by the OVS switch,
so my source of truth will be the app's DB;
also, in my experience, applying lots (1,000+) of OF rules on a datapath
is done almost instantly by ovs-vswitchd (and the kernel DP), but adding
bridges might take longer than that, especially when there is a large
number of bridges*.

So, from an architectural perspective, I'm more worried about the ovsdb(s)
state getting out of sync with my source of truth, which is my app DB
(suppose it's a DB with ACID guarantees, e.g. MySQL).

(*) The actual ovsdb transaction happens quite quickly, but it may take a
long time for ovs-vswitchd to ack it and apply the changes.

How'd you approach having multiple ovsdb-servers which have to be kept in a
consistent state? (Not replicated, but consistent.)

I suppose I could get rid of the app DB and just keep the state distributed
between all the OVS hosts, but that approach is hard to modify or query,
mainly because of the lack of client libraries for it (in Python or Java).
There's Ryu, but it's very cumbersome to work with compared to,
say, any SQL library.
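That said, OVSDB's wire protocol is plain JSON-RPC (RFC 7047), so even
without a dedicated client library you can assemble requests with the
standard library alone. Below is a sketch of the "transact" message that
adding a bridge involves; it is simplified (a real bridge also needs Port
and Interface rows, which ovs-vsctl creates for you), and the function name
and shape are my own illustration, not from any library:

```python
import json


def make_add_bridge_transact(bridge_name, ovs_row_uuid, msg_id=1):
    """Build an RFC 7047 'transact' request that inserts a Bridge row and
    links it into the Open_vSwitch table's 'bridges' column.

    Simplified sketch: a complete bridge also needs Port and Interface
    rows referencing each other; ovs-vsctl generates those for you.
    """
    ops = [
        # Insert the new Bridge row; "uuid-name" lets the second
        # operation refer to it before the server assigns a real UUID.
        {
            "op": "insert",
            "table": "Bridge",
            "row": {"name": bridge_name},
            "uuid-name": "newbr",
        },
        # Add the new row's UUID to the root Open_vSwitch row's
        # 'bridges' set, so the bridge is actually reachable.
        {
            "op": "mutate",
            "table": "Open_vSwitch",
            "where": [["_uuid", "==", ["uuid", ovs_row_uuid]]],
            "mutations": [
                ["bridges", "insert", ["set", [["named-uuid", "newbr"]]]]
            ],
        },
    ]
    return {"method": "transact", "params": ["Open_vSwitch"] + ops, "id": msg_id}
```

The resulting dict serializes with json.dumps() and can be written to the
ovsdb-server's TCP or Unix socket as-is.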

Even more importantly, when the state is distributed, it's impossible to
make an atomic change to the system, because you can't have a distributed
transaction across multiple ovsdb-servers.

Ideally, it'd be best if a single ovsdb-server could serve multiple OVS
hosts.

How can an ovsdb manager know when ovs-vswitchd has applied a change?
(I want to mimic the "wait" functionality of ovs-vsctl.)
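As far as I can tell, ovs-vsctl's wait behavior is built on the next_cfg
and cur_cfg columns of the Open_vSwitch table: the client increments
next_cfg in its transaction, and ovs-vswitchd copies that value into
cur_cfg once it has finished applying the configuration. A manager can
mimic the wait by monitoring (or polling) the two columns. A minimal
sketch, where read_cfg is a placeholder hook for however your manager
fetches (cur_cfg, next_cfg) from the database:

```python
import time


def wait_for_ovs(read_cfg, timeout=30.0, poll=0.5):
    """Block until ovs-vswitchd has caught up with the configuration.

    read_cfg: callable returning (cur_cfg, next_cfg) from the
    Open_vSwitch table (a placeholder for your manager connection).
    Returns True once cur_cfg >= next_cfg, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        cur, nxt = read_cfg()
        if cur >= nxt:
            # ovs-vswitchd has applied everything up to our change.
            return True
        time.sleep(poll)
    return False
```

In a real manager you'd avoid polling and instead use the OVSDB "monitor"
method to get a notification when cur_cfg changes.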

On Fri, 29 Nov 2019 at 18:17, Ben Pfaff <blp at ovn.org> wrote:

> On Fri, Nov 29, 2019 at 11:54:13AM +0200, Matan Rosenberg wrote:
> > I'm trying to understand what OVSDB managers actually do.
> >
> > Say that I have two OVS hosts, each with its own ovs-vswitchd and
> > ovsdb-server.
> > Both OVS hosts are logically related, and I want to control them from a
> > single, central location.
> >
> > In the datapath plane, I can set-controller on all bridges on both OVS
> > hosts to the same OpenFlow controller, and thus achieve central
> > management.
> OK, yes.
> > In the management plane, I don't see how I can add/remove/manipulate
> > bridges on both OVS hosts from the same location, since each OVS host has
> > its own ovsdb-server.
> That's a strange comment, since each OVS host also has its own
> ovs-vswitchd, which the controller addresses individually.  The
> situation for OpenFlow controllers and OVSDB managers is exactly
> analogous.
> > (In my use case, I happen to add/remove bridges quite frequently).
> >
> > Of course, I can just issue two ovs-vsctl commands with a different DB
> > each time, but this isn't a very scalable approach, and it also has severe
> > consistency drawbacks, since the network topology is held in two places.
> > This is in direct contrast to the datapath plane, because I can actually
> > write an OpenFlow controller that determines the correct rules from, say,
> > an application DB (not ovsdb!).
> No, there's no contrast here.  The situation is exactly the same, since
> the OpenFlow controller and the switch each hold the network topology
> and the controller is responsible for pushing its view down to the switch.
> > I've stumbled upon the ovs-vsctl set-manager command, which, from what I
> > can see, can be used in conjunction with something like an OpenDaylight
> > OVSDB Manager.
> > If so, can a single OVSDB manager manage both OVS hosts?
> Yes.
> > And if so, what part does the local ovsdb-server play, if the network
> > topology is held with the OVSDB manager? Do I still actually need it, or
> > can ovs-vswitchd just directly query the manager?
> The local ovsdb-server is the actual database.  The manager just queries
> and updates it.  This is exactly the same as the OpenFlow situation.
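
For reference, pointing a central manager at each host's local
ovsdb-server is a one-liner per host. A sketch (6640 is the
IANA-registered OVSDB port; the address uses the standard ptcp: syntax):

```shell
# On each OVS host: have ovsdb-server accept manager connections
# on a passive TCP listener.
ovs-vsctl set-manager ptcp:6640

# The central manager then speaks JSON-RPC (monitor/transact) to each
# host's database at tcp:<host-ip>:6640.
```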