[ovs-dev] [RFC PATCH ovn] Introduce representor port plugging support

Han Zhou zhouhan at gmail.com
Thu Jun 10 06:36:02 UTC 2021


On Thu, May 13, 2021 at 9:25 AM Frode Nordahl <frode.nordahl at canonical.com>
wrote:
>
> On Thu, May 13, 2021 at 5:12 PM Ilya Maximets <i.maximets at ovn.org> wrote:
> >
> > On 5/9/21 4:03 PM, Frode Nordahl wrote:
> > > Introduce plugging module that adds and removes ports on the
> > > integration bridge, as directed by Port_Binding options.
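> > >
> > > For illustration, the option names here being placeholders rather
> > > than a settled API, a CMS could request this via the NB database:
> > >
> > >   ovn-nbctl lsp-set-options lsp0 plug-type=representor
> > >
> > > ovn-northd copies LSP options into the corresponding Port_Binding,
> > > from where ovn-controller can act on them.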
> > >
> > > Traditionally it has been the CMS's responsibility to create Virtual
> > > Interfaces (VIFs) as part of the instance (Container, Pod, Virtual
> > > Machine, etc.) life cycle, and to subsequently manage plug/unplug
> > > operations on the Open vSwitch integration bridge.
> > >
> > > With the advent of NICs connected to multiple distinct CPUs we can
> > > have a topology where the instance runs on one host while Open
> > > vSwitch and OVN run on a different host, the smartnic CPU.
> > >
> > > The act of plugging and unplugging the representor port in Open
> > > vSwitch running on the smartnic host CPU would be the same for
> > > every smartnic variant (thanks to the devlink-port[0][1]
> > > infrastructure) and every CMS (Kubernetes, LXD, OpenStack, etc.).
> > > As such it is natural to extend OVN to provide this common
> > > functionality through its CMS-facing API.
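> > >
> > > For example, the generic sequence that OVN would automate looks
> > > roughly like this with today's CLI tools (port names and output
> > > shape vary by driver and kernel):
> > >
> > >   # On the smartnic control plane CPU: enumerate representors.
> > >   devlink port show
> > >   # -> pci/0000:03:00.0/3: type eth netdev pf0vf2 flavour pcivf
> > >   #                        pfnum 0 vfnum 2
> > >
> > >   # Plug the representor for VF 2 into the integration bridge.
> > >   ovs-vsctl add-port br-int pf0vf2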
> >
> > Hi, Frode.  Thanks for putting this together, but it doesn't look
> > natural to me.  OVN, AFAIK, has never touched physical devices or
> > interacted with the kernel directly.  This change introduces completely
> > new functionality inside OVN.  To the same effect, we could run a fully
> > separate service on these smartnic CPUs that would do the plugging
> > and configuration job for the CMS.  You could even make it independent
> > of a particular CMS by creating a REST API for it or whatever.
> > This would additionally allow using the same service for non-OVN setups.
>
> Ilya,
>
> Thank you for taking the time to comment, much appreciated.
>
> Yes, this is new functionality, but NICs with separate control plane
> CPUs and isolation from the host are also new, so this is one proposal
> for how we could go about enabling their use.
>
> The OVN controller already gets pretty close to the physical realm
> today by maintaining patch ports in Open vSwitch based on the bridge
> mapping configuration and the presence of bridges to physical
> interfaces. It also reacts to events of physical interfaces being
> plugged into the Open vSwitch instance it manages, albeit to date some
> other entity has been responsible for actually adding the port to the
> bridge.
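>
> As a concrete example of that proximity, ovn-controller creates and
> maintains those patch ports based on nothing more than configuration
> like:
>
>   ovs-vsctl set Open_vSwitch . \
>       external-ids:ovn-bridge-mappings=physnet1:br-phys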
>
> The rationale for proposing to use the OVN database for coordinating
> this is that the information about which ports to bind, and where to
> bind them is already there. The timing of the information flow from
> the CMS is also suitable for the task.
>
> OVN relies on OVS library code, and all the necessary libraries for
> interfacing with the kernel through netlink and friends are there or
> would be easy to add. The rationale for using the netlink-devlink
> interface is that it provides a generic infrastructure for these types
> of NICs, so by using it we should be able to support most, if not all,
> variants of these cards.
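>
> To make that concrete, the devlink CLI is a front end to the very same
> netlink-devlink interface, and already enumerates ports generically
> across vendors, e.g.:
>
>   devlink -j port show
>
> which reports each port's flavour (physical, pcipf, pcivf) and its
> representor netdev name in machine-readable form.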
>
>
> Providing a separate OVN service to do the task could work, but it
> would come at the cost of an extra SB DB connection, IDL and set of
> monitors.
>
> I fear it would be quite hard to build a whole separate project with
> its own API; it feels like a lot of duplicated effort when the flow of
> data and the APIs in OVN already align so well with the needs of CMSs
> interested in using this.
>
> > Interactions with physical devices also make OVN Linux-dependent,
> > at least for this use case, IIUC.
>
> This specific bit would be Linux-specific in the first iteration, yes.
> But the vendors manufacturing and distributing the hardware often
> have drivers for other platforms, so I am sure the necessary
> infrastructure will become available there too over time, if it is not
> there already.
>
> We do currently have platform-specific macros in the OVN build system,
> so we could enable the functionality when built on a compatible
> platform.
>
> > Maybe others have different opinions.
>
> I appreciate your opinion, and enjoy discussing this topic.
>
> > Another thought is that there is, obviously, a network connection
> > between the host and the smartnic system.  Maybe it's possible to just
> > add an extra remote to the local ovsdb-server so that a CMS daemon on
> > the host system could add interfaces over the network connection?
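> >
> > (E.g. something like the following on the smartnic side, just to
> > illustrate the idea; access control would, of course, be needed:
> >
> >   ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6640
> >
> > after which the host-side CMS daemon could reach the local
> > ovsdb-server over TCP.)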
>
> There are a few issues with such an approach. One of the main goals of
> providing and using a NIC with control plane CPUs is having an extra
> layer of security and isolation, separate from the hypervisor host the
> card happens to share a PCI complex with and draw power from.
> Requiring a connection between the two for operation would defeat this
> purpose.
>
> In addition to that, this class of cards provides visibility into
> kernel interfaces, enumeration of representor ports, etc. only from
> the NIC control plane CPU side of the PCI complex; this information is
> not exposed to the host. So if a hypervisor host CMS agent were to do
> the plugging through a remote ovsdb connection, it would have to
> communicate with something else running on the NIC control plane CPU
> to retrieve the information it needs before it could know what to
> relay back over the ovsdb connection.
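>
> The asymmetry is easy to see with today's tooling (illustrative; the
> details vary by card):
>
>   # On the NIC control plane CPU:
>   devlink port show   # lists the physical port and pcipf/pcivf
>                       # representors
>
>   # On the hypervisor host:
>   devlink port show   # shows no representors at all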
>
> --
> Frode Nordahl
>
> > Best regards, Ilya Maximets.

Here are my 2 cents.

Initially I had concerns similar to Ilya's, and it seemed OVN should stay
away from physical interface plugging. As a reference, here is how
ovn-kubernetes does it without adding anything to OVN:
https://docs.google.com/document/d/11IoMKiohK7hIyIE36FJmwJv46DEBx52a4fqvrpCBBcg/edit?usp=sharing

However, thinking more about it, the approach proposed in this patch just
expands how OVN can bind ports, utilizing OVN's existing communication
channel (the OVSDB connections). If all the information regarding port
binding can be specified by the CMS from the NB, then it is not unnatural
for ovn-controller to perform interface binding directly (instead of
passively accepting whatever is attached by the CMS). This kind of
information already exists to some extent - the "requested_chassis"
option used by OpenStack - and this idea just extends it down to a
specific interface. The difference is that "requested_chassis" is used
for validation only, whereas now we want to directly act on it. So, at
least for my part, I don't have a strong opinion against the idea.
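
For reference, "requested_chassis" today is just a hint used for
validation, set as a plain LSP option:

  ovn-nbctl lsp-set-options lsp0 requested-chassis=hv1

The proposal effectively extends the same options namespace with enough
detail (down to a specific interface) for ovn-controller to create the
attachment itself rather than merely validate it.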

There are some benefits:
1) The mechanism can be reused by different CMSes, which may simplify CMS
implementations.
2) Compared with the ovn-k8s approach, it reuses OVN's communication
channel, which avoids an extra CMS communication channel on the smart NIC
side. (Of course, this could also be achieved with a connection between
the BM and the smart NIC exposing a *restricted* API just to convey the
necessary information.)

On the negative side, it would increase OVN's complexity and, as Ilya
mentioned, potentially break OVN's platform independence. To avoid this,
I think the *plugging* module itself needs to be independent and
pluggable, extensible through self-contained plugins. Each plugin would
define what information is needed in the LSP's "options" and implement
the corresponding driver. With this approach, even regular VIFs could be
attached by ovn-controller, provided the CMS can tell it the interface
name. Anyway, this is just my brief thinking; see the sketch below.
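
As a sketch of what such a plugin's input could look like (all option
names hypothetical), a "representor" plugin might consume:

  ovn-nbctl lsp-set-options lsp0 \
      plug-type=representor plug-pf=0 plug-vf=2

translating the pf/vf numbers into a representor netdev via devlink,
while a trivial plugin covering the regular-VIF case would only need
something like plug-type=name plug-netdev=tap0.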

Thanks,
Han

