[ovs-dev] OVN and OpenStack Provider Networks

Salvatore Orlando salv.orlando at gmail.com
Tue Jun 23 21:58:25 UTC 2015


I'm afraid I have to start bike shedding on this thread too.
Apparently that's what I do best.

More inline,
Salvatore

On 23 June 2015 at 23:23, Russell Bryant <rbryant at redhat.com> wrote:

> On 06/23/2015 05:10 PM, Ben Pfaff wrote:
> > On Tue, Jun 23, 2015 at 04:54:20PM -0400, Russell Bryant wrote:
> >> On 06/23/2015 04:17 PM, Ben Pfaff wrote:
> >>> On Mon, Jun 22, 2015 at 02:34:07PM -0400, Russell Bryant wrote:
> >>>> On 06/15/2015 08:00 PM, Ben Pfaff wrote:
> >>>>> On Wed, Jun 10, 2015 at 03:13:54PM -0400, Russell Bryant wrote:
> >>>>>> Provider Networks
> >>>>>> =================
> >>>>>>
> >>>>>> OpenStack Neutron currently has a feature referred to as "provider
> >>>>>> networks".  This is used as a way to define existing physical
> networks
> >>>>>> that you would like to integrate into your environment.
> >>>>>>
> >>>>>> In the simplest case, it can be used in environments where they have
> >>>>>> no interest in tenant networks.  Instead, they want all VMs hooked up
> >>>>>> directly to a pre-defined network in their environment.  This use case
> >>>>>> is actually popular for private OpenStack deployments.
> >
> > [...]
> >
> >>> I had to read this several times, but maybe I understand it now.  Let me
> >>> recap for verification.
> >>>
> >>> A "tenant network" is what OVN calls a logical network.  OVN can
> >>> construct it as an L2-over-L3 overlay with STT or Geneve or whatever.
> >>> Tenant networks can be connected to physical networks via OVN gateways.
> >>>
> >>> A "provider network" is just a physical L2 network (possibly
> >>> VLAN-tagged).  In such a network, on the sending side, OVN would rely on
> >>> normal L2 switching for packets to reach their destinations, and on the
> >>> receiving side, OVN would not have a reliable way to determine the
> >>> source of a packet (it would have to infer it from the source MAC).  Is
> >>> that accurate?
> >>
>

While this is correct, it is also restrictive, as it implies that a
"provider network" is just a physical L2 segment in the data centre, and
therefore that logical ports on a provider network would be pretty much
pass-through to the physical network. While it is correct that they might be
mapped to OVS ports on a bridge doing plain L2 forwarding onto a physical
network, this does not mean that L2 forwarding is the only thing one can do
on provider networks.

A provider network is, from the Neutron perspective, exactly like any other
logical network, including tenant networks. What changes are the bindings (or
mappings; I'm not sure of the correct OVN terminology). These bindings
define three aspects:
1 - the transport type (VLAN, GRE, STT, VXLAN, etc.)
2 - the physical network, if any
3 - the segmentation id on the physical network, if any

For tenant networks, bindings are implicit and depend on what the control
plane defaults to. As Ben was suggesting, this could be STT or Geneve.
For provider networks, these bindings are explicit, as the admin defines
them. For instance: "I want this network to be mapped to VLAN 666 on physical
network MEH."
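
To make that concrete, this is roughly how those explicit bindings look on
the Neutron API today, via the "provider" extension attributes. A minimal
sketch using python-neutronclient; the network name, credentials and
endpoint are made-up placeholders:

    # Create a provider network bound to VLAN 666 on physical network MEH.
    from neutronclient.v2_0 import client

    # Placeholder credentials; adjust for your deployment.
    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    body = {'network': {
        'name': 'meh-net',                    # made-up name
        'provider:network_type': 'vlan',      # 1 - transport type
        'provider:physical_network': 'MEH',   # 2 - physical network
        'provider:segmentation_id': 666,      # 3 - segmentation id
    }}
    neutron.create_network(body)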

In practical terms, with provider networks the control plane must honour the
specification made in the Neutron request concerning transport bindings for
the logical network. If it can't honour these mappings, for instance because
it does not support the selected transport type, it must return an error.
Nevertheless, the control plane still treats provider networks like any
other network: you can run services like DHCP on them (even if that is often
not a great idea), apply security groups to their ports, uplink them to
logical routers, and so on.
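
As a rough illustration of that error path, validation might look like the
hypothetical sketch below; the function, names and supported set are
assumptions of mine, not actual OVN or Neutron code:

    # Hypothetical binding validation, not actual OVN/Neutron code.
    SUPPORTED_TRANSPORTS = {'vlan', 'flat', 'stt', 'geneve'}  # assumed set

    def validate_binding(network_type, physical_network, segmentation_id):
        if network_type not in SUPPORTED_TRANSPORTS:
            raise ValueError("unsupported transport type: %s" % network_type)
        if network_type == 'vlan':
            if physical_network is None:
                raise ValueError("VLAN bindings require a physical network")
            if not 1 <= segmentation_id <= 4094:
                raise ValueError("invalid VLAN id: %s" % segmentation_id)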



> >> Yes, all of that matches my understanding of things.
> >>
> >> I worry that not being able to explain it well might mean I don't have
> >> it all right, so I hope some other Neutron devs chime in to confirm, as
> >> well.
> >
> > OK, let's go on then.
> >
> > Some more recap, on the reason why this would need to be in OVN.  If I'm
> > following, that's because users are likely to want to have VMs that
> > connect both to provider networks and to tenant networks on the same
> > hypervisor, and that means that they need Neutron plugins for each of
> > those, and there's naturally a reluctance to install the bits for two
> > different plugins on every hypervisor.  Is that correct?  If it is, then
> > I'll go back and reread the ideas we had elsewhere in this thread; I'm
> > better equipped to understand them now.
>

I believe people would love the idea of being able to deploy multiple
plugins in the same Neutron deployment and handle some kinds of networks
with one plugin and other kinds with another plugin.
Unfortunately, Neutron cannot quite do that yet, unless we add some
machinery into the ML2 plugin.

One reason I see for having them in OVN is that these provider networks
are not isolated from the rest of the logical network topology. You should
still be able to apply security groups to them or uplink them to a logical
router, as per my previous comment. This is not necessarily impossible with
different plugins, but it would probably be more efficient if handled
entirely through OVN.


>
> That is correct, yes.
>
> --
> Russell Bryant
>
