[ovs-dev] [RFC] Design for OVN Kubernetes integration.

Han Zhou zhouhan at gmail.com
Tue Nov 10 18:50:10 UTC 2015


On Tue, Nov 10, 2015 at 8:19 AM, Gurucharan Shetty <shettyg at nicira.com>
wrote:

> On Mon, Nov 9, 2015 at 11:55 PM, Han Zhou <zhouhan at gmail.com> wrote:
> > Hi Gurucharan,
> >
> > Thanks for your work!
> >
> > On Wed, Oct 21, 2015 at 2:53 PM, Gurucharan Shetty <shettyg at nicira.com>
> > wrote:
> >>
> >>
> >> +
> >> +OVN provides network virtualization to containers.  OVN's integration
> >> with
> >> +Kubernetes works in two modes - the "underlay" mode or the "overlay"
> >> mode.
> >> +
> >
> >
> > Could you help briefly describe what are the scenario, pros & cons of
> each
> > mode?
>
> OVN is a pure networking story and the idea is to integrate with as
> many use cases as possible.
>
> Some use cases for the underlay mode.
>
> 1. One big use case (after speaking to many potential and current
> deployers of containers) is the ability to have seamless connectivity
> between your existing services (in VMs and Physical machines) with
> your new applications running in your containers in a k8 cluster. This
> means that you need a way for your containers, running in a k8
> cluster on top of VMs, to access your physical machines and other VMs.
> Doing something like this in a secure way needs the support for
> "underlay" mode.
>
> 2. If you take GCE, the way they are able to provide a pool of IP
> addresses to your VMs, is via the support in their underlay. i.e. they
> create tunnels in their underlay, but don't create tunnels inside
> their VMs. This lets them do seamless integration with external
> loadbalancers for north-south traffic as well as east-west traffic for
> other services. With OVN, we provide the same richness.
>
> 3. k8s is not inherently multi-tenant (yet). If you have an enterprise
> OpenStack cloud, it already provides multi-tenancy. If you use that as
> your IAAS layer, then you can have multiple k8 clusters for different
> tenants and not worry about overlapping ip addresses inside a single
> tenant. In this mode, you don't have to worry about overstepping on
> your compute resources by multiple container schedulers. So in the
> same cloud, you can have k8, Mesos, Swarm etc running in parallel.
>
> 4. If you consider containers to be inherently insecure (many people
> currently do), it makes sense to only run them inside VMs and not on
> baremetal. This is because even if a container app breaks out, they
> don't have access to your entire datacenter.
>
>
> Use case for the overlay mode.
>
> If you just want to run your cluster in a public cloud, "underlay"
> mode is out of question. OVN still has a good role to play as it can
> provide network connectivity, light weight security, you can enforce
> policies for clean separation between dev/qa workloads etc.
>
>
This makes it much clearer. Thanks, and I would appreciate it if this
were captured in the document.

In addition, I would suggest clearly explaining the terminology "underlay"
and "overlay" in the document. "Underlay" mode in this context actually
means that the k8s ports run at the same logical layer as the nodes that
host the containers. The host nodes themselves can run in "overlay"
networks provisioned by OVN. Without an explicit explanation, this may
confuse first-time readers.


>
> >
> >>
> >>
> >> +We then create 2^(32-x) logical ports for that logical switch (with the
> >> parent
> >> +port being the VIF_ID of the hosting VM).  On the worker node, for each
> >> +logical port we write data into the local Open vSwitch database to
> >> +act as a cache of the IP address, its associated MAC address and port
> >> UUID.
> >> +
> >> +The value 'x' chosen depends on the number of CPUs and memory available
> >> +in the VM.
> >
> >
> > Well, this 'x' might be hard to pre-define. We might end up having to
> > reserve subnets big enough for a host to be able to host many small
> > pods, but that would mean wasting IP space.
> > Of course it is not an issue in deployments where IP space is adequate.
>
>
> Writing a network plugin is quite easy for k8. So people with specific
> deployment models will write their own network plugins.
> For this case, the thought process is that OVN IP addresses are
> virtual. So IP addresses are not really in short supply.
>
This suggests the best practice is to run the host nodes themselves in
overlay mode (with virtual IPs). It makes sense because otherwise, if the
nodes run in bridged mode, there may be no need to deploy OVN or any other
overlay-based SDN in the first place.
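For intuition, the 2^(32-x) sizing above can be sketched in a few lines of
Python. This is only an illustration of the arithmetic; the CIDR values and
the helper name are my own assumptions, not part of the design doc:

```python
import ipaddress

def node_subnet_plan(cluster_cidr: str, x: int):
    """Split a cluster CIDR into per-node /x subnets.

    Each node gets one /x logical switch; the design doc's
    2^(32 - x) figure is the number of addresses (and hence
    candidate logical ports) available on that switch.
    """
    cluster = ipaddress.ip_network(cluster_cidr)
    per_node = list(cluster.subnets(new_prefix=x))
    addresses_per_node = 2 ** (32 - x)
    return per_node, addresses_per_node

# Example: a /16 cluster CIDR carved into /24 node subnets.
subnets, per_node_addrs = node_subnet_plan("10.0.0.0/16", 24)
print(len(subnets), per_node_addrs)  # 256 node subnets, 256 addresses each
```

Picking a smaller x (say /23) doubles the addresses per node but halves the
number of nodes the cluster CIDR can hold, which is exactly the trade-off
being discussed.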

>


> >
> >>
> >> +Since one of the k8 requirements is that each pod in a cluster is able
> to
> >> +talk to every other pod in the cluster via IP address, the above
> >> architecture
> >> +with interconnected logical switches via a logical router acts as the
> >
> >
> > This idea sounds good, but I have a concern about scalability. For
> > example, 1000 logical switches (for 1000 hosts in a cluster) connect
> > to a single logical router. Would this scale?
>
> I don't know the scale implications as we are just getting started. k8
> talks about a 100-node cluster as a supportable scale (this was
> reiterated by k8s developers yesterday at KubeCon; they intend to
> increase the scale goals, but they don't want to promise the moon).
> In yesterday's KubeCon talk, eBay mentioned that they too have a
> single router connected to multiple logical switches in a large
> cluster (~1000 nodes). They did talk about scale implications, but
> it is not really clear where the bottleneck is.
>
>
Let's keep this in mind and see what scale we can achieve with OVN :)
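For intuition, the any-pod-to-any-pod property of this topology can be
modeled in a short Python sketch. The class names are hypothetical
illustrations, not OVN's actual schema or API:

```python
# Toy model of the topology discussed above: one logical router
# interconnecting one logical switch per node in the cluster.

class LogicalRouter:
    def __init__(self):
        self.switches = []

class LogicalSwitch:
    def __init__(self, name, router):
        self.name = name
        self.router = router
        router.switches.append(self)

def reachable(pod_a, pod_b):
    """Two pods can talk if they share a switch, or if their switches
    hang off the same logical router (one routed hop between them)."""
    sw_a, sw_b = pod_a["switch"], pod_b["switch"]
    return sw_a is sw_b or sw_a.router is sw_b.router

router = LogicalRouter()
switches = [LogicalSwitch(f"node-{i}", router) for i in range(1000)]
pod1 = {"switch": switches[0]}
pod2 = {"switch": switches[999]}
print(reachable(pod1, pod2))  # True: the shared router spans the cluster
```

The sketch also makes the scalability concern visible: all 1000 switches
attach to the single `router` object, so every inter-node flow depends on
that one logical router.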

Acked-by: Han Zhou <zhouhan at gmail.com>


