[ovs-dev] [PATCH] ovn: Design and Schema changes for Container integration.

Russell Bryant rbryant at redhat.com
Fri Mar 20 20:56:26 UTC 2015


On 03/19/2015 10:31 AM, Gurucharan Shetty wrote:
> The design came about after input from and discussions with multiple
> people, including (in alphabetical order) Aaron Rosen, Ben Pfaff,
> Ganesan Chandrashekhar, Justin Pettit, Russell Bryant and Somik Behera.
> 
> Signed-off-by: Gurucharan Shetty <gshetty at nicira.com>
> ---
>  ovn/CONTAINERS.OpenStack.md |  114 ++++++++++++++++++++++++++
>  ovn/automake.mk             |    4 +-
>  ovn/ovn-architecture.7.xml  |  186 ++++++++++++++++++++++++++++++++++++++-----
>  ovn/ovn-nb.ovsschema        |    6 ++
>  ovn/ovn-nb.xml              |   49 ++++++++++--
>  ovn/ovn.ovsschema           |    6 ++
>  ovn/ovn.xml                 |   58 ++++++++++----
>  7 files changed, 380 insertions(+), 43 deletions(-)
>  create mode 100644 ovn/CONTAINERS.OpenStack.md
> 
> diff --git a/ovn/CONTAINERS.OpenStack.md b/ovn/CONTAINERS.OpenStack.md
> new file mode 100644
> index 0000000..58b5588
> --- /dev/null
> +++ b/ovn/CONTAINERS.OpenStack.md
> @@ -0,0 +1,114 @@
> +Integration of Containers with OVN and OpenStack
> +------------------------------------------------
> +
> +In a multi-tenant environment, creating containers directly on hypervisors
> +has many risks.  A container application can break out and make changes to
> +the Open vSwitch flows and thus impact other tenants.  This document
> +describes the creation of containers inside VMs and how they can be made
> +part of logical networks securely.  The created logical network can include
> +VMs, containers and physical machines as endpoints.  To better understand
> +the proposed integration of containers with OVN and OpenStack, this document
> +describes the end-to-end workflow with an example.
> +
> +* An OpenStack tenant creates a VM (say VM-A) with a single network interface
> +that belongs to a management logical network.  The VM is meant to host
> +containers.  OpenStack Nova chooses the hypervisor on which VM-A is created.
> +
> +* A Neutron port may have been created in advance and passed in to Nova
> +with the request to create a new VM.  If not, Nova will issue a request
> +to Neutron to create a new port.  The ID of the logical port from
> +Neutron will also be used as the vif-id for the virtual network
> +interface (VIF) of VM-A.
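
For concreteness, the pre-created-port variant of this step would look
something like the following (network, image, and flavor names are
placeholders):

    neutron port-create mgmt-net
    nova boot --image <image> --flavor <flavor> \
        --nic port-id=<port-uuid-from-above> vm-a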
> +
> +* When VM-A is created on a hypervisor, its VIF gets added to the
> +Open vSwitch integration bridge.  This creates a row in the Interface table
> +of the Open_vSwitch database.  As explained in the [IntegrationGuide.md],
> +the vif-id associated with the VM network interface gets added to the
> +external_ids:iface-id column of the newly created row in the Interface table.
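
In ovs-vsctl terms (following IntegrationGuide.md), that integration
amounts to something like:

    ovs-vsctl add-port br-int <vif> -- \
        set Interface <vif> external_ids:iface-id=<neutron-port-uuid>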
> +
> +* Since VM-A belongs to a logical network, it gets an IP address.  This
> +IP address is then used to reach VM-A to spawn containers inside it
> +(either manually or through container orchestration systems) and to
> +monitor the health of the created containers.
> +
> +* The vif-id associated with the VM's network interface can be obtained by
> +making a call to Neutron using tenant credentials.
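
With the neutron CLI, for example, something along these lines should
work, filtering ports by the Nova instance UUID:

    neutron port-list --device-id <vm-a-instance-uuid> -c id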
> +
> +* This flow assumes a component called a "container network plugin".
> +If you take Docker as an example for containers, you could envision
> +the plugin as either a wrapper around Docker or a feature of Docker itself
> +that understands how to perform part of this workflow to get a container
> +connected to a logical network managed by Neutron.  The rest of the flow
> +refers to this logical component that does not yet exist as the
> +"container network plugin".
> +
> +* All the calls to Neutron will need tenant credentials.  These calls can
> +either be made from inside the tenant VM as part of a container network plugin
> +or from outside the tenant VM (if the tenant is not comfortable using temporary
> +Keystone tokens from inside the tenant VMs).  For simplicity, this document
> +explains the workflow using the former method.
> +
> +* The container hosting VM will need Open vSwitch installed in it.  The only
> +work for Open vSwitch inside the VM is to tag network traffic coming from
> +containers.
> +
> +* When a container needs to be created inside the VM with a container
> +network interface that is expected to be attached to a particular logical
> +switch, the network plugin in that VM chooses an unused VLAN tag.  (The
> +tag only needs to be unique inside that VM, which limits the number of
> +container interfaces in a single VM to 4096.)  This VLAN tag is stripped
> +out in the hypervisor by OVN and is only useful as context (or metadata)
> +for OVN.
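
Inside the VM, attaching a container interface then reduces to ordinary
ovs-vsctl usage; e.g., for a veth pair whose VM-side end is veth-c1 and
a locally chosen tag of 42 (bridge and port names illustrative):

    ovs-vsctl add-port br-int veth-c1 tag=42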
> +
> +* The container network plugin then makes a call to Neutron to create a
> +logical port.  In addition to all the inputs currently needed to create
> +a port in Neutron, it sends the vif-id and the VLAN tag as inputs.
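
The API shape is obviously up to the Neutron plugin, but as a purely
hypothetical sketch it could ride on the existing binding:profile
extension (the key names here are made up):

    neutron port-create mgmt-net --binding:profile type=dict \
        parent_name=<vm-a-vif-id>,tag=42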
> +
> +* Neutron in turn will verify that the vif-id belongs to the tenant in
> +question and then use the OVN-specific plugin to create a new row in the
> +Logical_Port table of the OVN Northbound Database.  Neutron responds with
> +an IP address and MAC address for that network interface.  Neutron thus
> +becomes the IPAM system, providing IP and MAC addresses that are unique
> +across VMs and containers in the same logical network.
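
Conceptually, the resulting Logical_Port row would then carry the parent
VIF and the tag alongside the usual columns; my reading of the proposed
schema is something like:

    name        : <neutron-port-uuid>
    parent_name : <vm-a-vif-id>
    tag         : 42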

It's expected that containers will be created and destroyed at a much
faster rate than typically experienced with just VMs.  With that in
mind, seeing Neutron REST API calls in this flow may make people worry
about the increased setup time.  It seems to me that logical ports could
optionally be reused to avoid this cost at the creation of every
container.  Can you think of a reason that would be problematic?

If that sounds OK, how about this as some additional text here:

* The Neutron API call here to create a logical port for the container
could add a significant amount of time to container creation.
However, an optimization is possible here.  Logical ports could be
created in advance and reused by the container system doing container
orchestration.  Additional Neutron API calls would only be needed if the
port needs to be attached to a different logical network.
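
To make that concrete, rebinding a pre-created port to a new container
could then be a single update rather than a create, reusing the
hypothetical keys from the sketch above:

    neutron port-update <port-uuid> --binding:profile type=dict \
        parent_name=<vm-a-vif-id>,tag=<new-tag>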

> +* When a container is eventually deleted, the network plugin in that VM
> +will make a call to Neutron to delete that port.  Neutron in turn will
> +delete the entry in the Logical_Port table of the OVN Northbound Database.

If the above text is added, I would change "will make a call" to "may
make a call".

-- 
Russell Bryant


