[ovs-dev] [RFC PATCH ovn 02/10] ovn-architecture: Add documentation for OVN interconnection feature.

Han Zhou zhouhan at gmail.com
Fri Sep 27 22:34:17 UTC 2019


From: Han Zhou <hzhou8 at ebay.com>

Signed-off-by: Han Zhou <hzhou8 at ebay.com>
---
 ovn-architecture.7.xml | 107 ++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 106 insertions(+), 1 deletion(-)

diff --git a/ovn-architecture.7.xml b/ovn-architecture.7.xml
index 7966b65..56b2167 100644
--- a/ovn-architecture.7.xml
+++ b/ovn-architecture.7.xml
@@ -1246,7 +1246,14 @@
   <p>
     <dfn>Distributed gateway ports</dfn> are logical router patch ports
     that directly connect distributed logical routers to logical
-    switches with localnet ports.
+    switches with external connections.
+  </p>
+
+  <p>
+    There are two types of external connections: first, a connection to a
+    physical network through a localnet port; second, a connection to
+    another OVN deployment, which is introduced in the section "OVN
+    Deployments Interconnection" below.
   </p>
 
   <p>
@@ -1801,6 +1808,104 @@
     </li>
   </ol>
 
+  <h2>OVN Deployments Interconnection (TODO)</h2>
+
+  <p>
+    It is not uncommon for an operator to deploy multiple OVN clusters, for
+    two main reasons.  First, an operator may prefer to deploy one OVN
+    cluster per availability zone, e.g. in different physical regions, to
+    avoid a single point of failure.  Second, there is always an upper
+    limit to how far a single OVN control plane can scale.
+  </p>
+
+  <p>
+    Although the control planes of the different availability zones (AZs)
+    are independent of each other, workloads from different AZs may need
+    to communicate across the zones.  The OVN interconnection feature
+    provides a native way to interconnect different AZs by L3 routing
+    through transit overlay networks between logical routers of different
+    AZs.
+  </p>
+
+  <p>
+    A global OVN Interconnection Northbound database is introduced for the
+    operator (probably through CMS systems) to configure transit logical
+    switches that connect logical routers from different AZs.  A transit
+    switch is similar to a regular logical switch, but it is used for
+    interconnection purposes only.  Typically, each transit switch can be
+    used to connect all logical routers that belong to the same tenant
+    across all AZs.
+  </p>
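+
+  <p>
+    As a minimal sketch, a transit switch could be created in the global
+    interconnection northbound database roughly as follows.  The
+    <code>ovn-ic-nbctl</code> utility, its <code>ts-add</code> command and
+    the name <code>ts1</code> are assumptions for illustration only, not a
+    finalized interface:
+  </p>
+
+  <pre>
+# Create a transit switch "ts1" in the interconnection NB database.
+# Each AZ's ovn-ic is expected to propagate it into that AZ's own NB DB.
+ovn-ic-nbctl ts-add ts1
+  </pre>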
+
+  <p>
+    A dedicated daemon process, <code>ovn-ic</code>, the OVN
+    interconnection controller, runs in each AZ.  It consumes this data
+    and populates the corresponding logical switches in its own AZ's
+    northbound database, so that logical routers can be connected to the
+    transit switch by creating patch port pairs in that northbound
+    database, as shown in the sketch below.  Any router port connected to
+    a transit switch is considered an interconnection port, which will be
+    exchanged between AZs.
+  </p>
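+
+  <p>
+    For example, once the transit switch shows up in an AZ's own
+    northbound database, the CMS of that AZ could connect a logical
+    router to it in the usual way, with a logical router port and a peer
+    logical switch port of type <code>router</code>.  The names and
+    addresses below are only illustrative assumptions:
+  </p>
+
+  <pre>
+# In this AZ's NB database: add a router port on lr1 for the transit
+# switch, then a port on ts1 peered with that router port.
+ovn-nbctl lrp-add lr1 lrp-lr1-ts1 aa:aa:aa:aa:aa:01 169.254.100.1/24
+ovn-nbctl lsp-add ts1 lsp-ts1-lr1 \
+    -- lsp-set-addresses lsp-ts1-lr1 router \
+    -- lsp-set-type lsp-ts1-lr1 router \
+    -- lsp-set-options lsp-ts1-lr1 router-port=lrp-lr1-ts1
+  </pre>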
+
+  <p>
+    Physically, when workloads from different AZs communicate, packets
+    need to go through multiple hops: source chassis, source gateway,
+    destination gateway and destination chassis.  All these hops are
+    connected through tunnels so that the packets never leave the overlay
+    networks.  A distributed gateway port is required to connect the
+    logical router to a transit switch, with a gateway chassis specified,
+    so that the traffic can be forwarded through the gateway chassis.
+  </p>
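+
+  <p>
+    Continuing the sketch above, the router port connected to the transit
+    switch could be made a distributed gateway port by assigning it a
+    gateway chassis.  <code>gw1</code> is an assumed name for a chassis
+    dedicated as an interconnection gateway:
+  </p>
+
+  <pre>
+# Make lrp-lr1-ts1 a distributed gateway port pinned to chassis "gw1"
+# (priority 1), so interconnection traffic is forwarded through gw1.
+ovn-nbctl lrp-set-gateway-chassis lrp-lr1-ts1 gw1 1
+  </pre>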
+
+  <p>
+    A global OVN Interconnection Southbound database is introduced for
+    exchanging control plane information between the AZs.  The data in
+    this database is populated and consumed by the <code>ovn-ic</code>
+    of each AZ.  The main information in this database includes:
+  </p>
+
+  <ul>
+    <li>
+      Datapath bindings for transit switches, which mainly contain the tunnel
+      keys generated for each transit switch.  Separate key ranges are reserved
+      for transit switches so that they will never conflict with any tunnel
+      keys locally assigned for datapaths within each AZ.
+    </li>
+    <li>
+      Availability zones, which are registered by <code>ovn-ic</code>
+      from each AZ.
+    </li>
+    <li>
+      Gateways.  Each AZ specifies the chassis that are supposed to work
+      as interconnection gateways, and the <code>ovn-ic</code> populates
+      this information in the interconnection southbound DB.  The
+      <code>ovn-ic</code> of each of the other AZs then learns these
+      gateways and adds them to its own southbound DB as chassis records
+      (see the sketch after this list).
+    </li>
+    <li>
+      Port bindings for logical switch ports created on the transit switch.
+      Each AZ maintains its logical router to transit switch connections
+      independently, but <code>ovn-ic</code> automatically populates
+      local port bindings on transit switches to the global interconnection
+      southbound DB, and learns remote port bindings from other AZs back
+      into its own northbound and southbound DBs, so that logical flows
+      can be produced and then translated to OVS flows locally, which
+      finally enables data plane communication.
+    </li>
+  </ul>
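+
+  <p>
+    As mentioned in the gateway item above, each AZ needs a way to
+    dedicate chassis as interconnection gateways.  One possible sketch,
+    assuming a chassis-level knob named <code>ovn-is-interconn</code> in
+    the local Open vSwitch database (an assumption for illustration, not
+    a finalized interface):
+  </p>
+
+  <pre>
+# On a gateway chassis: advertise this chassis as an interconnection
+# gateway so that the local ovn-ic can register it in the global
+# interconnection southbound DB.
+ovs-vsctl set open_vswitch . external_ids:ovn-is-interconn=true
+  </pre>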
+
+  <p>
+    The tunnel keys for transit switch datapaths and related port bindings
+    must be agreed upon across all AZs.  This is ensured by generating and
+    storing the keys in the global interconnection southbound database.
+    The <code>ovn-ic</code> of any AZ can allocate a key, and race
+    conditions are avoided by enforcing a unique index on the
+    corresponding column in the database.
+  </p>
+
+  <p>
+    Once each AZ's NB and SB databases are populated with the
+    interconnection switches and ports, with the tunnel keys agreed upon,
+    data plane communication between the AZs is established.
+  </p>
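+
+  <p>
+    As an illustrative check, assuming the behavior described above and
+    the names used in the earlier sketches, both the locally created port
+    and the ports learned from other AZs should then be visible on the
+    transit switch in this AZ's own northbound database:
+  </p>
+
+  <pre>
+# List the contents of transit switch ts1 in this AZ's NB database; it
+# should show the local port plus ports learned from other AZs by ovn-ic.
+ovn-nbctl show ts1
+  </pre>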
+
   <h2>Native OVN services for external logical ports</h2>
 
   <p>
-- 
2.1.0


