[ovs-dev] [RFC 4/4] Introduce an openvswitch driver for Docker networking.

Gurucharan Shetty shettyg@nicira.com
Mon Jun 22 09:15:50 UTC 2015


Docker committed experimental support for multi-host
networking yesterday.  This commit adds a driver that
works with that experimental support.  Since that Docker
code is not part of any official Docker release yet,
this patch is sent as an RFC.

Signed-off-by: Gurucharan Shetty <gshetty@nicira.com>
---
 INSTALL.Docker.md                       | 206 ++++++++----
 ovn/utilities/automake.mk               |   3 +-
 ovn/utilities/ovn-docker-overlay-driver | 538 ++++++++++++++++++++++++++++++++
 rhel/openvswitch-fedora.spec.in         |   1 +
 4 files changed, 678 insertions(+), 70 deletions(-)
 create mode 100755 ovn/utilities/ovn-docker-overlay-driver

diff --git a/INSTALL.Docker.md b/INSTALL.Docker.md
index 9e14043..dbc9699 100644
--- a/INSTALL.Docker.md
+++ b/INSTALL.Docker.md
@@ -1,109 +1,177 @@
 How to Use Open vSwitch with Docker
 ====================================
 
-This document describes how to use Open vSwitch with Docker 1.2.0 or
+This document describes how to use Open vSwitch with Docker 1.7.0 or
 later.  This document assumes that you installed Open vSwitch by following
 [INSTALL.md] or by using the distribution packages such as .deb or .rpm.
 Consult www.docker.com for instructions on how to install Docker.
 
-Limitations
------------
-Currently there is no native integration of Open vSwitch in Docker, i.e.,
-one cannot use the Docker client to automatically add a container's
-network interface to an Open vSwitch bridge during the creation of the
-container.  This document describes addition of new network interfaces to an
-already created container and in turn attaching that interface as a port to an
-Open vSwitch bridge.  If and when there is a native integration of Open vSwitch
-with Docker, the ovs-docker utility described in this document is expected to
-be retired.
+Docker 1.7.0 comes with support for multi-host networking.  Integration of
+Docker networking and Open vSwitch can be achieved via OVN (Open Virtual
+Network).
 
 Setup
------
-* Create your container, e.g.:
+=====
+
+OVN provides network virtualization to containers.  OVN can create
+logical networks amongst containers running on multiple hosts.  To better
+explain OVN's integration with Docker, this document walks through the
+end-to-end workflow with an example.
+
+* Start an IPAM server.
+
+For multi-host networking, you will need an entity that provides consistent
+IP and MAC addresses to your container interfaces.  One way to achieve this
+is to use an IPAM server that integrates with OVN's Northbound database.
+OpenStack Neutron already has an integration with OVN's Northbound database
+via an OVN plugin, and this document uses it as an example.
+
+Installing OpenStack Neutron with the OVN plugin from scratch on a server is
+outside the scope of this document.  Instead, this document uses a
+Docker image that comes pre-packaged with OpenStack Neutron and OVN's daemons
+as an example.
+
+Start your IPAM server on any host.
+
+```
+docker run -d --net=host --name ipam openvswitch/ipam:v2.4.90 /sbin/ipam
+```
+
+Once you start your container, you can run 'docker logs -f ipam' to check
+whether the ipam container has started properly.  You should see log messages
+of the following form, indicating a successful start.
+
+```
+oslo_messaging._drivers.impl_rabbit [-] Connecting to AMQP server on localhost:5672
+neutron.wsgi [-] (670) wsgi starting up on http://0.0.0.0:9696/
+INFO oslo_messaging._drivers.impl_rabbit [-] Connected to AMQP server on 127.0.0.1:5672
+```
+
+Note down the IP address of the host.  The remainder of this document
+refers to this IP address as $IPAM_IP.
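+
+If you are scripting the setup, here is a minimal sketch of one way to
+capture that address (assuming a single-homed Linux host; the command and
+variable name are illustrative, not part of this patch):
+
+```
+IPAM_IP=$(hostname -I | awk '{print $1}')
+```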
+
+* One-time setup.
+
+On each host where you plan to spawn your containers, you will need to
+create an Open vSwitch integration bridge.
+
+```
+ovn-integrate create-integration-bridge
+```
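+
+The driver described below expects this integration bridge to be named
+'br-int'.  As a quick check that the bridge was created, you can run:
+
+```
+ovs-vsctl list-br
+```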
+
+You will also need to set the IPAM server's IP address.
+
+```
+ovn-integrate set-ipam $IPAM_IP
+```
+
+You will also need to provide the local IP address via which other hosts
+can reach this host.  This IP address is referred to as the local tunnel endpoint.
+
+```
+ovn-integrate set-tep $LOCAL_IP
+```
+
+By default, OVN uses Geneve tunnels for overlay networks.  If you prefer to use
+STT tunnels (which are known for high throughput capabilities when TSO is
+turned on in your NICs), you can run the following command.  (For STT
+tunnels to work, you will need an STT kernel module loaded.  The STT kernel
+module is not part of the upstream Linux kernel.)
 
 ```
-% docker run -d ubuntu:14.04 /bin/sh -c \
-"while true; do echo hello world; sleep 1; done"
+ovn-integrate set-encap-type stt
 ```
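+
+The chosen encapsulation type is recorded in the local Open vSwitch database
+(the driver below reads the same key).  As a quick sanity check, you can run:
+
+```
+ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-type
+```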
 
-The above command creates a container with one network interface 'eth0'
-and attaches it to a Linux bridge called 'docker0'.  'eth0' by default
-gets an IP address in the 172.17.0.0/16 space.  Docker sets up iptables
-NAT rules to let this interface talk to the outside world.  Also since
-it is connected to 'docker0' bridge, it can talk to all other containers
-connected to the same bridge.  If you prefer that no network interface be
-created by default, you can start your container with
-the option '--net=none', e,g.:
+And finally, start the OVN controller.
 
 ```
-% docker run -d --net=none ubuntu:14.04 /bin/sh -c \
-"while true; do echo hello world; sleep 1; done"
+ovn-controller --pidfile --detach -vconsole:off --log-file
 ```
 
-The above commands will return a container id.  You will need to pass this
-value to the utility 'ovs-docker' to create network interfaces attached to an
-Open vSwitch bridge as a port.  This document will reference this value
-as $CONTAINER_ID in the next steps.
+* Start the Open vSwitch network driver.
 
-* Add a new network interface to the container and attach it to an Open vSwitch
-  bridge.  e.g.:
+By default, Docker uses the Linux bridge for networking, but it has support
+for external drivers.  To use Open vSwitch instead of the Linux bridge,
+you will need to start the Open vSwitch driver.
 
-`% ovs-docker add-port br-int eth1 $CONTAINER_ID`
+The Open vSwitch driver uses Python's Flask module to listen for
+Docker's networking API calls.  The driver also uses OpenStack's
+python-neutronclient library.  So, if your host does not have Python's
+Flask module or python-neutronclient, install them with:
 
-The above command will create a network interface 'eth1' inside the container
-and then attaches it to the Open vSwitch bridge 'br-int'.  This is done by
-creating a veth pair.  One end of the interface becomes 'eth1' inside the
-container and the other end attaches to 'br-int'.
+```
+easy_install -U pip
+pip install python-neutronclient
+pip install Flask
+```
 
-The script also lets one to add IP address, MAC address, Gateway address and
-MTU for the interface.  e.g.:
+Start the Open vSwitch driver on every host where you plan to create your
+containers.
 
 ```
-% ovs-docker add-port br-int eth1 $CONTAINER_ID --ipaddress=192.168.1.2/24 \
---macaddress=a2:c3:0d:49:7f:f8 --gateway=192.168.1.1 --mtu=1450
+ovn-docker-overlay-driver
 ```
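+
+The driver advertises itself to Docker through a spec file and then serves
+the plugin API over HTTP.  As a sketch of one way to verify that it is up
+(the path and port below are the ones used by this driver):
+
+```
+cat /usr/share/docker/plugins/openvswitch.spec
+curl -i -H 'Content-Type: application/json' -X POST \
+    http://localhost:5000/Plugin.Activate
+```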
 
-* A previously added network interface can be deleted.  e.g.:
+Docker has built-in primitives that closely match OVN's logical switch
+and logical port concepts.  Please consult Docker's documentation for
+all the possible commands.  Here are some examples.
 
-`% ovs-docker del-port br-int eth1 $CONTAINER_ID`
+* Create your logical switch.
 
-All the previously added Open vSwitch interfaces inside a container can be
-deleted.  e.g.:
+To create a logical switch with name 'foo', run:
 
-`% ovs-docker del-ports br-int $CONTAINER_ID`
+```
+docker network create -d openvswitch foo
+```
 
-It is important that the same $CONTAINER_ID be passed to both add-port
-and del-port[s] commands.
+* List your logical switches.
 
-* More network control.
+```
+docker network ls
+```
 
-Once a container interface is added to an Open vSwitch bridge, one can
-set VLANs, create Tunnels, add OpenFlow rules etc for more network control.
-Many times, it is important that the underlying network infrastructure is
-plumbed (or programmed) before the application inside the container starts.
-To handle this, one can create a micro-container, attach an Open vSwitch
-interface to that container, set the UUIDS in OVSDB as mentioned in
-[IntegrationGuide.md] and then program the bridge to handle traffic coming out
-of that container. Now, you can start the main container asking it
-to share the network of the micro-container. When your application starts,
-the underlying network infrastructure would be ready. e.g.:
+* Create your logical port.
+
+To create a logical port with name 'db' in the network 'foo', run:
 
 ```
-% docker run -d --net=container:$MICROCONTAINER_ID ubuntu:14.04 /bin/sh -c \
-"while true; do echo hello world; sleep 1; done"
+docker service publish db.foo
 ```
 
-Please read the man pages of ovs-vsctl, ovs-ofctl, ovs-vswitchd,
-ovsdb-server and ovs-vswitchd.conf.db etc for more details about Open vSwitch.
+* List all your logical ports.
+
+```
+docker service ls
+```
 
-Docker networking is quite flexible and can be used in multiple ways.  For more
-information, please read:
-https://docs.docker.com/articles/networking
+* Attach your logical port to a container.
 
-Bug Reporting
--------------
+```
+docker service attach CONTAINER_ID db.foo
+```
 
-Please report problems to bugs at openvswitch.org.
+* Detach your logical port from a container.
+
+```
+docker service detach CONTAINER_ID db.foo
+```
+
+* Delete your logical port.
+
+```
+docker service unpublish db.foo
+```
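+
+Putting these steps together, a minimal end-to-end sketch looks like the
+following (the 'busybox' image and the CONTAINER_ID variable are
+illustrative):
+
+```
+docker network create -d openvswitch foo
+docker service publish db.foo
+CONTAINER_ID=$(docker run -itd busybox)
+docker service attach $CONTAINER_ID db.foo
+```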
+
+* Running commands directly on the IPAM server (bypassing Docker).
+
+Since the above example shows integration with an OpenStack Neutron
+IPAM server, one can directly invoke 'neutron' commands to fetch
+information about logical switches and ports.  e.g.:
+
+```
+export OS_URL="http://$IPAM_IP:9696/"
+export OS_AUTH_STRATEGY="noauth"
+neutron net-list
+```
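+
+Logical ports created with 'docker service publish' appear as Neutron ports,
+so, as an illustrative example, they can be listed the same way:
+
+```
+neutron port-list
+```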
 
 [INSTALL.md]:INSTALL.md
-[IntegrationGuide.md]:IntegrationGuide.md
diff --git a/ovn/utilities/automake.mk b/ovn/utilities/automake.mk
index 1a1a159..618e051 100644
--- a/ovn/utilities/automake.mk
+++ b/ovn/utilities/automake.mk
@@ -5,7 +5,8 @@ man_MANS += \
     ovn/utilities/ovn-ctl.8
 
 bin_SCRIPTS += \
-    ovn/utilities/ovn-integrate
+    ovn/utilities/ovn-integrate \
+    ovn/utilities/ovn-docker-overlay-driver
 
 EXTRA_DIST += \
     ovn/utilities/ovn-ctl \
diff --git a/ovn/utilities/ovn-docker-overlay-driver b/ovn/utilities/ovn-docker-overlay-driver
new file mode 100755
index 0000000..5076083
--- /dev/null
+++ b/ovn/utilities/ovn-docker-overlay-driver
@@ -0,0 +1,538 @@
+#! /usr/bin/python
+# Copyright (C) 2015 Nicira, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at:
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import atexit
+import json
+import netaddr
+import os
+import shlex
+import subprocess
+import sys
+import uuid
+from neutronclient.v2_0 import client
+from flask import Flask, jsonify
+from flask import request, abort
+
+app = Flask(__name__)
+
+AUTH_STRATEGY = ""
+ENDPOINT_URL = ""
+OVN_BRIDGE = "br-int"
+PLUGIN_DIR = "/usr/share/docker/plugins"
+PLUGIN_FILE = "/usr/share/docker/plugins/openvswitch.spec"
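+
+# Docker discovers external network drivers by scanning PLUGIN_DIR for .spec
+# files; the contents of the spec file (written by init() below) tell Docker
+# the address on which this driver listens.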
+
+
+def call_popen(cmd):
+    child = subprocess.Popen(cmd, stdout=subprocess.PIPE)
+    output = child.communicate()
+    if child.returncode:
+        raise RuntimeError("Fatal error executing %s" % (cmd))
+    if len(output) == 0 or output[0] is None:
+        output = ""
+    else:
+        output = output[0].strip()
+    return output
+
+
+def call_prog(prog, args_list):
+    cmd = [prog, "-vconsole:off"] + args_list
+    return call_popen(cmd)
+
+
+def ovs_vsctl(args):
+    return call_prog("ovs-vsctl", shlex.split(args))
+
+
+def sanity_check():
+    br_list = ovs_vsctl("list-br").split()
+    if OVN_BRIDGE not in br_list:
+        raise RuntimeError("OVN bridge is not seen")
+
+    global AUTH_STRATEGY, ENDPOINT_URL
+
+    AUTH_STRATEGY = "noauth"
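+    # 'ovn-integrate set-ipam' records the IPAM server's address in the local
+    # Open vSwitch database; read it back here to build the Neutron endpoint
+    # URL.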
+    endpoint_ip = ovs_vsctl("get Open_vSwitch . "
+                            "external_ids:ipam").strip('"')
+    if not endpoint_ip:
+        raise RuntimeError("ipam server's ip address not set")
+    ENDPOINT_URL = "http://%s:9696/" % (endpoint_ip)
+    os.environ['OS_URL'] = ENDPOINT_URL
+    os.environ['OS_AUTH_STRATEGY'] = "noauth"
+
+
+def cleanup():
+    if os.path.isfile(PLUGIN_FILE):
+        os.remove(PLUGIN_FILE)
+
+
+def init():
+    br_list = ovs_vsctl("list-br").split()
+    if OVN_BRIDGE not in br_list:
+        sys.exit("Bridge %s does not exists" % (OVN_BRIDGE))
+
+    encap_type = ovs_vsctl("--if-exists get Open_vSwitch . "
+                           "external_ids:ovn-encap-type").strip('"')
+    if not encap_type:
+        ovs_vsctl("set open_vswitch . external_ids:ovn-bridge=%s "
+                  "external_ids:ovn-encap-type=geneve" % OVN_BRIDGE)
+
+    if not os.path.isdir(PLUGIN_DIR):
+        sys.exit("No docker plugin directory configured")
+
+    try:
+        fo = open(PLUGIN_FILE, "w")
+        fo.write("tcp://0.0.0.0:5000")
+        fo.close()
+    except Exception as e:
+        sys.exit("Failed to write to spec file (%s)" % (str(e)))
+
+    atexit.register(cleanup)
+
+
+# curl -i -H 'Content-Type: application/json' -X POST
+# http://localhost:5000/Plugin.Activate
+@app.route('/Plugin.Activate', methods=['POST'])
+def plugin_activate():
+    return jsonify({"Implements": ["NetworkDriver"]})
+
+
+def neutron_login():
+    try:
+        sanity_check()
+        neutron = client.Client(endpoint_url=ENDPOINT_URL,
+                                auth_strategy=AUTH_STRATEGY)
+    except Exception as e:
+        raise RuntimeError("Failed to login into Neutron(%s)" % str(e))
+    return neutron
+
+
+def get_networkuuid_by_name(neutron, name):
+    param = {'fields': 'id', 'name': name}
+    ret = neutron.list_networks(**param)
+    if len(ret['networks']) > 1:
+        raise RuntimeError("More than one network for the given name")
+    elif len(ret['networks']) == 0:
+        network = None
+    else:
+        network = ret['networks'][0]['id']
+    return network
+
+
+# curl -i -H 'Content-Type: application/json' -X POST -d
+# '{"NetworkID":"dummy-network","Options":{"subnet":"192.168.1.0/24"}}'
+# http://localhost:5000/NetworkDriver.CreateNetwork
+@app.route('/NetworkDriver.CreateNetwork', methods=['POST'])
+def create_network():
+    if not request.data:
+        abort(400)
+
+    data = json.loads(request.data)
+
+    # NetworkID will have docker generated network uuid and it
+    # becomes 'name' in a neutron network record.
+    network = data.get("NetworkID", "")
+    if not network:
+        abort(400)
+
+    # Docker currently does not let you specify additional arguments, but
+    # plans to in the future.  Until then, every network is 192.168.0.0/16.
+    subnet = "192.168.0.0/16"
+    if not subnet:
+        abort(400)
+
+    try:
+        neutron = neutron_login()
+    except Exception as e:
+        error = "%s" % (str(e))
+        return jsonify({'Err': error})
+
+    # XXX: Currently, a create-network request from a user on one host ends
+    # up being a create-network request on every host.  This is a huge
+    # performance penalty, as we now need to check for the existence
+    # of such a network on every call.  So in a 1000-node system, one
+    # create-network request from a user results in 1000 requests to OVN's
+    # IPAM.
+    try:
+        if get_networkuuid_by_name(neutron, network):
+            return jsonify({})
+    except Exception as e:
+        error = "%s" % (str(e))
+        return jsonify({'Err': error})
+
+    try:
+        body = {'network': {'name': network,
+                            'tenant_id': "admin",
+                            'admin_state_up': True}}
+        ret = neutron.create_network(body)
+        network_id = ret['network']['id']
+    except Exception as e:
+        error = "Failed in neutron api call (%s)" % str(e)
+        return jsonify({'Err': error})
+
+    try:
+        netaddr.IPNetwork(subnet)
+    except Exception as e:
+        neutron.delete_network(network_id)
+        error = "Invalid subnet specified."
+        return jsonify({'Err': error})
+
+    try:
+        body = {'subnet': {'network_id': network_id,
+                           'tenant_id': "admin",
+                           'ip_version': 4,
+                           'cidr': subnet,
+                           'name': network}}
+        ret = neutron.create_subnet(body)
+    except Exception as e:
+        neutron.delete_network(network_id)
+        error = "Failed in neutron api call (%s)" % str(e)
+        return jsonify({'Err': error})
+
+    return jsonify({})
+
+
+# curl -i -H 'Content-Type: application/json' -X POST -d
+# {"NetworkID":"dummy-network"}
+# http://localhost:5000/NetworkDriver.DeleteNetwork
+@app.route('/NetworkDriver.DeleteNetwork', methods=['POST'])
+def delete_network():
+    if not request.data:
+        abort(400)
+
+    data = json.loads(request.data)
+
+    nid = data.get("NetworkID", "")
+    if not nid:
+        abort(400)
+
+    try:
+        neutron = neutron_login()
+    except Exception as e:
+        error = "%s" % (str(e))
+        return jsonify({'Err': error})
+
+    network = get_networkuuid_by_name(neutron, nid)
+    if not network:
+        return jsonify({})
+
+    try:
+        neutron.delete_network(network)
+    except Exception as e:
+        error = "Failed in neutron api call (%s)" % str(e)
+        return jsonify({'Err': error})
+
+    return jsonify({})
+
+
+def get_endpointuuid_by_name(neutron, name):
+    param = {'fields': 'id', 'name': name}
+    ret = neutron.list_ports(**param)
+    if len(ret['ports']) > 1:
+        raise RuntimeError("More than one endpoint for the given name")
+    elif len(ret['ports']) == 0:
+        endpoint = None
+    else:
+        endpoint = ret['ports'][0]['id']
+    return endpoint
+
+
+# curl -i -H 'Content-Type: application/json' -X POST -d
+# '{"NetworkID":"dummy-network","EndpointID":"dummy-endpoint","Interfaces":[],"Options":{}}'
+# http://localhost:5000/NetworkDriver.CreateEndpoint
+@app.route('/NetworkDriver.CreateEndpoint', methods=['POST'])
+def create_endpoint():
+    if not request.data:
+        abort(400)
+
+    data = json.loads(request.data)
+
+    nid = data.get("NetworkID", "")
+    if not nid:
+        abort(400)
+
+    eid = data.get("EndpointID", "")
+    if not eid:
+        abort(400)
+
+    interfaces = data.get("Interfaces", "")
+    if interfaces:
+        # If 'Interfaces' already has a record, the endpoint has
+        # already been created.
+        return jsonify({})
+
+    try:
+        neutron = neutron_login()
+    except Exception as e:
+        error = "%s" % (str(e))
+        return jsonify({'Err': error})
+
+    network = get_networkuuid_by_name(neutron, nid)
+    if not network:
+        error = "Failed to get neutron network record for (%s)" % (nid)
+        return jsonify({'Err': error})
+
+    try:
+        ret = neutron.show_network(network)
+        subnet = ret['network']['subnets'][0]
+        if not subnet:
+            raise RuntimeError("No subnet defined for the network.")
+    except Exception as e:
+        error = "Could not obtain network information.\n(%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    try:
+        ret = neutron.show_subnet(subnet)
+        gateway_ip = ret['subnet']['gateway_ip']
+        cidr = ret['subnet']['cidr']
+        netmask = cidr.rsplit('/', 1)[1]
+        if not netmask:
+            raise RuntimeError("No cidr netmask found for subnet")
+    except Exception as e:
+        error = "Could not obtain subnet information (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    try:
+        body = {'port': {'network_id': network,
+                         'tenant_id': "admin",
+                         'name': eid,
+                         'admin_state_up': True}}
+
+        ret = neutron.create_port(body)
+        mac_address = ret['port']['mac_address']
+        ip_address = "%s/%s" \
+                     % (ret['port']['fixed_ips'][0]['ip_address'], netmask)
+
+    except Exception as e:
+        error = "Failed in neutron port creation call (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    return jsonify({"Interfaces": [{
+                                    "ID": 0,
+                                    "Address": ip_address,
+                                    "AddressIPv6": None,
+                                    "MacAddress": mac_address
+                                    }]})
+
+
+# curl -i -H 'Content-Type: application/json' -X POST -d
+# '{"NetworkID":"dummy-network","EndpointID":"dummy-endpoint"}'
+# http://localhost:5000/NetworkDriver.EndpointOperInfo
+@app.route('/NetworkDriver.EndpointOperInfo', methods=['POST'])
+def show_endpoint():
+    if not request.data:
+        abort(400)
+
+    data = json.loads(request.data)
+
+    nid = data.get("NetworkID", "")
+    if not nid:
+        abort(400)
+
+    eid = data.get("EndpointID", "")
+    if not eid:
+        abort(400)
+
+    try:
+        neutron = neutron_login()
+    except Exception as e:
+        error = "%s" % (str(e))
+        return jsonify({'Err': error})
+
+    endpoint = get_endpointuuid_by_name(neutron, eid)
+    if not endpoint:
+        error = "Failed to get endpoint by name"
+        return jsonify({'Err': error})
+
+    try:
+        ret = neutron.show_port(endpoint)
+        mac_address = ret['port']['mac_address']
+        ip_address = ret['port']['fixed_ips'][0]['ip_address']
+    except Exception as e:
+        error = "Failed to get endpoint information (%s)" % (str(e))
+        return jsonify({'Err': error})
+
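+    # Linux limits network device names to 15 characters (IFNAMSIZ - 1), so
+    # the host-side veth name is the first 15 characters of the endpoint ID.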
+    veth_outside = eid[0:15]
+    return jsonify({"Value": {"ip_address": ip_address,
+                              "mac_address": mac_address,
+                              "veth_outside": veth_outside
+                              }})
+
+
+# curl -i -H 'Content-Type: application/json' -X POST -d
+# '{"NetworkID":"dummy-network","EndpointID":"dummy-endpoint"}'
+# http://localhost:5000/NetworkDriver.DeleteEndpoint
+@app.route('/NetworkDriver.DeleteEndpoint', methods=['POST'])
+def delete_endpoint():
+    if not request.data:
+        abort(400)
+
+    data = json.loads(request.data)
+
+    nid = data.get("NetworkID", "")
+    if not nid:
+        abort(400)
+
+    eid = data.get("EndpointID", "")
+    if not eid:
+        abort(400)
+
+    try:
+        neutron = neutron_login()
+    except Exception as e:
+        error = "%s" % (str(e))
+        return jsonify({'Err': error})
+
+    endpoint = get_endpointuuid_by_name(neutron, eid)
+    if not endpoint:
+        return jsonify({})
+
+    try:
+        neutron.delete_port(endpoint)
+    except Exception as e:
+        error = "Failed to delete endpoint. (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    return jsonify({})
+
+
+# curl -i -H 'Content-Type: application/json' -X POST -d
+# '{"NetworkID":"dummy-network", "SandboxKey":"sandbox-key", \
+#   "Options":{"foo":"fooValue"}, "EndpointID":"dummy-endpoint"}'
+# http://localhost:5000/NetworkDriver.Join
+@app.route('/NetworkDriver.Join', methods=['POST'])
+def network_join():
+    if not request.data:
+        abort(400)
+
+    data = json.loads(request.data)
+
+    nid = data.get("NetworkID", "")
+    if not nid:
+        abort(400)
+
+    eid = data.get("EndpointID", "")
+    if not eid:
+        abort(400)
+
+    sboxkey = data.get("SandboxKey", "")
+    if not sboxkey:
+        abort(400)
+
+    # sboxkey is of the form: /var/run/docker/netns/CONTAINER_ID
+    vm_id = sboxkey.rsplit('/')[-1]
+
+    try:
+        neutron = neutron_login()
+    except Exception as e:
+        error = "%s" % (str(e))
+        return jsonify({'Err': error})
+
+    endpoint = get_endpointuuid_by_name(neutron, eid)
+    if not endpoint:
+        error = "Failed to get endpoint by name"
+        return jsonify({'Err': error})
+
+    try:
+        ret = neutron.show_port(endpoint)
+        mac_address = ret['port']['mac_address']
+    except Exception as e:
+        error = "Failed to get endpoint information (%s)" % (str(e))
+        return jsonify({'Err': error})
+
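+    # Derive the veth pair names from the endpoint ID.  Linux limits device
+    # names to 15 characters (IFNAMSIZ - 1); the container-side name uses 13
+    # characters plus a "_c" suffix so both ends stay within that limit.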
+    veth_outside = eid[0:15]
+    veth_inside = eid[0:13] + "_c"
+    command = "ip link add %s type veth peer name %s" \
+              % (veth_inside, veth_outside)
+    try:
+        call_popen(shlex.split(command))
+    except Exception as e:
+        error = "Failed to create veth pair (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    command = "ip link set dev %s address %s" \
+              % (veth_inside, mac_address)
+
+    try:
+        call_popen(shlex.split(command))
+    except Exception as e:
+        error = "Failed to set veth mac address (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    command = "ip link set %s up" % (veth_outside)
+
+    try:
+        call_popen(shlex.split(command))
+    except Exception as e:
+        error = "Failed to up the veth interface (%s)" % (str(e))
+        return jsonify({'Err': error})
+
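+    # Attach the host side of the veth pair to the integration bridge.
+    # ovn-controller binds this OVS port to its logical port by matching
+    # external_ids:iface-id against the logical port name.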
+    try:
+        ovs_vsctl("add-port %s %s" % (OVN_BRIDGE, veth_outside))
+        ovs_vsctl("set interface %s external_ids:attached-mac=%s "
+                  "external_ids:iface-id=%s "
+                  "external_ids:vm-id=%s "
+                  "external_ids:iface-status=%s "
+                  % (veth_outside, mac_address,
+                     endpoint, vm_id, "active"))
+    except Exception as e:
+        error = "Failed to create a port (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    return jsonify({"InterfaceNames": [{
+                                        "SrcName": veth_inside,
+                                        "DstPrefix": "eth"
+                                       }],
+                    "Gateway": "",
+                    "GatewayIPv6": "",
+                    "HostsPath": "",
+                    "ResolvConfPath": ""})
+
+
+# curl -i -H 'Content-Type: application/json' -X POST -d
+# '{"NetworkID":"dummy-network","EndpointID":"dummy-endpoint"}'
+# http://localhost:5000/NetworkDriver.Leave
+@app.route('/NetworkDriver.Leave', methods=['POST'])
+def network_leave():
+    if not request.data:
+        abort(400)
+
+    data = json.loads(request.data)
+
+    nid = data.get("NetworkID", "")
+    if not nid:
+        abort(400)
+
+    eid = data.get("EndpointID", "")
+    if not eid:
+        abort(400)
+
+    veth_outside = eid[0:15]
+    command = "ip link delete %s" % (veth_outside)
+    try:
+        call_popen(shlex.split(command))
+    except Exception as e:
+        error = "Failed to delete veth pair (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    try:
+        ovs_vsctl("--if-exists del-port %s" % (veth_outside))
+    except Exception as e:
+        error = "Failed to delete port (%s)" % (str(e))
+        return jsonify({'Err': error})
+
+    return jsonify({})
+
+if __name__ == '__main__':
+    init()
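+    # Flask's default port is 5000, which matches the address advertised in
+    # the plugin spec file written by init().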
+    app.run(host='0.0.0.0')
diff --git a/rhel/openvswitch-fedora.spec.in b/rhel/openvswitch-fedora.spec.in
index dfdcdca..2cfe6ce 100644
--- a/rhel/openvswitch-fedora.spec.in
+++ b/rhel/openvswitch-fedora.spec.in
@@ -318,6 +318,7 @@ rm -rf $RPM_BUILD_ROOT
 
 %files ovn
 %{_bindir}/ovn-controller
+%{_bindir}/ovn-docker-overlay-driver
 %{_bindir}/ovn-integrate
 %{_bindir}/ovn-nbctl
 %{_bindir}/ovn-northd
-- 
1.9.1