<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    On 21/11/2017 at 23:53, Guru Shetty wrote:<br>
    <blockquote type="cite"
cite="mid:CAM_3v9KL=BT=WuUN1=1d+EGLF55At+m9rSz=w8_Wsshs6xuYFg@mail.gmail.com">
      <div dir="ltr"><br>
        <div class="gmail_extra">
          <div class="gmail_quote">
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
              0.8ex;border-left:1px solid
              rgb(204,204,204);padding-left:1ex">
              <div bgcolor="#FFFFFF"><br>
                <p>ovn-k8s-watcher is able to look for a token in the
                  external_ids. <br>
                </p>
                <p>In get_api_params:</p>
                <p>    k8s_api_token = ovs_vsctl("--if-exists", "get",
                  "Open_vSwitch", ".",
                  "external_ids:k8s-api-token").strip('"')<br>
                  And then in the stream_api function:</p>
                <p>    if api_token:<br>
                          headers['Authorization'] = 'Bearer %s' %
                  api_token<br>
                  <br>
                  So, it should just be missing a few configuration
                  parameters (a Role, a ServiceAccount, and a RoleBinding).</p>
                <p>I'll figure out something from flannel-rbac.yaml. It
                  shouldn't be too different.</p>
              </div>
            </blockquote>
          </div>
        </div>
      </div>
    </blockquote>
    I found one RBAC in <a moz-do-not-send="true"
      href="https://github.com/openvswitch/ovn-kubernetes/issues/161">Issue
      161</a>. It works OK.<br>
    What I did is:<br>
    <pre>kubectl apply -f ovn-rbac.yaml
TOKEN=$(kubectl get secret/ovn-controller -o yaml | grep token | cut -f2 -d : | base64 -d)
ovs-vsctl set Open_vSwitch . external_ids:k8s-api-token=${TOKEN}</pre>
    Then ovn-k8s-watcher was able to get all its resources.<br>
    A token is generated as soon as one creates a ServiceAccount (sa).<br>
    The sa is then linked to a ClusterRole with a ClusterRoleBinding.<br>
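    For reference, here is a minimal sketch of what such a manifest could
    look like. The names, namespace, and rule lists below are my
    assumptions for illustration, not necessarily what the file from
    Issue 161 actually contains:<br>
    <pre>```yaml
# Hypothetical ovn-rbac.yaml sketch (assumed names and rules).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ovn-controller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ovn-controller
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "endpoints", "namespaces", "nodes"]
    verbs: ["get", "list", "watch", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ovn-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ovn-controller
subjects:
  - kind: ServiceAccount
    name: ovn-controller
    namespace: kube-system
```</pre>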
    <br>
    <br>
    <blockquote type="cite"
cite="mid:CAM_3v9KL=BT=WuUN1=1d+EGLF55At+m9rSz=w8_Wsshs6xuYFg@mail.gmail.com">
      <div dir="ltr">
        <div class="gmail_extra">
          <div class="gmail_quote">
            <div><br>
            </div>
            <div>I got a bit of time to try kubeadm. One thing was that
              the port the API server was listening on was 6443. Since it
              was not using an API token, I had to get certificates from
              kubeconfig. A patch like this would work (after a 'pip
              install kubernetes'), but the same change is needed in
              multiple places.</div>
            <div><br>
            </div>
            <div>
              <div>diff --git a/ovn_k8s/common/kubernetes.py
                b/ovn_k8s/common/kubernetes.py</div>
              <div>index a837111..26f7bdd 100644</div>
              <div>--- a/ovn_k8s/common/kubernetes.py</div>
              <div>+++ b/ovn_k8s/common/kubernetes.py</div>
              <div>@@ -12,6 +12,7 @@</div>
              <div> # See the License for the specific language
                governing permissions and</div>
              <div> # limitations under the License.</div>
              <div><br>
              </div>
              <div>+from __future__ import absolute_import</div>
              <div> import json</div>
              <div> import requests</div>
              <div><br>
              </div>
              <div>@@ -23,6 +24,9 @@ from ovn_k8s.common import
                exceptions</div>
              <div> from ovn_k8s.common.util import ovs_vsctl</div>
              <div> from ovn_k8s.common import variables</div>
              <div><br>
              </div>
              <div>+import kubernetes</div>
              <div>+import kubernetes.config</div>
              <div>+</div>
              <div> CA_CERTIFICATE =
                config.get_option('k8s_ca_certificate')</div>
              <div> vlog = ovs.vlog.Vlog("kubernetes")</div>
              <div><br>
              </div>
              <div>@@ -161,12 +165,19 @@ def set_pod_annotation(server,
                namespace, pod, key, value):</div>
              <div><br>
              </div>
              <div><br>
              </div>
              <div> def _get_objects(url, namespace, resource_type,
                resource_id):</div>
              <div>+    kubernetes.config.load_kube_config()</div>
              <div>+    apiclient =
                kubernetes.config.new_client_from_config()</div>
              <div>+</div>
              <div>     ca_certificate, api_token = _get_api_params()</div>
              <div><br>
              </div>
              <div>     headers = {}</div>
              <div>     if api_token:</div>
              <div>         headers['Authorization'] = 'Bearer %s' %
                api_token</div>
              <div>-    if ca_certificate:</div>
              <div>+</div>
              <div>+    if apiclient.configuration.cert_file:</div>
              <div>+       response = requests.get(url, headers=headers,
                verify=apiclient.configuration.ssl_ca_cert,</div>
              <div>+                             
                 cert=(apiclient.configuration.cert_file,
                apiclient.configuration.key_file))</div>
              <div>+    elif ca_certificate:</div>
              <div>         response = requests.get(url,
                headers=headers, verify=ca_certificate)</div>
              <div>     else:</div>
              <div>         response = requests.get(url,
                headers=headers)</div>
            </div>
            <div><br>
            </div>
            <div><br>
            </div>
            <div><br>
            </div>
            <div>The client that I used to test was:</div>
            <div><br>
            </div>
            <div>
              <div>import ovn_k8s.common.kubernetes</div>
              <div><br>
              </div>
              <div><br>
              </div>
              <div>pods = ovn_k8s.common.kubernetes.get_all_pods("<a
                  href="https://10.33.75.67:6443" moz-do-not-send="true">https://10.33.75.67:6443</a>")</div>
              <div><br>
              </div>
              <div>print pods</div>
            </div>
            <div><br>
            </div>
            <div><br>
            </div>
            <div>I need to think about what is a nice way to do this
              though...</div>
          </div>
        </div>
      </div>
    </blockquote>
    I don't think this is mandatory, but it's nice to have.<br>
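    The cert-vs-token selection in that patch can be sketched as a small
    pure helper. This is only an illustration of the logic; the function
    name and signature are mine, not part of ovn-k8s:<br>
    <pre>```python
# Sketch of the request-parameter selection from the patch quoted above.
# build_request_kwargs is a hypothetical helper, not an ovn-k8s function.

def build_request_kwargs(api_token=None, ca_certificate=None,
                         cert_file=None, key_file=None, ssl_ca_cert=None):
    """Prefer client certificates (kubeadm/kubeconfig case), fall back to
    a CA certificate plus bearer token, then to neither."""
    headers = {}
    if api_token:
        headers['Authorization'] = 'Bearer %s' % api_token
    kwargs = {'headers': headers}
    if cert_file:
        # kubeadm case: authenticate with the client cert/key pair and
        # verify the server against the kubeconfig CA.
        kwargs['verify'] = ssl_ca_cert
        kwargs['cert'] = (cert_file, key_file)
    elif ca_certificate:
        # Token case: only verify the server certificate.
        kwargs['verify'] = ca_certificate
    return kwargs
```</pre>
    The actual call then becomes a single requests.get(url, **kwargs),
    whichever path was taken.<br>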
    <br>
    I went on trying to set up my cluster. Here are two more problems I
    found:<br>
    1- ovn-k8s-overlay:<br>
      master-init should call _linux_init, since there are pods running
    on the node. Master pods are normal pods run in the kube-system
    namespace on a tainted node, and they should be configured through CNI.<br>
    <br>
    <pre>--- bin/ovn-k8s-overlay    2017-11-21 00:04:45.715019656 +0100
+++ /usr/bin/ovn-k8s-overlay    2017-11-22 22:17:11.982682503 +0100
@@ -1,4 +1,4 @@
-#! /usr/bin/python
+#!/usr/bin/python
 # Copyright (C) 2016 Nicira, Inc.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
@@ -467,6 +467,9 @@
     create_management_port(node_name, args.master_switch_subnet,
                            args.cluster_ip_subnet)
 
+    if sys.platform != 'win32':
+      args.minion_switch_subnet = args.master_switch_subnet
+      _linux_init(args)
 
 def minion_init(args):
     fetch_ovn_nb(args)</pre>
    ----------------------------<br>
    2- After running master-init, kubelet started to report errors:<br>
    ------------------------------<br>
    <pre>Nov 22 23:15:23 km1 journal: ovs|  31 | ovn-k8s-cni-overlay | ERR | {"cniVersion": "0.1.0", "code": 100, "message": "failed in pod annotation key extract"}
Nov 22 23:15:23 km1 kubelet: 2017-11-22T22:15:23Z |  31 | ovn-k8s-cni-overlay | ERR | {"cniVersion": "0.1.0", "code": 100, "message": "failed in pod annotation key extract"}
Nov 22 23:15:23 km1 kubelet: E1122 23:15:23.626941    2641 cni.go:301] Error adding network:
Nov 22 23:15:23 km1 kubelet: E1122 23:15:23.626970    2641 cni.go:250] Error while adding to cni network:</pre>
    ------------------------------<br>
    There seems to be a further problem with reading the annotations. The
    annotations are present on the pods, but for some reason the helper
    is unable to read them.<br>
    Here's an example:<br>
    ------------------------------<br>
    <pre>kubectl get pod/etcd-km1 -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/config.hash: d76e26fba3bf2bfd215eb29011d55250
    kubernetes.io/config.mirror: d76e26fba3bf2bfd215eb29011d55250
    kubernetes.io/config.seen: 2017-11-22T22:20:24.276150844+01:00
    kubernetes.io/config.source: file
<font color="#cc0000">    ovn: '{"gateway_ip": "10.10.0.1", "ip_address": "10.10.0.5/24", "mac_address":
      "0a:00:00:00:00:03"}'
</font>    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: 2017-11-22T21:21:42Z
[snip]</pre>
    ----------------------------<br>
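    As a side note, the ovn annotation above is plain JSON and parses
    fine on its own, which suggests the failure is in fetching the
    annotation rather than in decoding it. A quick check using the exact
    value from the pod above:<br>
    <pre>```python
import json

# The 'ovn' annotation value exactly as shown on pod etcd-km1 above.
annotation = ('{"gateway_ip": "10.10.0.1", "ip_address": "10.10.0.5/24", '
              '"mac_address": "0a:00:00:00:00:03"}')

ovn = json.loads(annotation)
print(ovn['ip_address'])  # 10.10.0.5/24
```</pre>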
    Here is the content of the north configuration :<br>
    <pre>ovn-nbctl show
switch e832fd69-0e71-49f7-930b-4d005ae3a853 (join)
    port jtor-GR_km1
        type: router
        addresses: ["00:00:00:B4:C3:00"]
        router-port: rtoj-GR_km1
    port jtor-km1
        type: router
        addresses: ["00:00:00:45:2B:BE"]
        router-port: rtoj-km1
switch 67de0349-cd5e-46a6-b952-56c198c07cef (km1)
    port stor-km1
        type: router
        addresses: ["00:00:00:FC:B8:C2"]
        router-port: rtos-km1
    port kube-system_kube-proxy-c9nfg
        addresses: ["dynamic"]
    port kube-system_kube-controller-manager-km1
        addresses: ["dynamic"]
    port kube-system_etcd-km1
        addresses: ["dynamic"]
    port kube-system_kube-apiserver-km1
        addresses: ["dynamic"]
    port kube-system_kube-dns-545bc4bfd4-zpjj6
        addresses: ["dynamic"]
    port k8s-km1
        addresses: ["22:d5:cc:fa:14:b1 10.10.0.2"]
    port kube-system_kube-scheduler-km1
        addresses: ["dynamic"]
switch 6ade5db3-a6dd-45c1-b7ce-5a0e9d608471 (ext_km1)
    port etor-GR_km1
        type: router
        addresses: ["00:0c:29:1f:93:48"]
        router-port: rtoe-GR_km1
    port br-ens34_km1
        addresses: ["unknown"]
router d7d20e30-6505-4848-8361-d80253520a43 (km1)
    port rtoj-km1
        mac: "00:00:00:45:2B:BE"
        networks: ["100.64.1.1/24"]
    port rtos-km1
        mac: "00:00:00:FC:B8:C2"
        networks: ["10.10.0.1/24"]
router aa6e86cf-2fa2-4cad-a301-97b35bed7df9 (GR_km1)
    port rtoj-GR_km1
        mac: "00:00:00:B4:C3:00"
        networks: ["100.64.1.2/24"]
    port rtoe-GR_km1
        mac: "00:0c:29:1f:93:48"
        networks: ["172.16.229.128/24"]
    nat d3767114-dc49-48d0-b462-8c41ba7c5243
        external ip: "172.16.229.128"
        logical ip: "10.10.0.0/16"
        type: "snat"

</pre>
    Port kube-system_etcd-km1 doesn't seem to have an IP, and neither
    does kube-system_kube-dns.<br>
    I don't really know why.<br>
    <br>
    Hope this helps moving forward.<br>
    <br>
    S. Bernard<br>
  </body>
</html>