[ovs-discuss] Re: kubernetes - kubeadm problem with watcher

Sébastien Bernard sbernard at nerim.net
Thu Nov 23 21:22:00 UTC 2017


On 21/11/2017 at 23:53, Guru Shetty wrote:
>
>
>     ovn-k8s-watcher is able to look for a token in the external_ids.
>
>     In get_api_params:
>
>         k8s_api_token = ovs_vsctl("--if-exists", "get", "Open_vSwitch", ".",
>                                   "external_ids:k8s-api-token").strip('"')
>     And then in the stream_api function:
>
>         if api_token:
>             headers['Authorization'] = 'Bearer %s' % api_token
>
>     So, it should just be missing a few configuration parameters (a Role,
>     a ServiceAccount, and a RoleBinding).
>
>     I'll figure out something from flannel-rbac.yaml. It shouldn't be
>     too different.
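
As a side note, what get_api_params() and stream_api() do with the token
can be reproduced by hand, which helps telling a missing-token problem
from an RBAC one. A sketch (the API server URL is a placeholder; on
kubeadm the cluster CA usually sits at /etc/kubernetes/pki/ca.crt):

     # read the token back from exactly where the watcher looks for it
     TOKEN=$(ovs-vsctl --if-exists get Open_vSwitch . \
         external_ids:k8s-api-token | tr -d '"')
     # same Bearer header stream_api builds; a 401/403 here means the
     # token or its RBAC is the problem, not the watcher
     curl -sS --cacert /etc/kubernetes/pki/ca.crt \
         -H "Authorization: Bearer ${TOKEN}" \
         https://10.33.75.67:6443/api/v1/pods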
>
I found an RBAC configuration in Issue 161
<https://github.com/openvswitch/ovn-kubernetes/issues/161>. It works OK.
What I did was:
     kubectl apply -f ovn-rbac.yaml
     TOKEN=$(kubectl get secret/ovn-controller -o yaml | grep token | cut -f2 -d : | base64 -d)
     ovs-vsctl set Open_vSwitch . external_ids:k8s-api-token=${TOKEN}
Then ovn-k8s-watcher was able to get all its resources.
A token is generated as soon as one creates a serviceaccount (sa).
The sa is then linked to a ClusterRole with a ClusterRoleBinding.
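
For anyone who doesn't want to fish the yaml out of the issue, the
imperative equivalent looks roughly like this (the account and role
names, and the exact verb/resource list, are my guesses, not the literal
content of ovn-rbac.yaml):

     # service account whose token the watcher will use
     kubectl create serviceaccount ovn-controller -n kube-system
     # read access to what the watcher consumes, plus patch for the
     # annotations it writes back onto pods
     kubectl create clusterrole ovn-reader \
         --verb=get,list,watch,patch \
         --resource=pods,namespaces,endpoints,services,networkpolicies
     kubectl create clusterrolebinding ovn-controller \
         --clusterrole=ovn-reader \
         --serviceaccount=kube-system:ovn-controller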


>
> I got a bit of time to try kubeadm. One thing was that the port the API
> server was listening on was 6443. Since it was not using an API token, I
> had to get certificates from the kubeconfig. A patch like this would work
> (after a 'pip install kubernetes'). But the same change is needed in
> multiple places.
>
> diff --git a/ovn_k8s/common/kubernetes.py b/ovn_k8s/common/kubernetes.py
> index a837111..26f7bdd 100644
> --- a/ovn_k8s/common/kubernetes.py
> +++ b/ovn_k8s/common/kubernetes.py
> @@ -12,6 +12,7 @@
>  # See the License for the specific language governing permissions and
>  # limitations under the License.
>
> +from __future__ import absolute_import
>  import json
>  import requests
>
> @@ -23,6 +24,9 @@ from ovn_k8s.common import exceptions
>  from ovn_k8s.common.util import ovs_vsctl
>  from ovn_k8s.common import variables
>
> +import kubernetes
> +import kubernetes.config
> +
>  CA_CERTIFICATE = config.get_option('k8s_ca_certificate')
>  vlog = ovs.vlog.Vlog("kubernetes")
>
> @@ -161,12 +165,19 @@ def set_pod_annotation(server, namespace, pod, key, value):
>
>
>  def _get_objects(url, namespace, resource_type, resource_id):
> +    kubernetes.config.load_kube_config()
> +    apiclient = kubernetes.config.new_client_from_config()
> +
>      ca_certificate, api_token = _get_api_params()
>
>      headers = {}
>      if api_token:
>          headers['Authorization'] = 'Bearer %s' % api_token
> -    if ca_certificate:
> +
> +    if apiclient.configuration.cert_file:
> +        response = requests.get(url, headers=headers,
> +                                verify=apiclient.configuration.ssl_ca_cert,
> +                                cert=(apiclient.configuration.cert_file,
> +                                      apiclient.configuration.key_file))
> +    elif ca_certificate:
>          response = requests.get(url, headers=headers, verify=ca_certificate)
>      else:
>          response = requests.get(url, headers=headers)
>
>
>
> The client that I used to test was:
>
> import ovn_k8s.common.kubernetes
>
>
> pods = ovn_k8s.common.kubernetes.get_all_pods("https://10.33.75.67:6443")
>
> print pods
>
>
> I need to think about what is a nice way to do this though...
I don't think this is mandatory but nice to have.
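
In the meantime, the same thing can be tested without patching and
without the python kubernetes client, by pulling the client cert out of
the kubeconfig and handing it to curl. A sketch (paths are the kubeadm
defaults; the jsonpath indices assume an unmodified admin.conf):

     # extract the embedded, base64-encoded client cert and key
     kubectl --kubeconfig /etc/kubernetes/admin.conf config view --raw \
         -o jsonpath='{.users[0].user.client-certificate-data}' \
         | base64 -d > /tmp/client.crt
     kubectl --kubeconfig /etc/kubernetes/admin.conf config view --raw \
         -o jsonpath='{.users[0].user.client-key-data}' \
         | base64 -d > /tmp/client.key
     # the request _get_objects makes, authenticated with the client cert
     curl -sS --cacert /etc/kubernetes/pki/ca.crt \
         --cert /tmp/client.crt --key /tmp/client.key \
         https://10.33.75.67:6443/api/v1/pods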

I went on trying to set up my cluster. Here are two more problems I found:
1- ovn-k8s-overlay:
   master-init should call _linux_init, since there are pods running on
the node. Master pods are normal pods run in the kube-system namespace
on a tainted node, so they should be configured through CNI.

--- bin/ovn-k8s-overlay    2017-11-21 00:04:45.715019656 +0100
+++ /usr/bin/ovn-k8s-overlay    2017-11-22 22:17:11.982682503 +0100
@@ -1,4 +1,4 @@
-#! /usr/bin/python
+#!/usr/bin/python
  # Copyright (C) 2016 Nicira, Inc.
  #
  # Licensed under the Apache License, Version 2.0 (the "License");
@@ -467,6 +467,9 @@
      create_management_port(node_name, args.master_switch_subnet,
                             args.cluster_ip_subnet)

+    if sys.platform != 'win32':
+        args.minion_switch_subnet = args.master_switch_subnet
+        _linux_init(args)

  def minion_init(args):
      fetch_ovn_nb(args)
----------------------------
2- After running master-init, kubelet started to report errors:
------------------------------
Nov 22 23:15:23 km1 journal: ovs|  31 | ovn-k8s-cni-overlay | ERR | {"cniVersion": "0.1.0", "code": 100, "message": "failed in pod annotation key extract"}
Nov 22 23:15:23 km1 kubelet: 2017-11-22T22:15:23Z |  31 | ovn-k8s-cni-overlay | ERR | {"cniVersion": "0.1.0", "code": 100, "message": "failed in pod annotation key extract"}
Nov 22 23:15:23 km1 kubelet: E1122 23:15:23.626941    2641 cni.go:301] Error adding network:
Nov 22 23:15:23 km1 kubelet: E1122 23:15:23.626970    2641 cni.go:250] Error while adding to cni network:
------------------------------
There seems to be some further problem with reading annotations. The
annotations are present on the pods, but for some reason the helper
seems unable to read them.
Here's an example:
------------------------------
kubectl get pod/etcd-km1 -o yaml
apiVersion: v1
kind: Pod
metadata:
   annotations:
     kubernetes.io/config.hash: d76e26fba3bf2bfd215eb29011d55250
     kubernetes.io/config.mirror: d76e26fba3bf2bfd215eb29011d55250
     kubernetes.io/config.seen: 2017-11-22T22:20:24.276150844+01:00
     kubernetes.io/config.source: file
     ovn: '{"gateway_ip": "10.10.0.1", "ip_address": "10.10.0.5/24", 
"mac_address":
       "0a:00:00:00:00:03"}'
     scheduler.alpha.kubernetes.io/critical-pod: ""
   creationTimestamp: 2017-11-22T21:21:42Z
[snip]
----------------------------
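
To tell an access problem from a parsing one, the annotation can be
fetched the same way the helper fetches it, straight from the API
(server URL is a placeholder; TOKEN as stored in external_ids above):

     # what the CNI helper effectively asks for
     curl -sS --cacert /etc/kubernetes/pki/ca.crt \
         -H "Authorization: Bearer ${TOKEN}" \
         "https://10.33.75.67:6443/api/v1/namespaces/kube-system/pods/etcd-km1?pretty=true" \
         | grep '"ovn"'
     # the same annotation via kubectl
     kubectl get pod etcd-km1 -n kube-system \
         -o jsonpath='{.metadata.annotations.ovn}'

If the curl shows the ovn key but the helper still fails, it is
presumably a timing issue (the annotation not yet set when the CNI add
runs) rather than RBAC.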
Here is the content of the northbound database configuration:

ovn-nbctl show
switch e832fd69-0e71-49f7-930b-4d005ae3a853 (join)
     port jtor-GR_km1
         type: router
         addresses: ["00:00:00:B4:C3:00"]
         router-port: rtoj-GR_km1
     port jtor-km1
         type: router
         addresses: ["00:00:00:45:2B:BE"]
         router-port: rtoj-km1
switch 67de0349-cd5e-46a6-b952-56c198c07cef (km1)
     port stor-km1
         type: router
         addresses: ["00:00:00:FC:B8:C2"]
         router-port: rtos-km1
     port kube-system_kube-proxy-c9nfg
         addresses: ["dynamic"]
     port kube-system_kube-controller-manager-km1
         addresses: ["dynamic"]
     port kube-system_etcd-km1
         addresses: ["dynamic"]
     port kube-system_kube-apiserver-km1
         addresses: ["dynamic"]
     port kube-system_kube-dns-545bc4bfd4-zpjj6
         addresses: ["dynamic"]
     port k8s-km1
         addresses: ["22:d5:cc:fa:14:b1 10.10.0.2"]
     port kube-system_kube-scheduler-km1
         addresses: ["dynamic"]
switch 6ade5db3-a6dd-45c1-b7ce-5a0e9d608471 (ext_km1)
     port etor-GR_km1
         type: router
         addresses: ["00:0c:29:1f:93:48"]
         router-port: rtoe-GR_km1
     port br-ens34_km1
         addresses: ["unknown"]
router d7d20e30-6505-4848-8361-d80253520a43 (km1)
     port rtoj-km1
         mac: "00:00:00:45:2B:BE"
         networks: ["100.64.1.1/24"]
     port rtos-km1
         mac: "00:00:00:FC:B8:C2"
         networks: ["10.10.0.1/24"]
router aa6e86cf-2fa2-4cad-a301-97b35bed7df9 (GR_km1)
     port rtoj-GR_km1
         mac: "00:00:00:B4:C3:00"
         networks: ["100.64.1.2/24"]
     port rtoe-GR_km1
         mac: "00:0c:29:1f:93:48"
         networks: ["172.16.229.128/24"]
     nat d3767114-dc49-48d0-b462-8c41ba7c5243
         external ip: "172.16.229.128"
         logical ip: "10.10.0.0/16"
         type: "snat"

Port kube-system_etcd-km1 doesn't seem to have an IP, and neither does
kube-system_kube-dns.
I don't really know why.
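
One thing worth checking: ovn-nbctl show only prints the configured
addresses column, and for "dynamic" ports the actually assigned address
lives in the dynamic_addresses column, which stays empty until OVN's
IPAM hands one out; dynamic addressing also requires a subnet on the
switch. Both are easy to inspect (a sketch):

     # what OVN actually allocated for the port (empty if nothing yet)
     ovn-nbctl get Logical_Switch_Port kube-system_etcd-km1 dynamic_addresses
     # dynamic addressing needs a subnet set on the switch
     ovn-nbctl get Logical_Switch km1 other_config:subnet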

Hope this helps moving forward.

S. Bernard