
Installing Genie CNI into AWS EKS with Kubernetes 1.18 fails. #220

Open
darnone opened this issue Feb 6, 2021 · 6 comments


darnone commented Feb 6, 2021

I am trying to install Genie in an AWS EKS cluster with Kubernetes 1.18. The command:

kubectl apply -f https://raw.githubusercontent.com/Shogan/terraform-eks-with-weave/master/src/weave/genie-plugin.yaml

Fails with:
unable to recognize "https://raw.githubusercontent.com/Shogan/terraform-eks-with-weave/master/src/weave/genie-plugin.yaml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/Shogan/terraform-eks-with-weave/master/src/weave/genie-plugin.yaml": no matches for kind "DaemonSet" in version "extensions/v1beta1"

So I extracted the yaml and changed the apiVersion from extensions/v1beta1 to apps/v1, but it still fails with a different error:

error: error validating "genie-cni.yaml": error validating data: ValidationError(DaemonSet.spec): missing required field "selector" in io.k8s.api.apps.v1.DaemonSetSpec; if you choose to ignore these errors, turn validation off with --validate=false

These errors occur with 1.18, 1.17, and 1.16, but it works in 1.15. Am I the only one having this problem?
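For anyone following along, the manual apiVersion edit described above can be sketched in shell. This is only an illustration, not an official fix; the printf line stands in for downloading the manifest with curl from the raw URL above:

```shell
# Stand-in for the downloaded genie-plugin.yaml (in practice: curl -O <raw URL>)
printf 'apiVersion: extensions/v1beta1\nkind: DaemonSet\n' > genie-plugin.yaml
# Rewrite the deprecated apiVersion in place (GNU sed syntax)
sed -i 's|extensions/v1beta1|apps/v1|g' genie-plugin.yaml
grep apiVersion genie-plugin.yaml   # apiVersion: apps/v1
```

After editing, the file can be applied with kubectl apply -f genie-plugin.yaml, though as the error below shows, the apps/v1 DaemonSet schema also requires a spec.selector.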

  • David
@sleerssen

Hi @darnone, try adding the following to the cni_genie_network_config:

        "cniVersion": "0.3.1",

We had the same issue moving from 1.15 to 1.16 (although we are using cilium). That's been working as well in 1.17, although we still have an issue with cni-genie leaking IPs (issue #214).
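For reference, a sketch of where that key lands in the genie ConfigMap. The surrounding fields and names here are illustrative, taken from a typical genie conf, not from this cluster; only the "cniVersion" line is the suggested fix:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: genie-config
      namespace: kube-system
    data:
      cni_genie_network_config: |-
        {
          "name": "k8s-pod-network",
          "type": "genie",
          "cniVersion": "0.3.1",
          "default_plugin": "weave"
        }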


shinebayar-g commented Feb 7, 2021

@darnone

add this to spec of genie-network-admission-controller DaemonSet and use apps/v1 apiVersion.

spec:
  selector:
    matchLabels:
      role: genie-network-admission-controller

That should fix your issue, but tbh I'm not sure what this daemonset does. It looks like this daemonset is configured to run on masters only. Since we're on EKS, it doesn't schedule at all. See #215 for my issue.


darnone commented Feb 8, 2021

You never did get an answer to your issue #215. So you moved that DaemonSet altogether and it still works? I see your point: since this is AWS, the control plane is managed, so there is no real way to schedule anything on the master since you cannot access it. Right? Anyway, where exactly do I place the matchLabels? In the spec of the daemonset, or the template, or the service? Like this?

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: genie-network-admission-controller
  namespace: kube-system
spec:
  selector:
    matchLabels:
      role: genie-network-admission-controller
  template:
    metadata:
      labels:
        role: genie-network-admission-controller
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - key: node.kubernetes.io/not-ready
        effect: NoSchedule
        operator: Exists
      - key: CriticalAddonsOnly
        operator: Exists
      nodeSelector:
        node-role.kubernetes.io/master: ""
      hostNetwork: true
      serviceAccountName: genie-plugin
      containers:
        - name: genie-network-admission-controller
          image: quay.io/huawei-cni-genie/genie-admission-controller:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8000


shinebayar-g commented Feb 8, 2021

I assume you're trying to use this one? Take a look at the correct deployment manifests here.

I don't know if we should deploy genie-network-admission-controller on worker nodes.


darnone commented Feb 9, 2021

Thanks for the pointer. Sorry for my typos - I meant to say "So you removed that DaemonSet altogether and it still works?" If it is targeting the master and does not do anything, is it safe to remove it? - David

@shinebayar-g

Sorry for the delayed response. We have the genie-network-admission-controller daemonset, but it doesn't have any running pods, since it's targeting master nodes. I think it's probably safe to remove that daemonset.
