fix registry addon on Docker Driver with Containerd Runtime #10788

Open
medyagh opened this issue Mar 11, 2021 · 3 comments
Labels
area/registry · kind/bug · lifecycle/frozen · priority/backlog

Comments


medyagh commented Mar 11, 2021

A follow-up to PR #10782 and issue #10778.

PR #10782 fixed the VM case by enabling portmap on the bridge, but KIC still has a problem: sometimes kindnet does not get picked up, due to the naming of the CNI config files.

VM:

$ ls /etc/cni/net.d/
1-k8s.conf  87-podman-bridge.conflist

KIC:

docker@minikube:~$ ls /etc/cni/net.d/
1-k8s.conf  100-crio-bridge.conf  200-loopback.conf  87-podman-bridge.conflist
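
(An aside, not in the original report: the runtime takes whichever file sorts first bytewise, so a plain sort shows which config wins.)

docker@minikube:~$ ls -1 /etc/cni/net.d/ | LC_ALL=C sort | head -n1
1-k8s.conf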

The error can be seen in the gopogh report:
https://storage.googleapis.com/minikube-builds/logs/10782/71145bc/Docker_Linux_containerd.html#fail_TestAddons%2fparallel%2fRegistry

Relevant Slack discussion with @afbjorklund:

https://kubernetes.slack.com/archives/C1F5CT6Q1/p1615410452004400

@medyagh added the kind/bug and priority/important-soon labels Mar 11, 2021
@afbjorklund added the area/registry label Mar 12, 2021

afbjorklund commented Mar 12, 2021

I don't get the bridge for KIC when running with CNI; it tries to use kindnet...
But Kubernetes gets started before kindnet's config is written, so there is a chicken-and-egg issue.

minikube start --driver docker --container-runtime containerd

📦 Preparing Kubernetes v1.20.2 on containerd 1.4.3 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring CNI (Container Networking Interface) ...

At boot:

docker@minikube:~$ ls /etc/cni/net.d/
100-crio-bridge.conf  200-loopback.conf  87-podman-bridge.conflist

After a while:

docker@minikube:~$ ls /etc/cni/net.d/
10-kindnet.conflist   200-loopback.conf
100-crio-bridge.conf  87-podman-bridge.conflist
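
(My own way to watch the flip during boot, not from the thread: once kindnet's DaemonSet writes its config, 10-kindnet.conflist sorts first, since '-' comes before '0' bytewise.)

docker@minikube:~$ watch -n 5 'ls /etc/cni/net.d/'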

Even after we remove the unused cri-o CNI, we still have podman CNI
(which isn't used by Kubernetes at all, since it is only used by Podman!)

So it gets the same problem as #10384
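
(If one wanted to clear the leftover podman config by hand, an untested sketch:)

docker@minikube:~$ sudo rm /etc/cni/net.d/87-podman-bridge.conflist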

Currently we don't install /etc/cni/net.d/10-containerd-net.conflist
https://github.com/containerd/containerd/blob/master/script/setup/install-cni
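
(For reference, the linked script writes a conflist roughly along these lines; this is paraphrased from the script's general shape, not copied verbatim, so check the script itself for the authoritative contents:)

docker@minikube:~$ cat <<'EOF' | sudo tee /etc/cni/net.d/10-containerd-net.conflist
{
  "cniVersion": "0.4.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF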


For cri-o we did a hack for this: restarting the CRI after it had started up.
Another hack is to remove the CNI configuration while booting kube-proxy.
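
(A rough sketch of the first hack, assuming a systemd-managed runtime; restarting makes the CRI re-read /etc/cni/net.d and pick up the new first entry:)

docker@minikube:~$ sudo systemctl restart crio    # or: sudo systemctl restart containerd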

The best solution would be if Kubernetes had a flag to choose the CNI config name...

Instead of the current stupid rule:

If there are multiple CNI configuration files in the directory, the kubelet
uses the configuration file that comes first by name in lexicographic order.

https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni

So I guess we have to clear out the common system /etc/cni/net.d directory,
wait for Kubernetes + CNI to be booted, and then restore the old directory again.
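
(A minimal sketch of that idea, untested; the stash path is hypothetical, and it assumes kindnet runs with host networking so it can start before any CNI config exists:)

docker@minikube:~$ sudo mkdir -p /etc/cni/net.d.stash
docker@minikube:~$ sudo mv /etc/cni/net.d/* /etc/cni/net.d.stash/
docker@minikube:~$ until [ -e /etc/cni/net.d/10-kindnet.conflist ]; do sleep 1; done
docker@minikube:~$ sudo mv /etc/cni/net.d.stash/* /etc/cni/net.d/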

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jul 20, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added lifecycle/rotten and removed lifecycle/stale labels Aug 19, 2021
@spowelljr added lifecycle/frozen and removed lifecycle/rotten labels Sep 1, 2021
@spowelljr added priority/backlog and removed priority/important-longterm labels Dec 29, 2021