No context found by kubectl when starting cluster with `kind create cluster` #2174

Comments
I just found a workaround by appending the output of … to …
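The exact command the commenter used is not preserved above; a sketch of this style of workaround, assuming the default cluster name `kind` and that kind and kubectl are on PATH, could look like:

```shell
# Hedged sketch: merge the cluster's kubeconfig into the file kubectl reads,
# then select the context kind creates ("kind-<name>").
cluster_name=kind   # assumed default; not quoted from the thread
if command -v kind >/dev/null 2>&1; then
  kind export kubeconfig --name "$cluster_name"
  kubectl config use-context "kind-$cluster_name"
else
  echo "kind not installed; skipping"
fi
```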
It should happen automatically. Is your kubeconfig not owned by the current user?
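The ownership question above can be checked with a short sketch. The default path `$HOME/.kube/config` is an assumption (kubectl honors `$KUBECONFIG` when set):

```shell
# Resolve the kubeconfig path the way kubectl does by default, then report
# who owns it versus who is running kubectl.
cfg="${KUBECONFIG:-$HOME/.kube/config}"
if [ -f "$cfg" ]; then
  # GNU stat uses -c %U; BSD/macOS stat uses -f %Su, so try both.
  owner=$(stat -c %U "$cfg" 2>/dev/null || stat -f %Su "$cfg")
  echo "kubeconfig: $cfg (owned by $owner, current user $(id -un))"
else
  echo "no kubeconfig found at $cfg"
fi
```

If the owner is root but kubectl runs as a regular user, that mismatch explains the missing context.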
We should also print an error or warning if this happens; if kind doesn't, that's a bug in itself. You would still need to correct the permissions on the file, though, or specify a different file to use.
In my case …
What about the directory? Are there any stray lock files?
I am currently executing all docker-related commands as the root user, so this should not be the problem.
Ah, that explains it. Kubernetes has rules about where kubeconfig is written; it defaults to a path under the current user's home directory. If you run … you can do one of: …
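The options being hinted at can be sketched as follows; the paths and cluster name are assumptions based on kind's defaults, not quotes from the thread:

```shell
# Sketch of two possible fixes when kind was run as root but kubectl runs
# as a normal user (paths assumed from the defaults).

# Option 1: point kubectl at the kubeconfig that root's kind wrote
# (requires read access to that file).
export KUBECONFIG=/root/.kube/config

# Option 2: re-export the kubeconfig as your own user so it lands in your
# home directory (needs kind on PATH, so shown commented out):
# kind export kubeconfig --name kind

echo "kubectl will now read: $KUBECONFIG"
```

Option 2 is usually the cleaner fix, since it leaves root's files alone.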
I solved the issue by adding my user to the docker group (to run docker without root privileges) and running `kubectl config set-context kind-kind`.
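The group-membership half of that fix can be checked with a short sketch; the `usermod` line needs sudo and a fresh login, so it is only shown commented out:

```shell
# Check whether the current user is already in the docker group.
if id -nG "$(id -un)" | grep -qw docker; then
  in_docker_group=yes
else
  in_docker_group=no
  # sudo usermod -aG docker "$USER"   # then log out and back in
fi
echo "in docker group: $in_docker_group"
```

Note that `kubectl config set-context` edits a context entry; to switch the active context, `kubectl config use-context kind-kind` is the command that does the switching.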
After starting the kind cluster with `kind create cluster`, I tried connecting to the cluster with `kubectl cluster-info --context kind-kind`. Unfortunately, kubectl gave me the following error. Inspecting the contexts available to kubectl, I only got my minikube instance. The error persisted even after removing minikube and clearing $HOME/.kube/config.

I am using:
- Ubuntu 20.04
- kind version 0.10.0
- kubectl client version 1.20.5
- kubectl server version 1.20.2
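A short diagnostic along the lines of this report would first confirm which file kubectl will actually read; the kubectl call itself is commented out since it needs a cluster:

```shell
# Resolve the kubeconfig path the same way kubectl does by default:
# $KUBECONFIG if set, otherwise $HOME/.kube/config.
cfg="${KUBECONFIG:-$HOME/.kube/config}"
echo "kubectl will read: $cfg"
if [ -s "$cfg" ]; then
  echo "kubeconfig exists and is non-empty"
else
  echo "kubeconfig is missing or empty"
fi
# With kubectl installed, list contexts to see whether kind-kind appears:
# kubectl config get-contexts
```

If the file is missing or empty right after `kind create cluster` succeeded, kind most likely wrote its kubeconfig somewhere else (for example, under root's home).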