none: coredns CrashLoopBackOff: dial tcp ip:443: connect: no route to host #4350
Comments
This message isn't normal either:
Some other folks have seen similar coredns failures outside of minikube when the apiserver isn't available: kubernetes/kubernetes#75414. Why wouldn't the apiserver be available, though? Here's one possible hint from kube-proxy:
This may be a red herring, but do you mind seeing what
@fabstao I was facing the exact same issue on my CentOS VM. I got it fixed by following the instructions in this comment: kubernetes/kubeadm#193 (comment) to flush the iptables rules.
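For reference, the fix described in that linked comment amounts to flushing the node's iptables rules so kubelet and kube-proxy can recreate them. A rough sketch (this drops all firewall rules on the node, so treat it as a workaround, not a permanent fix):

```sh
# Stop the components that manage the rules first
systemctl stop kubelet
systemctl stop docker

# Flush all rules; kube-proxy's NAT rules live in the nat table
iptables -F
iptables -t nat -F
iptables -t mangle -F
iptables -X

# Restart so the rules are recreated cleanly
systemctl start docker
systemctl start kubelet
```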
Good to know. We should update the error message for this failure to mention flushing iptables then. Thanks!
That didn't solve the problem...
I hit exactly the same issue, and it was solved by @albinsuresh's reply.
Unfortunately, the fix in @albinsuresh's reply is a workaround. Does anyone know what the true fix is if you're running a customized local firewall? I'll do some digging and post again if I find it.
@slalomnut could you please provide logs from the newest minikube version? The latest version provides better logging.
And I wonder: has anyone checked whether this comment helps them (if the issue still exists with 1.3.1)? kubernetes/kubeadm#193 (comment)
I can confirm that it was a firewall issue on my side.
This seems solved, but I will leave it open for anyone else who runs into this.
Use this command:
Upgraded to the latest v1.5.1 and I'm seeing the same issue, but due to a different error now. It happens only on v1.4.0 and above; it works when I switch back to v1.3.1 and use
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
In my case it was an issue with the dashboard. If you have firewalld enabled, you can add the docker0 bridge interface to the trusted zone, which should allow docker containers to communicate with the host.
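A sketch of what that looks like with firewall-cmd (assuming docker0 is the bridge your containers actually use):

```sh
# Put the docker0 bridge in the trusted zone so container-to-host traffic is allowed
firewall-cmd --permanent --zone=trusted --add-interface=docker0
firewall-cmd --reload
```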
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Just adding my experience. I had the same problem. In my case it was enough to enable the masquerading option on the default host outbound interface, and then communication started to work.
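For anyone trying the same thing with firewalld, a minimal sketch (the `public` zone is an assumption; check which zone your outbound interface is actually in first):

```sh
# Find the zone of the outbound interface
firewall-cmd --get-active-zones

# Enable masquerading (NAT) in that zone, e.g. public
firewall-cmd --permanent --zone=public --add-masquerade
firewall-cmd --reload
```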
I have this problem in a prod environment after running for some days: servers in pods cannot access the outer network. Executing the commands below resolved it. I'd like to know what causes this, and how to prevent it.
I was facing the same problem. One of the worker nodes had firewalld running; stopping it resolved the issue.
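For reference, that workaround is simply stopping the service. Note this disables the node's firewall entirely, which the next comments rightly push back on:

```sh
systemctl stop firewalld
# Optionally keep it off across reboots; only do this if you accept
# running the node without a host firewall
systemctl disable firewalld
```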
Is there a way to fix this permanently, without disabling firewalld or resorting to a workaround?
How can flushing or disabling the firewall be an accepted solution? This is disastrous. Please provide details on which firewall ports need to be opened, and whether any Kubernetes-related interfaces (docker, flannel, ...) need to be assigned to specific zones, in order for CoreDNS to be able to connect to the API.
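For reference, the kubeadm documentation lists the ports the control plane and nodes need. A firewalld sketch for a control-plane node might look like the following; the 8472/udp entry is an assumption that applies to flannel's VXLAN backend, and other CNIs use different ports:

```sh
# Control-plane ports per the kubeadm "ports and protocols" docs
firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd client/peer
firewall-cmd --permanent --add-port=10250/tcp       # kubelet API

# CNI overlay traffic (flannel VXLAN; adjust for your CNI)
firewall-cmd --permanent --add-port=8472/udp

# Allow pod/service NAT
firewall-cmd --permanent --add-masquerade
firewall-cmd --reload
```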
/reopen |
@tacerus: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Try:
Yes, even after 6 years, this issue still appears from time to time.
I had this on Ubuntu aarch64 (5.15.0-1045-oracle). I also checked https://microk8s.io/docs/troubleshooting#common-issues and tried similar steps, but it only started working after I did something similar to #4350 (comment). I think I had some issue with iptables, but I'm not sure about that.
The exact command to reproduce the issue:
kubectl get pods --all-namespaces
The full output of the command that failed:
The output of the minikube logs command:
The operating system version: