
kubectl: Unable to connect to the server: dial tcp i/o timeout #1234

Closed
norbert-yoimo opened this issue Mar 10, 2017 · 20 comments
Labels
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
triage/needs-information: Indicates an issue needs more information in order to work on it.

Comments

@norbert-yoimo

Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug report

Minikube version (use minikube version): minikube version: v0.17.1
Environment:

  • OS (e.g. from /etc/os-release): Arch linux
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): virtualbox
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): minikube-v1.0.7.iso
  • Install tools:
  • Others:

What happened:
$ ./minikube-linux-amd64 start

Starting local Kubernetes cluster...
Starting VM...
SSH-ing files into VM...
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubectl is now configured to use the cluster.

$ kubectl get pods

Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout

What you expected to happen:
Get an empty list of pods

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:
Output of minikube ssh systemctl status localkube:

● localkube.service - Localkube
   Loaded: loaded (/lib/systemd/system/localkube.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2017-03-06 14:09:08 UTC; 14min ago
     Docs: https://github.com/kubernetes/minikube/tree/master/pkg/localkube
 Main PID: 3271 (localkube)
    Tasks: 16 (limit: 4915)
   Memory: 141.8M
      CPU: 1min 54.669s
   CGroup: /system.slice/localkube.service
           ├─3271 /usr/local/bin/localkube --generate-certs=false --logtostderr=true --enable-dns=false --node-ip=192.168.99.100 --apiserver-name=minikubeCA
           └─3350 journalctl -k -f

Mar 06 14:19:12 minikube localkube[3271]: I0306 14:19:12.296016    3271 replication_controller.go:322] Observed updated replication controller kubernetes-dashboard. Desired pod count change: 1->1
Mar 06 14:19:12 minikube localkube[3271]: I0306 14:19:12.296150    3271 replication_controller.go:322] Observed updated replication controller kube-dns-v20. Desired pod count change: 1->1
Mar 06 14:19:17 minikube localkube[3271]: I0306 14:19:17.895294    3271 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/8357ed5c-0276-11e7-9885-0800272e2447-default-token-77nfr" (spec.Name: "default-token-77nfr") pod "8357ed5c-0276-11e7-9885-0800272e2447" (UID: "8357ed5c-0276-11e7-9885-0800272e2447").
Mar 06 14:20:30 minikube localkube[3271]: I0306 14:20:30.883835    3271 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/836b1214-0276-11e7-9885-0800272e2447-default-token-77nfr" (spec.Name: "default-token-77nfr") pod "836b1214-0276-11e7-9885-0800272e2447" (UID: "836b1214-0276-11e7-9885-0800272e2447").
Mar 06 14:20:47 minikube localkube[3271]: I0306 14:20:47.820727    3271 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/8357ed5c-0276-11e7-9885-0800272e2447-default-token-77nfr" (spec.Name: "default-token-77nfr") pod "8357ed5c-0276-11e7-9885-0800272e2447" (UID: "8357ed5c-0276-11e7-9885-0800272e2447").
Mar 06 14:21:02 minikube localkube[3271]: apply entries took too long [11.765144ms for 1 entries]
Mar 06 14:21:02 minikube localkube[3271]: avoid queries with large range/delete range!
Mar 06 14:21:09 minikube localkube[3271]: E0306 14:21:09.826764    3271 repair.go:132] the node port 30000 for service kubernetes-dashboard/kube-system is not allocated; repairing
Mar 06 14:21:57 minikube localkube[3271]: I0306 14:21:57.895042    3271 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/836b1214-0276-11e7-9885-0800272e2447-default-token-77nfr" (spec.Name: "default-token-77nfr") pod "836b1214-0276-11e7-9885-0800272e2447" (UID: "836b1214-0276-11e7-9885-0800272e2447").
Mar 06 14:21:58 minikube localkube[3271]: I0306 14:21:58.899802    3271 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/8357ed5c-0276-11e7-9885-0800272e2447-default-token-77nfr" (spec.Name: "default-token-77nfr") pod "8357ed5c-0276-11e7-9885-0800272e2447" (UID: "8357ed5c-0276-11e7-9885-0800272e2447").

If I SSH in, I can see that there is one failed systemd unit; I have no idea whether it matters:

$ systemctl status systemd-networkd-wait-online.service
● systemd-networkd-wait-online.service - Wait for Network to be Configured
   Loaded: loaded (/lib/systemd/system/systemd-networkd-wait-online.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Mon 2017-03-06 14:09:08 UTC; 18min ago
     Docs: man:systemd-networkd-wait-online.service(8)
  Process: 3241 ExecStart=/lib/systemd/systemd-networkd-wait-online (code=exited, status=1/FAILURE)
 Main PID: 3241 (code=exited, status=1/FAILURE)

Mar 06 14:07:08 minikube systemd[1]: Starting Wait for Network to be Configured...
Mar 06 14:07:08 minikube systemd-networkd-wait-online[3241]: ignoring: lo
Mar 06 14:09:08 minikube systemd[1]: systemd-networkd-wait-online.service: Main process exited, code=exited, status=1/FAILURE
Mar 06 14:09:08 minikube systemd[1]: Failed to start Wait for Network to be Configured.
Mar 06 14:09:08 minikube systemd[1]: systemd-networkd-wait-online.service: Unit entered failed state.
Mar 06 14:09:08 minikube systemd[1]: systemd-networkd-wait-online.service: Failed with result 'exit-code'.
@r2d4 r2d4 added the kind/bug Categorizes issue or PR as related to a bug. label Mar 12, 2017
@donspaulding

@norbert-yoimo I think I'm seeing an issue similar to yours. Is the kube API server flapping up and down? That is, are you able to run kubectl get pods at some times, and then within a few seconds it returns the dial tcp error?
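
For example, a quick loop like the sketch below (the 5-second interval and timeout are arbitrary) will show whether connectivity comes and goes:

# Poll the API server repeatedly; a flapping server alternates between OK and FAILED.
while true; do
  date
  kubectl get pods --request-timeout=5s && echo OK || echo FAILED
  sleep 5
done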

@norbert-yoimo
Author

I just tried a few times, but I could never get it to connect.

@aaron-prindle
Contributor

Can you post the output of minikube ssh journalctl and minikube logs?
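
For reference, both outputs can be captured to files for attaching here, for example (the file names are arbitrary):

$ minikube ssh "journalctl --no-pager" > journalctl.txt
$ minikube logs > minikube.log.txt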

@bvdeenen

bvdeenen commented Mar 20, 2017

Hi, I have the same issue

minikube.log.txt
journalctl.txt

My OS is Void Linux, with VirtualBox 5.1.14.

I also checked that I can reach the outside world from the VM (via minikube ssh) and from a freshly started Alpine container:

docker run -ti alpine /bin/sh

From both the VM and that container, this succeeds:

curl https://www.xs4all.nl/index.html

@gtirloni
Contributor

The failed systemd-networkd-wait-online.service is a red herring for this issue. See #1277.

@norbert-yoimo
Author

Tried again with 0.18.0; still the same problem. Attaching the requested files:

journalctl
logs

@vadimio

vadimio commented Apr 20, 2017

Same problem

@RaananHadar

Same problem on Ubuntu 17.04.

@david1983

david1983 commented May 5, 2017

@RaananHadar I found the solution for Ubuntu 17.04: the problem is the Docker version in the Ubuntu repo. You need to uninstall docker and docker-engine and install docker-ce.
Here are the steps I followed:

$ sudo apt-get remove docker docker-engine

Then follow the steps to install docker-ce, but change the sources from zesty to xenial, because a repo for zesty does not exist yet (the tricky bit).

$ sudo apt-get update

$ sudo apt-get install \
    linux-image-extra-$(uname -r) \
    linux-image-extra-virtual

$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

$ sudo apt-key fingerprint 0EBFCD88
The fingerprint should be: 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88.

IMPORTANT: add the repository using xenial instead of zesty:

$ sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable"

$ sudo apt-get update

$ sudo apt-get install docker-ce
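
Before starting minikube, it may be worth confirming that docker-ce is actually the version now installed and running, for example:

$ docker --version
$ sudo systemctl status docker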

At this point, start minikube:
$ minikube start

and check the status:
$ minikube status
minikubeVM: Running
localkube: Running

Now you should be able to run kubectl cluster-info

@mdmarek

mdmarek commented May 23, 2017

@david1983 I tried the instructions above but in the end got the same network error; like @RaananHadar, I was trying this on Ubuntu 17.04.

journalctl
logs

@jpmolinamatute

same problem here :-(

@dfredell

For others that run into this problem: #1224 (comment) solved it for me.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 1, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 31, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@editaxz

editaxz commented Apr 22, 2019

/reopen
Hi, I had this issue:
2019-04-21-205332_928x83_scrot

Then I checked minikube: "minikube is not running"
2019-04-21-205450_738x169_scrot

Then I restarted minikube and it started to work.
2019-04-21-205518_927x533_scrot

I hope this is useful to others.

@jturi

jturi commented Feb 4, 2020

In my case, the kubectl context had not been set via use-context:

Use this if you have kubernetes running with Docker for Desktop:

kubectl config use-context docker-for-desktop

Use this if you have kubernetes running with minikube:

kubectl config use-context minikube
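
To see which contexts exist and which one kubectl is currently using before switching:

$ kubectl config get-contexts
$ kubectl config current-context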

If you run into problems with minikube, the best approach is to remove it and start over again:

minikube stop; minikube delete
rm /usr/local/bin/minikube
rm -rf ~/.minikube
minikube start

@Gauravjaitly

In my case, minikube wasn't active; I had to start it with "minikube start".

@priyawadhwa priyawadhwa reopened this Sep 30, 2020
@priyawadhwa

Hey @editaxz, did restarting the cluster via

minikube delete
minikube start

fix it for you? I'd suggest upgrading to the latest version of minikube, v1.13.1, as well.
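
After the restart, something like the following should confirm whether the cluster is reachable again:

$ minikube status
$ kubectl get nodes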

@priyawadhwa priyawadhwa added the triage/needs-information Indicates an issue needs more information in order to work on it. label Sep 30, 2020
@tstromberg tstromberg changed the title Unable to connect to the server: dial tcp i/o timeout kubectl: Unable to connect to the server: dial tcp i/o timeout Oct 7, 2020
@tstromberg
Contributor

Closing, as this is due to kubectl talking to a stopped cluster, but I opened #9410 to make this less confusing for users.
