The connection to the server 172.17.4.101:443 was refused #479
Comments
@dixudx It can take some time for the services to come up. Can you get the kubelet log with journalctl -u kubelet?
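For a Vagrant setup like this one, a sketch of pulling that log from the host (the node name c1 is taken from later in this thread):

    vagrant ssh c1
    # inside the VM: show the most recent kubelet output without paging
    sudo journalctl -u kubelet --no-pager | tail -n 50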
Getting the same error when following the tutorial.
May 13 14:10:11 localhost kubelet-wrapper[1713]: image: using image from file /usr/share/rkt/stage1-fly.aci
May 13 14:10:11 localhost kubelet-wrapper[1713]: run: open /usr/share/rkt/stage1-fly.aci.asc: no such file or directory
May 13 14:10:11 localhost systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 14:10:11 localhost systemd[1]: kubelet.service: Unit entered failed state.
May 13 14:10:11 localhost systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 14:10:21 localhost systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
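That open /usr/share/rkt/stage1-fly.aci.asc: no such file or directory line points at a missing signature file for the fly stage1 image; a quick way to confirm (a sketch):

    # the .aci ships with the OS image; the .asc is the piece the log says is missing
    ls -l /usr/share/rkt/stage1-fly.aci /usr/share/rkt/stage1-fly.aci.asc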
@dixudx: I had the same problem today even though my cluster configuration (running in AWS) was working fine. It seems like a problem with the latest CoreOS alpha AMI (CoreOS-alpha-1045.0.0, ami-46ff772a). Downgrading to the previous AMI (CoreOS-alpha-1032.1.0, ami-8d74fde1) did the trick for me. Since you are using the same CoreOS version I was, maybe you are experiencing the same problem. Try using an earlier version.
Running this in Vagrant, same issue with alpha-1032.1.0.
This is due to an issue in the latest CoreOS alpha: coreos/bugs#1282. @AlmaasAre Are you seeing the same error output in v1032.1.0? Can you confirm the OS version by running …
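On CoreOS the release can be read from /etc/os-release, for example:

    # prints NAME, VERSION (e.g. 1032.1.0), and related build info
    cat /etc/os-release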
The version of CoreOS is …
@philips I destroyed all the instances and rebuilt them. The same errors still occurred.
Actually …
@aaronlevy I have the same issue with alpha-1032.1.0.
Edit: I am using Vagrant and the problem is the same with 1029.0.0 & 1032.0.0.
My failure was due to launching from Windows: DOS carriage returns were corrupting the install scripts. I ran dos2unix on the files and had no failed services on vagrant up. However, none of my services actually started, but I think that is unrelated to this issue.
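For anyone hitting the same thing, a sketch of the conversion (which files need it depends on the repo; here every shell script in the checkout is converted):

    # strip DOS carriage returns from all shell scripts under the current directory
    find . -name '*.sh' -print0 | xargs -0 dos2unix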
The beta channel (1010.3.0) works for me. Alpha doesn't.
@dixudx Works with 1032.1.0 now, just took a while to boot up :)
The same issue still occurred on the latest version:
@AlmaasAre What do you mean by "taking a while to boot up" on version …?
From the above log, the service …
@dixudx Sorry, the kubelet service took a while to start. Suddenly it started working again; might've been a restart of the VM that solved it, not sure. Works with 1032.1.0 now.
Same issue here:
Also still an issue with the newer box 1053.2.0:
I tried to vagrant up with 1010.5.0 and 1053.2.0 ... no luck. Running kubectl get nodes from the local box fails, and I cannot find the error. journalctl -f shows a loop while downloading/trying to start flannel.
This thread is now touching on a few different problems, so I'll address each in turn; if you are still having an issue after this, please open a new ticket.

Originally, there was an error where the rkt fly stage1 could not be found (coreos/bugs#1282) -- this should be resolved in all alpha/beta/stable channels.

The "update-engine.service" failed unit is there because we disable auto-updates, to slightly improve the user experience by not having a server reboot immediately on first boot. Nothing is actually broken.

@wmangano would you mind opening a new issue if you are still having problems? It might have to do with flannel not being able to contact etcd -- but if it's still an issue we can dig further.
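For reference, disabling auto-updates on CoreOS is typically done by masking the update units; a sketch of the idea (the repo's actual cloud-config may express it differently):

    # stop and mask the auto-update machinery so a fresh box doesn't reboot mid-install
    sudo systemctl stop update-engine.service locksmithd.service
    sudo systemctl mask update-engine.service locksmithd.service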
For those who just stumbled onto this issue: the downloads do often take a while, as discussed above. To confirm that your downloads are still in transit, you can:
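A sketch of the kind of checks that help here, assuming a Vagrant node named c1:

    vagrant ssh c1
    # watch image fetches and unit restarts scroll by in the journal
    journalctl -f
    # list the container images whose download has already completed
    sudo rkt image list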
Just spent a bunch of time hunting this down as well. Could some output be put onto stdout/console so that this isn't such a mystery to the uninitiated? I think it would save a lot of people some trouble and hunting. Thanks.
Another option might just be to put a blocking loop at the end of the vagrant script which waits for the api-server to be available (and outputs something like "waiting for api...").
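A minimal sketch of such a loop, assuming the controller endpoint from this issue's title (172.17.4.101:443):

    # block until the api-server answers on HTTPS, printing progress while we wait
    until curl -k -s -o /dev/null https://172.17.4.101:443/; do
      echo "waiting for api..."
      sleep 5
    done
    echo "api server is up"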
Opened #806 to track this |
I think a lot of users are using coreos-kubernetes, so why doesn't somebody there update the configs to help us?
I followed the doc Kubernetes Installation with Vagrant & CoreOS to deploy a Kubernetes env. After the deployment using vagrant up, I can't access the cluster. After ssh'ing into c1, I can see that no service is running on port 443.
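For example, two checks from inside c1 (172.17.4.101 is the controller endpoint from the title; a sketch):

    # no output here means nothing is listening on 443
    sudo ss -tlnp | grep ':443'
    # this fails with "connection refused" while the api-server is down
    curl -k https://172.17.4.101:443/version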
And I've already updated the git repo to the latest commit 03f86ac.