Killing machine doesn't migrate the pod #6
Ha. Ok, so I stand corrected. It does work, it just takes really long: it took 13 minutes to figure out that the machine was gone. That seems far too long, but I imagine there is some way I can override the default heartbeat timeout. You can close this issue if you'd like.
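(For anyone landing here later, a hedged sketch: in later Kubernetes releases the node-failure timeouts are flags on kube-controller-manager. The flag names below come from those later releases and may not exist in the 0.12.x binaries discussed in this thread.)

```sh
# Sketch only: these kube-controller-manager flags control how quickly a dead
# node is marked NotReady and how long its pods wait before being evicted.
# Names are from later Kubernetes releases, not necessarily 0.12.x.
kube-controller-manager \
  --node-monitor-grace-period=40s \
  --pod-eviction-timeout=5m
```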
@ehacke I believe this is a Kubernetes issue that I've discussed earlier, but I've tried to look for it without success. Can you open a new issue there and ref this one, please?
@pires Yes, will do.
Thank you, @ehacke. It seems I was right. Let's keep in touch and close this whenever the fix is merged.
@pires yeah, it looks like they are fixing it fairly quickly. Not certain whether my original 13-minute pod-rejection issue is real/reproducible, but it does look like they are fixing the erroneous pod status reporting.
I'm running your elasticsearch cluster project on top of your kubernetes coreos vagrant project.
First of all, thanks for both of these, they work great and have saved me a lot of work.
However, it looks like if I deploy the elasticsearch cluster on top of 1 master and 4 minions, and then kill one of the machines running an elasticsearch node, kubernetes does not migrate the pod to another machine. In fact, it never even seems to notice that the machine is dead. If I run
kubectl get pods
it still shows the pod as "running", even though the machine is gone. I've tried updating to kubernetes 0.12.1 and killing different elasticsearch machines, and it doesn't seem to make a difference.
Also, if I just kill the docker container on the machine, but leave the machine up, kubernetes will notice and move the pod as expected.
Any insight into why this is happening? Am I missing something?
Or should I take this up with the kubernetes team?
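(A quick way to see what the apiserver believes about the dead machine, for anyone reproducing this; a sketch against a live cluster, and output details may vary by release. `<node-name>` is a placeholder.)

```sh
# Check node health as seen by the apiserver; a killed machine should
# eventually flip to NotReady, and only then are its pods rescheduled.
kubectl get nodes

# Inspect the node's conditions and last heartbeat (replace <node-name>).
kubectl describe node <node-name>
```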