nginx-consul not running on kube workers #1346
Comments
@BrianHicks Is this related to the issue you saw with k8s stopping non-k8s containers?
Yes, and exactly the same solution. I thought we had fixed this!
I see this happening as well. On latest master I had to manually start the container after the worker came up.
It doesn't look like it's k8s that is stopping the service. Docker is being stopped as part of the flannel install, and so nginx-consul dies.
I don't understand why systemd doesn't restart it, though.
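For anyone else chasing this, the ordering should be visible in the journal on an affected worker. These are just illustrative inspection commands, assuming the stock `nginx-consul` unit name:

```sh
# Interleave the Docker and nginx-consul unit logs around the time of the flannel play
sudo journalctl -u docker -u nginx-consul --since "1 hour ago" --no-pager

# Check the current state of the unit and of the container it wraps
sudo systemctl status nginx-consul
sudo docker ps -a --filter name=nginx-consul
```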
Ah ha! There it is! That explains why I couldn't find this. I thought it was part of kubelet starting up. Would it be possible to just remove Flannel? Kubernetes works without it, doesn't it?
My bad, it's not debug mode. The man page says
Whoops! I commented on the wrong issue. The above comment should go in #1367.
@BrianHicks As far as I know, Kubernetes will require some kind of custom setup to enable ip-per-pod. Will it work if we remove Flannel without implementing one of the other options? http://kubernetes.io/docs/admin/networking/
It actually should. I'll investigate and report back.
Since we don't yet have an explanation for why systemd does not restart the service, we have a workaround implemented in #1394. It will be removed if we end up using an alternative to Flannel.
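For reference, a minimal sketch of that kind of workaround, run on each kube worker after the flannel role restarts Docker (this is an assumption about the approach, not necessarily what #1394 actually does):

```sh
# Docker is restarted by the flannel install, which takes the container down ...
sudo systemctl restart docker
# ... so explicitly bring the nginx-consul unit back up and verify it
sudo systemctl start nginx-consul
sudo systemctl is-active nginx-consul
```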
`ansible --version`: 1.9.4
`python --version`: 2.7.6
`terraform version`: v0.6.11

After a fresh build, the consul Distributive Mantl Health Checks are failing on all kube workers.
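One way to list the failing checks from a worker is Consul's health API (an illustrative query; the actual failing-check output from this build isn't reproduced here):

```sh
# List every check currently in the critical state, as seen by the local Consul agent
curl -s http://localhost:8500/v1/health/state/critical | python -m json.tool
```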
On one worker:
Restart is set to always:
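The policy can be confirmed from systemd itself (illustrative; the original unit snippet isn't reproduced here):

```sh
# Show the restart policy systemd has loaded for the unit
systemctl show nginx-consul -p Restart
# Expected output: Restart=always
```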
Starting nginx-consul manually on each worker seems to resolve the problem.
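For example, either per worker or across all of them with an ad-hoc Ansible run (the `kube-worker` group name is an assumption about the inventory):

```sh
# On a single worker:
sudo systemctl start nginx-consul

# Or across all workers at once (group name is hypothetical):
ansible kube-worker -b -m service -a "name=nginx-consul state=started"
```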