One of my Ingresses/ALBs returned 504 when one of my EKS nodes went down #993
Comments
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I had a big issue today when, all of a sudden, one of my EKS nodes stopped working (AWS even marked the instance for retirement).
Fortunately I had set up cluster autoscaling, so another node was added to the cluster and new pods were created. Sadly, one of my load balancers kept returning 504 while the old pods were still there (in Unknown state), until I stopped the troubled node (EC2 instance).
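For what it's worth, pods on a node that becomes unreachable stay in Unknown state until the default 300-second toleration for node.kubernetes.io/unreachable expires, which matches the window in which a stale backend can still receive traffic. A minimal sketch of tightening that window on a Deployment (the name is hypothetical):

```yaml
# Hypothetical Deployment snippet: evict pods from a failed node sooner
# than the default 300s, shrinking the window of stale endpoints.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app   # hypothetical name
spec:
  template:
    spec:
      tolerations:
        - key: node.kubernetes.io/unreachable
          operator: Exists
          effect: NoExecute
          tolerationSeconds: 30   # default is 300
        - key: node.kubernetes.io/not-ready
          operator: Exists
          effect: NoExecute
          tolerationSeconds: 30   # default is 300
```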
I tried reading the ALB logs and the ingress controller logs, with no apparent luck finding clues about what exactly happened. The funny thing is that I have six Ingresses: four of them point to services whose pods run only on the healthy nodes, and two point to services with pods on both the healthy nodes and the failing node, yet only one of those two was returning 504.
On another topic, the pod with the ingress controller was also on the failing node. Would it be a good idea to use a DaemonSet for it? After today's outage I'm starting to think about hosting all my services as DaemonSets, to ensure no downtime when a single node fails.
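A lighter alternative to a DaemonSet, assuming the goal is just to survive a single node failure, is to run two or more controller replicas pinned to different nodes with pod anti-affinity; a sketch (names, labels, and image are illustrative):

```yaml
# Illustrative sketch: two controller replicas forced onto different nodes,
# so losing one node never removes every controller pod at once.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-controller        # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ingress-controller
  template:
    metadata:
      labels:
        app: ingress-controller
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: ingress-controller
              topologyKey: kubernetes.io/hostname   # at most one replica per node
      containers:
        - name: controller
          image: example.com/ingress-controller:latest   # illustrative image
```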
If you need more info I can send you some logs I collected privately, or try to redact the sensitive information, but as far as I can tell there's no clue as to what happened.