AWS ELB drops node out of service after a deploy #68631

Open
ricktbaker opened this Issue Sep 13, 2018 · 2 comments

ricktbaker commented Sep 13, 2018

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

I have a Service of type LoadBalancer in AWS, all installed via Helm. Everything works as it should: the pod comes up as InService on the ELB as expected. Periodically, when I do a helm upgrade, a new pod is created and it goes out of service. I can watch tcpdump and see the ICMP healthcheck request going to the pod, but it is never returned, so the pod is set to OutOfService on the ELB. If I kill the pod and a new one is created, it starts responding to the healthcheck request and comes back up.

What you expected to happen:

I would expect the replacement pod to respond to healthchecks and stay in service on the ELB. The readiness check works as expected: I can watch the logs in the pod and see the readiness check come in and execute, so the pod itself is working; it's just not responding to the healthchecks from the ELB.
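For anyone trying to confirm the same split between the in-cluster readiness probe and the ELB health check, a rough debugging sketch follows. The Service name, interface, NodePort, and load balancer name are placeholders, not values taken from this report:

```shell
# Find the NodePort the ELB health check targets (assumes a Service
# named "my-service" -- substitute your own release's Service name).
kubectl describe svc my-service | grep -i nodeport

# On the affected node, watch for health-check traffic arriving on that
# NodePort (replace 31234 with the NodePort printed above).
sudo tcpdump -ni eth0 port 31234

# Compare with what the classic ELB itself reports for each instance
# (replace the name with the one Kubernetes created for the Service).
aws elb describe-instance-health --load-balancer-name a1b2c3d4e5f6
```

If the readiness probe succeeds in the pod logs while tcpdump shows unanswered health-check traffic on the NodePort, that matches the behavior described above.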

How to reproduce it (as minimally and precisely as possible):

  • Create AWS EKS cluster
  • Deploy a pod and a Service of type LoadBalancer with Helm
  • Watch pod come up as in service on ELB
  • Change something so the pod gets recreated when a helm upgrade is applied
  • Watch pod go out of service on the ELB
  • Kill pod, let it be recreated by the deployment, and watch pod come back into service on the ELB.
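The steps above can be sketched without Helm as well, since the trigger is just a pod replacement behind an ELB-backed Service. A minimal reproduction, with illustrative names and image (not taken from this report):

```shell
# Create a Deployment and expose it through an ELB-backed Service.
kubectl create deployment web --image=nginx
kubectl expose deployment web --type=LoadBalancer --port=80

# Wait for the ELB hostname to appear, then watch the pod register
# as InService on the ELB in the AWS console.
kubectl get svc web -w

# Trigger a rolling pod replacement (stand-in for the helm upgrade
# step) and watch whether the new pod goes OutOfService on the ELB.
kubectl set image deployment/web nginx=nginx:1.15
```

Killing the replacement pod afterwards (step 6) can be done with kubectl delete pod, letting the Deployment recreate it.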

Anything else we need to know?:

Environment: Latest AWS EKS

ricktbaker commented Sep 13, 2018

/sig aws

@k8s-ci-robot k8s-ci-robot added sig/aws and removed needs-sig labels Sep 13, 2018


tndhl commented Sep 18, 2018

+1, but without helm
