
Pod readiness gates #955

Merged

Conversation

devkid (Contributor) commented Jun 18, 2019

This adds a feature to set the status of pod readiness gates on pods that are registered with an ALB. See added documentation for details of implementation.

Todo:

  • fix up unit tests, add test cases for pod readiness gates
  • test manually

This closes #905.
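
For context, a pod opts into a readiness gate by listing a condition type under spec.readinessGates, and the kubelet keeps the pod NotReady until a controller patches a matching condition into the pod's status. The sketch below is purely illustrative and uses a hypothetical condition type; the condition type and behavior actually used by this PR are described in the added documentation.

```go
// Illustrative sketch only (hypothetical condition type, not this PR's actual API):
// how a readiness gate is declared on a pod and how a controller would flip the
// corresponding condition to True once the registered ALB target is healthy.
package readinessgate

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// albHealthyCondition is a hypothetical condition type used for illustration.
const albHealthyCondition corev1.PodConditionType = "example.com/alb-target-healthy"

// podWithReadinessGate returns a pod that stays NotReady until a controller
// sets the albHealthyCondition condition to True in its status.
func podWithReadinessGate() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "app-pod"},
		Spec: corev1.PodSpec{
			ReadinessGates: []corev1.PodReadinessGate{
				{ConditionType: albHealthyCondition},
			},
		},
	}
}

// markTargetHealthy appends the condition a controller would patch into the
// pod status once the ALB reports the target as healthy. Per the later commits
// on this PR, the ALB target health description is surfaced in the Reason field.
func markTargetHealthy(pod *corev1.Pod) {
	pod.Status.Conditions = append(pod.Status.Conditions, corev1.PodCondition{
		Type:   albHealthyCondition,
		Status: corev1.ConditionTrue,
		Reason: "TargetHealthy",
	})
}
```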

k8s-ci-robot (Contributor) commented Jun 18, 2019

Hi @devkid. Thanks for your PR.

I'm waiting for a kubernetes-sigs or kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

fejta-bot commented Sep 16, 2019

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

devkid (Contributor, Author) commented Sep 17, 2019

/remove-lifecycle stale

fejta-bot commented Dec 16, 2019

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

lelikg commented Jan 13, 2020

+1 on this feature, we'd love to get it

fejta-bot commented Feb 12, 2020

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

runningman84 commented Feb 12, 2020

/remove-lifecycle

runningman84 commented Feb 12, 2020

/remove-lifecycle rotten

…eady` when it's healthy and taking traffic from ALB
@devkid devkid force-pushed the devkid:feature/pod-readiness-gate branch from d8befc9 to b134750 Feb 19, 2020
devkid (Contributor, Author) commented Feb 27, 2020

> 1. should we simplify to use a single readiness gate instead of one per targetGroup? (personally I favor your current approach)

I think we should start with the per-target-group approach. We can add the per-ingress functionality later if required.

> 2. should we have a timeout setting like 5 minutes? If the pod didn't become healthy within 5 minutes, unblock the readiness gate (to prevent a deployment lock caused by wrong settings, e.g. security groups).

I don't think we should have a timeout for this. The whole idea of readiness gates is that they should block deployments if the pods don't get registered in the load balancer (for whatever reason). Proceeding with a deployment while the readiness gate is not yet satisfied will most likely bring the service down eventually (e.g. when, for any reason, the ALB ingress controller is down).

M00nF1sh (Collaborator) commented Feb 27, 2020

Overall looks good to me 👍 just that we should remove the target-health-reconciliation-strategy.

devkid added 2 commits Feb 28, 2020
* remove reconciliation strategy (always use `initial`)
* remove reconciliation interval (use healthcheckIntervalSeconds * healthyThresholdCount initially, then only healthcheckIntervalSeconds)

Also:
* include ALB target health description in the `Reason` field of pod condition
* initialize and pass the TargetHealthController in TargetGroupController instead of TargetsController (one level above)
devkid (Contributor, Author) commented Feb 28, 2020

@M00nF1sh removed the strategy and interval. It now uses healthCheckInterval * healthCheckThreshold for the first `time.After` and afterwards only healthCheckInterval.
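
As a reader's aid, here is a minimal sketch of the re-check scheduling described in this comment. The type and function names are hypothetical; only the interval arithmetic reflects what the comment and commit messages describe.

```go
// Hypothetical sketch of the requeue-interval logic discussed above; field and
// function names are illustrative, not the controller's actual API.
package main

import (
	"fmt"
	"time"
)

type healthCheckConfig struct {
	IntervalSeconds  int64 // target group health check interval
	HealthyThreshold int64 // consecutive successes required to become healthy
}

// nextCheckDelay returns how long to wait before re-checking target health.
// Right after registration a target cannot become healthy sooner than
// interval * threshold, so wait that long first; afterwards poll once per interval.
func nextCheckDelay(cfg healthCheckConfig, firstCheck bool) time.Duration {
	if firstCheck {
		return time.Duration(cfg.IntervalSeconds*cfg.HealthyThreshold) * time.Second
	}
	return time.Duration(cfg.IntervalSeconds) * time.Second
}

func main() {
	cfg := healthCheckConfig{IntervalSeconds: 15, HealthyThreshold: 2}
	fmt.Println(nextCheckDelay(cfg, true))  // 30s for the first check
	fmt.Println(nextCheckDelay(cfg, false)) // 15s afterwards
}
```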

devkid (Contributor, Author) commented Mar 3, 2020

@M00nF1sh is there anything else that prevents this from being merged? Or do you need to coordinate it with the v2 work?

M00nF1sh (Collaborator) commented Mar 3, 2020

@devkid
There isn't, and we should get this into v1.1.6 🤣. Let me test the code logic and ship this by EOD.

M00nF1sh (Collaborator) left a review comment

/lgtm
/approve

k8s-ci-robot (Contributor) commented Mar 9, 2020

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: devkid, M00nF1sh

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

M00nF1sh (Collaborator) commented Mar 9, 2020

/woof

k8s-ci-robot (Contributor) commented Mar 9, 2020

@M00nF1sh: [dog image]

In response to this:

/woof

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot merged commit 1aba9e5 into kubernetes-sigs:master Mar 9, 2020
6 checks passed
  • cla/linuxfoundation: devkid authorized
  • continuous-integration/travis-ci/pr: The Travis CI build passed
  • pull-aws-alb-ingress-controller-e2e-test: Job succeeded.
  • pull-aws-alb-ingress-controller-lint: Job succeeded.
  • pull-aws-alb-ingress-controller-unit-test: Job succeeded.
  • tide: In merge pool.
Review thread on the following diff context:

if err != nil {
    continue
}

nirnanaaa (Contributor) commented Mar 10, 2020
@M00nF1sh @devkid shouldn't we also check whether DeletionTimestamp != nil? Pods in the Terminating state can still be found in the store, but should already be set to the "draining" state on the target group, right? I can still see quite a substantial amount of 5xx errors (502, 504) in our test case when the pods actually receive their SIGTERM signal.

devkid (Contributor, Author) commented Mar 11, 2020

Pods in the Terminating state appear neither in Addresses nor in NotReadyAddresses.
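
For readers unfamiliar with the Endpoints API being discussed here: the small helper below (not part of this PR, names are illustrative) shows where ready and not-yet-ready pod IPs appear on an Endpoints object; the comment above is saying that terminating pods should show up in neither list.

```go
// Illustrative helper, not part of this PR: where ready and not-yet-ready pod
// IPs are exposed on a corev1.Endpoints object.
package endpointsdemo

import (
	corev1 "k8s.io/api/core/v1"
)

// splitEndpointIPs separates the IPs behind a Service into those that passed
// readiness (Addresses) and those that are not yet Ready (NotReadyAddresses),
// which is where pods blocked by a readiness gate are listed.
func splitEndpointIPs(ep *corev1.Endpoints) (ready, notReady []string) {
	for _, subset := range ep.Subsets {
		for _, addr := range subset.Addresses {
			ready = append(ready, addr.IP)
		}
		for _, addr := range subset.NotReadyAddresses {
			notReady = append(notReady, addr.IP)
		}
	}
	return ready, notReady
}
```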

nirnanaaa (Contributor) commented Mar 11, 2020

🤷‍♂ For us they do; that's why I mentioned it here. We're using EKS 1.14 with publishNotReadyAddresses on the service.

devkid (Contributor, Author) commented Mar 11, 2020

That's weird. I actually thought about implementing the check for DeletionTimestamp != nil, but I verified on our end that terminating pods do not appear in NotReadyAddresses. Feel free to add the check; I'll have another look at this on our end next week as well.
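
A minimal sketch of the check being discussed in this thread, under the assumption of a hypothetical filtering helper; this is not the controller's actual code.

```go
// Hypothetical illustration of skipping terminating pods when deciding which
// pods to treat as registerable targets; not the controller's implementation.
package podfilter

import (
	corev1 "k8s.io/api/core/v1"
)

// isTerminating reports whether a pod has been marked for deletion.
// A non-nil DeletionTimestamp means the pod is Terminating and should no
// longer be treated as a healthy ALB target.
func isTerminating(pod *corev1.Pod) bool {
	return pod.DeletionTimestamp != nil
}

// filterRegisterablePods drops terminating pods from a candidate list.
func filterRegisterablePods(pods []*corev1.Pod) []*corev1.Pod {
	var out []*corev1.Pod
	for _, pod := range pods {
		if isTerminating(pod) {
			continue
		}
		out = append(out, pod)
	}
	return out
}
```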

M00nF1sh added a commit to M00nF1sh/aws-alb-ingress-controller that referenced this pull request Mar 22, 2020
M00nF1sh added a commit to M00nF1sh/aws-alb-ingress-controller that referenced this pull request Mar 22, 2020