
Pod readiness gates #955

Merged

Conversation

@alfredkrohmer (Contributor) commented Jun 18, 2019

This adds a feature to set the status of pod readiness gates on pods that are registered with an ALB. See the added documentation for implementation details.

Todo:

  • fix up unit tests, add test cases for pod readiness gates
  • test manually

This closes #905.
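
For illustration only (not part of this PR; the condition type shown is hypothetical, the exact format is described in the documentation added here), a minimal sketch of declaring such a readiness gate on a pod spec using the Kubernetes Go API types:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Hypothetical condition type; the real format for ALB target groups is
	// defined in the documentation added by this PR.
	gate := corev1.PodReadinessGate{
		ConditionType: corev1.PodConditionType("target-health.alb.ingress.k8s.aws/my-ingress_my-service_80"),
	}

	spec := corev1.PodSpec{
		ReadinessGates: []corev1.PodReadinessGate{gate},
	}

	// With this gate declared, the pod is reported Ready only after the
	// controller sets the corresponding condition to True on the pod status.
	fmt.Printf("readiness gates: %+v\n", spec.ReadinessGates)
}
```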

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress cncf-cla: yes labels Jun 18, 2019
@k8s-ci-robot (Contributor) commented Jun 18, 2019

Hi @devkid. Thanks for your PR.

I'm waiting for a kubernetes-sigs or kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-ok-to-test size/L labels Jun 18, 2019
@k8s-ci-robot k8s-ci-robot requested review from bigkraig and M00nF1sh Jun 18, 2019
@fejta-bot commented Sep 16, 2019

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Sep 16, 2019
@alfredkrohmer (Contributor, Author) commented Sep 17, 2019

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale label Sep 17, 2019
@fejta-bot commented Dec 16, 2019

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/stale needs-rebase labels Dec 16, 2019
@lelikg commented Jan 13, 2020

+1 on this feature, we'd love to get it

@fejta-bot commented Feb 12, 2020

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten and removed lifecycle/stale labels Feb 12, 2020
@runningman84 commented Feb 12, 2020

/remove-lifecycle

@runningman84 commented Feb 12, 2020

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten label Feb 12, 2020
…eady` when it's healthy and taking traffic from ALB
@alfredkrohmer alfredkrohmer force-pushed the feature/pod-readiness-gate branch from d8befc9 to b134750 on Feb 19, 2020
@k8s-ci-robot k8s-ci-robot added size/XL and removed needs-rebase size/L labels Feb 19, 2020
@M00nF1sh (Collaborator) commented Feb 26, 2020

@devkid
Thanks so much for implementing this. I'll dive into the code today. Some high-level thoughts I'd like to discuss:

  1. Should we simplify this to a single readiness gate instead of one per targetGroup? (Personally I favor your current approach; a sketch of the per-targetGroup bookkeeping follows after this list.)
    pros: easier for users to configure before we have a mutating webhook to inject the gates automatically
    pros: avoids any limit on the number of gates that can be specified per pod (I'm not sure whether such a limit exists)
    cons: harder to implement; needs to bookkeep the status of all targetGroups
    cons: poor visibility from the pod (e.g. when debugging a blocked pod)
    cons: no way to opt out of the readiness gate for a specific targetGroup (not sure whether there is a use case for this)
  2. Should we have a timeout setting, e.g. 5 minutes? If the pod doesn't become healthy within 5 minutes, unblock the readiness gate (to prevent deployments from locking up due to wrong settings, e.g. security groups).
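
For illustration (not code from this PR; names and the condition type format are hypothetical), a minimal sketch of the per-targetGroup bookkeeping from item 1: the controller upserts one pod status condition per target group the pod is registered with.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// setReadinessGateCondition is a hypothetical helper (not the PR's code) that
// upserts a target-group-specific condition on a pod; with per-targetGroup
// readiness gates, the controller has to maintain one such condition per
// target group the pod belongs to.
func setReadinessGateCondition(pod *corev1.Pod, condType corev1.PodConditionType, healthy bool, reason string) {
	status := corev1.ConditionFalse
	if healthy {
		status = corev1.ConditionTrue
	}
	newCond := corev1.PodCondition{
		Type:               condType,
		Status:             status,
		Reason:             reason,
		LastTransitionTime: metav1.Now(),
	}
	for i, cond := range pod.Status.Conditions {
		if cond.Type == condType {
			pod.Status.Conditions[i] = newCond
			return
		}
	}
	pod.Status.Conditions = append(pod.Status.Conditions, newCond)
}

func main() {
	pod := &corev1.Pod{}
	// One condition per target group (hypothetical condition type names).
	setReadinessGateCondition(pod, "target-health.alb.ingress.k8s.aws/ingress-a_svc_80", true, "healthy")
	setReadinessGateCondition(pod, "target-health.alb.ingress.k8s.aws/ingress-b_svc_80", false, "initial")
	fmt.Printf("%+v\n", pod.Status.Conditions)
}
```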

internal/alb/tg/targethealth.go (review thread; outdated, resolved)
@alfredkrohmer (Contributor, Author) commented Feb 27, 2020

1. Should we simplify this to a single readiness gate instead of one per targetGroup? (Personally I favor your current approach.)

I think we should start with the per-target group approach. We can add the per-ingress functionality later if required.

2. Should we have a timeout setting, e.g. 5 minutes? If the pod doesn't become healthy within 5 minutes, unblock the readiness gate (to prevent deployments from locking up due to wrong settings, e.g. security groups).

I don't think we should have a timeout for this. The whole idea of readiness gates is that they should block deployments if the pods don't get registered in the load balancer (for whatever reason). Progressing with the deployment while the readiness gate is not yet satisfied will most likely bring the service down eventually (e.g. when the ALB ingress controller itself is down for any reason).

@M00nF1sh (Collaborator) commented Feb 27, 2020

Overall looks good to me 👍, we should just remove the target-health reconciliation strategy:

* remove the reconciliation strategy (always use `initial`)
* remove the reconciliation interval (use `healthcheckIntervalSeconds * healthyThresholdCount` initially, then only `healthcheckIntervalSeconds`)

Also:
* include the ALB target health description in the `Reason` field of the pod condition
* initialize and pass the TargetHealthController in the TargetGroupController instead of the TargetsController (one level above)
@alfredkrohmer (Contributor, Author) commented Feb 28, 2020

@M00nF1sh I removed the strategy and the interval setting. It now uses `healthCheckInterval * healthCheckThreshold` for the first `time.After` and afterwards only `healthCheckInterval`.
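
A minimal, self-contained sketch of this polling behavior (hypothetical function and parameter names, not the PR's actual code): delay the first health check by `healthCheckInterval * healthyThresholdCount`, then poll every `healthCheckInterval`.

```go
package main

import (
	"fmt"
	"time"
)

// reconcileTargetHealth sketches the described behavior: the first check is
// delayed by healthCheckInterval * healthyThresholdCount (the soonest a newly
// registered target can become healthy), subsequent checks run every
// healthCheckInterval until the target is healthy or stop is closed.
func reconcileTargetHealth(healthCheckInterval time.Duration, healthyThresholdCount int, healthy func() bool, stop <-chan struct{}) {
	delay := healthCheckInterval * time.Duration(healthyThresholdCount)
	for {
		select {
		case <-stop:
			return
		case <-time.After(delay):
		}
		if healthy() {
			fmt.Println("target healthy; readiness gate condition can be set to True")
			return
		}
		// After the initial wait, fall back to the regular health check interval.
		delay = healthCheckInterval
	}
}

func main() {
	stop := make(chan struct{})
	checks := 0
	// Demo: target becomes healthy on the second check.
	reconcileTargetHealth(10*time.Millisecond, 3, func() bool {
		checks++
		return checks >= 2
	}, stop)
}
```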

@alfredkrohmer (Contributor, Author) commented Mar 3, 2020

@M00nF1sh is there anything else that prevents this from merging? Or do you need to coordinate this with the v2 work?

@M00nF1sh (Collaborator) commented Mar 3, 2020

@devkid
There isn't, and we should get this into v1.1.6 🤣. Let me test the code logic and ship this by EOD.

internal/alb/tg/targethealth.go (two review threads; outdated, resolved)
@M00nF1sh (Collaborator) left a comment

/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm label Mar 9, 2020
@k8s-ci-robot (Contributor) commented Mar 9, 2020

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: devkid, M00nF1sh

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved label Mar 9, 2020
@M00nF1sh (Collaborator) commented Mar 9, 2020

/woof

@k8s-ci-robot (Contributor) commented Mar 9, 2020

@M00nF1sh: dog image

In response to this:

/woof

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot merged commit 1aba9e5 into kubernetes-sigs:master Mar 9, 2020
6 checks passed
```go
if err != nil {
	continue
}
```

@nirnanaaa commented Mar 10, 2020

@M00nF1sh @devkid shouldn't we also check whether `DeletionTimestamp != nil`? Pods that are in the Terminating state can still be found in the store, but should already be set to "draining" on the TG, right? I can still see quite a substantial amount of 5xx errors (502, 504) in our test case when the pods actually receive their SIGTERM signal.
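
A minimal sketch of the suggested check (hypothetical helper, not this PR's code): a pod with a non-nil `DeletionTimestamp` is terminating and should be treated as draining rather than having its readiness gate condition managed.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// shouldManageReadinessGate is a hypothetical helper illustrating the check
// discussed here: skip pods that are already terminating, since they should be
// draining on the target group rather than gating readiness.
func shouldManageReadinessGate(pod *corev1.Pod) bool {
	return pod.DeletionTimestamp == nil
}

func main() {
	now := metav1.Now()
	terminating := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{DeletionTimestamp: &now}}
	running := &corev1.Pod{}
	fmt.Println(shouldManageReadinessGate(terminating)) // false: skip it
	fmt.Println(shouldManageReadinessGate(running))     // true: manage its condition
}
```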

@alfredkrohmer (Contributor, Author) commented Mar 11, 2020

Pods in the Terminating state appear in neither `Addresses` nor `NotReadyAddresses`.

@nirnanaaa commented Mar 11, 2020

🤷‍♂️ For us they do, that's why I mentioned it here. We're using EKS 1.14 and `publishNotReadyAddresses` on the service.

@alfredkrohmer (Contributor, Author) commented Mar 11, 2020

That's weird. I actually thought about implementing the check for DeletionTimestamp != nil but I verified on our end that terminating pods do not appear in NotReadyAddresses. Feel free to add the check. I'll have another look at this on our end as well next week.
