
Controller: Correctly identify other pods on shutdown. #13444


Open
wants to merge 10 commits into base: main

Conversation

@DerekTBrown commented May 29, 2025

What this PR does / why we need it:

  • #11087 (Load balancer IP cleared from all ingresses when upgrading nginx-ingress-controller) is a serious footgun: if a user upgrades the Helm chart version in-place, or adds pod labels (via the Helm chart or some other mechanism), ingress-nginx may shut down an ingress even though other ingress-nginx pods are still running and serving traffic.
  • The root cause of #11087 is that ingress-nginx reuses the set of pod labels on the terminating pod to find other pods belonging to the same controller group. This is a bug, since there are cases where the labels on the terminating pod legitimately differ from the labels on other pods in the same controller group (for example, during an in-place Helm upgrade, the chart version label differs between old and new pods).
  • ingress-nginx already has an appropriate primitive for tracking the pods belonging to the same controller group: the electionID. A pod should tear down an ingress if and only if all pods with the same electionID have gone away, meaning no future leader election can succeed (see the sketch after this list).
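
As a rough sketch of the intended behavior (not the actual status.go code; the label key and function name here are illustrative only), the shutdown path would list peers by the election ID alone rather than by the terminating pod's full label set:

package shutdown

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// otherPodsWithSameElectionID counts the other live controller pods that
// carry the same election ID. An ingress should only be torn down when this
// count is zero, i.e. when no future leader election can succeed.
// The label key below is hypothetical, not the one the chart actually sets.
func otherPodsWithSameElectionID(ctx context.Context, client kubernetes.Interface, namespace, electionID, selfName string) (int, error) {
	pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{
		// Select on a single stable election-ID label instead of the
		// terminating pod's full label set, which can legitimately differ
		// across chart versions during an in-place upgrade.
		LabelSelector: fmt.Sprintf("ingress-nginx.kubernetes.io/election-id=%s", electionID),
	})
	if err != nil {
		return 0, err
	}
	n := 0
	for i := range pods.Items {
		p := &pods.Items[i]
		if p.Name != selfName && p.DeletionTimestamp == nil {
			n++
		}
	}
	return n, nil
}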

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • CVE Report (Scanner found CVE and adding report)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation only

This is a breaking change in two respects:

  1. I would argue all Helm chart upgrades are currently breaking changes because of #11087. This is dangerous for users until resolved.
  2. The PR enables this new behavior by default. I don't believe this is a breaking change for users, but would like other eyes to validate my assumption.

Which issue/s this PR fixes

How Has This Been Tested?

  • I have added unit tests to the helm chart to validate the new label is defined.
  • I have added unit tests to the status.go component to validate the pod finding behavior.

Checklist:

  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.
  • I've read the CONTRIBUTION guide
  • I have added unit and/or e2e tests to cover my changes.
  • All new and existing tests passed.

Footnotes

  1. See: https://github.com/kubernetes/ingress-nginx/issues/11087#issuecomment-2712118775

@k8s-ci-robot added the do-not-merge/work-in-progress label May 29, 2025

netlify bot commented May 29, 2025

Deploy Preview for kubernetes-ingress-nginx canceled.

🔨 Latest commit: eba4c5b
🔍 Latest deploy log: https://app.netlify.com/projects/kubernetes-ingress-nginx/deploys/686592069063380008b2da20

@k8s-ci-robot added the cncf-cla: yes, area/helm, and needs-triage labels May 29, 2025
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: DerekTBrown
Once this PR has been reviewed and has the lgtm label, please assign strongjz for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot (Contributor)

Welcome @DerekTBrown!

It looks like this is your first PR to kubernetes/ingress-nginx 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/ingress-nginx has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot requested a review from Gacko May 29, 2025 14:53
@k8s-ci-robot added the needs-kind label May 29, 2025
@k8s-ci-robot added the needs-ok-to-test and needs-priority labels May 29, 2025
@k8s-ci-robot (Contributor)

Hi @DerekTBrown. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot added the size/L label May 29, 2025
@DerekTBrown requested a review from Gacko May 29, 2025 15:12
@DerekTBrown marked this pull request as ready for review May 29, 2025 18:53
@k8s-ci-robot removed the do-not-merge/work-in-progress label May 29, 2025
@Gacko (Member) left a comment

Gonna review the Go code later.

/retitle Controller: Correctly identify other pods on shutdown.
/triage accepted
/kind feature
/priority backlog
/hold

@k8s-ci-robot added the do-not-merge/hold label Jun 11, 2025
@k8s-ci-robot changed the title from "[#11087] Determine if a given controller is the last pod using electionId label" to "Controller: Correctly identify other pods on shutdown." Jun 11, 2025
@k8s-ci-robot added the triage/accepted, kind/feature, and priority/backlog labels Jun 11, 2025
@k8s-ci-robot removed the needs-triage, needs-kind, and needs-priority labels Jun 11, 2025
@Gacko (Member) commented Jun 11, 2025

What we are looking for here is basically a way to determine all the other pods targeted by either the Deployment or the DaemonSet, right? Why don't we hand in the selector labels used in the matchLabels property to the controller? I know this is not available from the pod details, but could be handed in as an argument. Because in some cases people want leader election to be disabled but still have features working which rely on identifying other pods, right?
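
A minimal sketch of that alternative, assuming a hypothetical --controller-pod-selector argument carrying the Deployment/DaemonSet matchLabels (neither the flag nor the function below exists in the actual controller):

package shutdown

import (
	"context"
	"flag"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/kubernetes"
)

// Hypothetical flag: the selector matching all controller pods, handed in
// as an argument instead of being derived from the pod's own labels.
var controllerPodSelector = flag.String("controller-pod-selector",
	"app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/component=controller",
	"label selector matching all controller pods (illustrative flag)")

func listControllerPods(ctx context.Context, client kubernetes.Interface, namespace string) ([]string, error) {
	// Validate the selector up front so a typo fails fast.
	sel, err := labels.Parse(*controllerPodSelector)
	if err != nil {
		return nil, err
	}
	pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: sel.String()})
	if err != nil {
		return nil, err
	}
	names := make([]string, 0, len(pods.Items))
	for _, p := range pods.Items {
		names = append(names, p.Name)
	}
	return names, nil
}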

@Gacko (Member) commented Jun 11, 2025

Reviewing the code once again, I'm asking myself whether the stuff in status.go is even called when there's no leader election. As in: the whole mess we currently have is around the status not being updated correctly on shutdown. For the status to be updated correctly on shutdown, the controller needs write access, and only one controller pod should do that at a time, so there needs to be a leader.

DerekTBrown and others added 5 commits July 2, 2025 13:04
Co-authored-by: Marco Ebert <marco_ebert@icloud.com>
@DerekTBrown (Author)

Reviewing the code once again, I'm asking myself whether the stuff in status.go is even called when there's no leader election. As in: the whole mess we currently have is around the status not being updated correctly on shutdown. For the status to be updated correctly on shutdown, the controller needs write access, and only one controller pod should do that at a time, so there needs to be a leader.

As I understand it, status.go's primary purpose is to sync the ingress-nginx pod IP back to the ingress resource:

// statusSync keeps the status IP in each Ingress rule updated executing a periodic check
// in all the defined rules. To simplify the process leader election is used so the update
// is executed only in one node (Ingress controllers can be scaled to more than one)
// If the controller is running with the flag --publish-service (with a valid service)
// the IP address behind the service is used, if it is running with the flag
// --publish-status-address, the address specified in the flag is used, if neither of the
// two flags are set, the source is the IP/s of the node/s

I think we still need to do this even if we have a single pod / leader election disabled, since on initial ingress creation, the user will see the ingress go from having no IPs to having a single pod IP as a result of this function.

However, I agree we should probably have a separate abstraction for whether a given ingress-nginx pod is the leader. If leader election is disabled, the pod should just assume it's the leader (a rough sketch follows).
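
Something like the following could express that abstraction; this is a sketch of the proposal above, and none of these names exist in status.go today:

package status

import "sync/atomic"

// Leadership lets components such as the status syncer ask "am I the
// leader?" without caring whether election is enabled.
type Leadership interface {
	IsLeader() bool
}

// electedLeadership would be flipped by the leader-election callbacks.
type electedLeadership struct {
	leading atomic.Bool
}

func (e *electedLeadership) IsLeader() bool { return e.leading.Load() }

// alwaysLeader is used when leader election is disabled: the single pod
// simply assumes it is the leader.
type alwaysLeader struct{}

func (alwaysLeader) IsLeader() bool { return true }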

@DerekTBrown (Author)

What we are looking for here is basically a way to determine all the other pods targeted by either the Deployment or the DaemonSet, right? Why don't we hand in the selector labels used in the matchLabels property to the controller? I know this is not available from the pod details, but could be handed in as an argument. Because in some cases people want leader election to be disabled but still have features working which rely on identifying other pods, right?

If leader election is disabled, the user needs to prevent multiple ingress-nginx instances from running concurrently, since otherwise concurrent modifications can happen. I thus assume the user is running only a single pod.

The existing and the new logic both work correctly under this assumption: no other pod has the same election ID, so on shutdown the pod treats itself as the last remaining one.
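
In sketch form (illustrative names again, not the actual status.go implementation), the shutdown decision then collapses to a single peer count, which is zero both for the last pod of an election group and for a lone pod running without leader election:

package shutdown

import "context"

// shouldClearStatusOnShutdown reports whether the terminating pod should
// clear the published status, i.e. whether it is the last pod in its
// election group. countPeers would be backed by something like
// otherPodsWithSameElectionID from the earlier sketch.
func shouldClearStatusOnShutdown(ctx context.Context, countPeers func(context.Context) (int, error)) (bool, error) {
	n, err := countPeers(ctx)
	if err != nil {
		// Fail safe: on error, do not clear the status of a possibly
		// still-served ingress.
		return false, err
	}
	return n == 0, nil
}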

@DerekTBrown requested a review from Gacko July 2, 2025 20:23
Labels
area/docs, area/helm, cncf-cla: yes, do-not-merge/hold, kind/feature, needs-ok-to-test, priority/backlog, size/L, triage/accepted