[kube-proxy/ipvs] Protect Netlink calls with a mutex #72361
Conversation
Hi @lbernail. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
I am open to discussing the two-sockets approach, but I think we should fix the deadlock issue first. Obviously, this PR can avoid the deadlock.
/lgtm /approve Let's see...
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: lbernail, m1093782566. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
ipvsSvc, err := runner.ipvsHandle.GetService(svc)
runner.mu.Unlock()
Why not use `defer`?
We could take the lock at the beginning of each function and use `defer` everywhere to make everything consistent. I chose to release the lock as quickly as possible: we do not need to keep it to call toVirtualServer in GetVirtualServer, for instance. I'm completely open to changing this if it makes the code easier to read and the optimization is not worth it.
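To make the trade-off concrete, here is a minimal sketch of the two locking styles being discussed. The `runner` and `getRaw` names are hypothetical stand-ins, not the actual kube-proxy code; the point is that with the early-unlock style the post-processing runs without the lock, while with `defer` the lock is held for the whole function body, and in both cases the returned error is only inspected after the unlock, so the mutex is always released.

```go
package main

import (
	"fmt"
	"sync"
)

// runner is a hypothetical stand-in for the ipvs runner discussed
// above; the names are illustrative, not the real kube-proxy types.
type runner struct {
	mu sync.Mutex
}

// getRaw simulates a netlink call that must be made under the lock.
func (r *runner) getRaw(svc string) (string, error) {
	return "raw:" + svc, nil
}

// GetVirtualServerEarlyUnlock releases the mutex as soon as the
// netlink call returns; the toVirtualServer-style conversion below
// the Unlock needs no lock.
func (r *runner) GetVirtualServerEarlyUnlock(svc string) (string, error) {
	r.mu.Lock()
	raw, err := r.getRaw(svc)
	r.mu.Unlock() // released immediately; err is a plain value, checked after
	if err != nil {
		return "", err
	}
	return "virtual:" + raw, nil
}

// GetVirtualServerDefer holds the mutex for the whole function body,
// which is simpler to read but keeps the lock slightly longer.
func (r *runner) GetVirtualServerDefer(svc string) (string, error) {
	r.mu.Lock()
	defer r.mu.Unlock()
	raw, err := r.getRaw(svc)
	if err != nil {
		return "", err
	}
	return "virtual:" + raw, nil
}

func main() {
	r := &runner{}
	a, _ := r.GetVirtualServerEarlyUnlock("10.0.0.1:80")
	b, _ := r.GetVirtualServerDefer("10.0.0.1:80")
	fmt.Println(a, b) // both styles produce the same result
}
```

Either style is correct with respect to error handling; the difference is only how long the lock is held.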
What happens if GetService() encounters an error? I am not yet familiar with Go, so I'm not sure whether the unlocking could be skipped in some situation.
The error is returned as a value, and only inspected after unlocking, so the mutex will be released in any case.
True.
…1-upstream-release-1.12 Automated cherry pick of #72361 upstream release 1.12
…1-upstream-release-1.11 Automated cherry pick of #72361 upstream release 1.11
…1-upstream-release-1.13 Automated cherry pick of #72361 upstream release 1.13
What type of PR is this?
/kind bug
What this PR does / why we need it:
We have a race condition between the proxier and the gracefulTerminationManager goroutines: when both use the netlink socket at the same time we can end up in a deadlock. This PR protects netlink calls with a mutex to make sure a single goroutine is using the socket at any given time.
An alternative would be to use two different sockets, but it is probably safer to avoid parallel netlink calls altogether.
Not sure if this is the best design, happy to discuss it.
Which issue(s) this PR fixes:
Fixes #71071
Special notes for your reviewer:
Currently being tested by users that reported the issue (more details in #71071)
Does this PR introduce a user-facing change?:
/sig network
/area ipvs
/assign @m1093782566