Bug 1810574: CNI: Confirm pods in cache before connecting #244
Conversation
In a highly distributed environment like a Kubernetes installation with Kuryr, we need to plan for network outages in any case. If we don't, we end up with bugs like the one this patch fixes. If we lose a Pod delete event on kuryr-daemon, the following can happen:

1. Pod A of name "foo" gets created.
2. It gets annotated normally and the CNI ADD request gives it an IP X.
3. Pod A gets deleted.
4. Somehow the delete event gets lost on kuryr-daemon's watcher.
5. The CRI sends a CNI DEL request and the pod gets unplugged successfully. It never gets deleted from the daemon's registry, because we never got the Pod delete event from the K8s API.
6. Pod B of the same name "foo" gets created.
7. CNI looks up the registry by <namespace>/<pod>, finds the old VIF there, and plugs pod B with pod A's VIF X.
8. kuryr-controller never notices that and assigns IP X to another pod.
9. We get an IP conflict.

To solve the issue, this patch makes sure that when handling CNI ADD calls, we always get the pod from the K8s API first, and if the UID of the API one doesn't match the one in the registry, we remove the registry entry. That way we can make sure the pod we've cached isn't stale. This adds one K8s API call per CNI ADD request, which is a significant load increase, but hopefully the K8s API can handle it.

Closes-Bug: 1854928
Change-Id: I9916fca41bd917d85be973b8625b65a61139c3b3
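For illustration, here is a minimal Python sketch of the UID check described above. The registry layout (a dict keyed by `<namespace>/<pod>` holding the cached pod and its VIF) and the `k8s_client.get` helper are assumptions made for this example; the actual kuryr-kubernetes code is structured differently.

```python
def confirm_pod_in_registry(registry, k8s_client, namespace, name):
    """Return the cached entry only if it belongs to the live pod.

    On a CNI ADD we fetch the pod from the K8s API and compare its UID
    with the UID recorded in the local registry. A mismatch means the
    cached entry is stale (it belongs to a previously deleted pod of
    the same name), so we drop it instead of reusing its VIF.
    """
    key = f"{namespace}/{name}"
    cached = registry.get(key)
    if cached is None:
        return None

    # The one extra K8s API call per CNI ADD noted in the commit message.
    live_pod = k8s_client.get(f"/api/v1/namespaces/{namespace}/pods/{name}")

    if live_pod['metadata']['uid'] != cached['pod']['metadata']['uid']:
        # A pod with the same name was deleted and recreated while we
        # missed the delete event; discard the stale entry so the new
        # pod gets a fresh VIF instead of the old pod's IP.
        del registry[key]
        return None
    return cached
```

The key design point is that the pod name alone is not a stable identity in Kubernetes; only the UID uniquely identifies a pod instance, which is why the comparison is done on UIDs rather than on the `<namespace>/<pod>` key.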
@dulek: This pull request references Bugzilla bug 1810574, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker. 3 validations were run on this bug.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: dulek, luis5tb. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
@dulek: All pull requests linked via external trackers have merged: openshift/kuryr-kubernetes#179, openshift/kuryr-kubernetes#244. Bugzilla bug 1810574 has been moved to the MODIFIED state.
/cherry-pick release-4.4
@luis5tb: new pull request created: #246