Bug 1825355: node/vnids: Correctly handle case where NetNamespace watch is far behind #134
Conversation
@squeed: This pull request references Bugzilla bug 1825355, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker. 3 validations were run on this bug.
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@JacobTanenbaum @danwinship you've both touched this recently
Force-pushed 8eae9c1 to a97f507
```diff
@@ -119,7 +119,7 @@ func (vmap *nodeVNIDMap) WaitAndGetVNID(name string) (uint32, error) {
 		return 0, fmt.Errorf("failed to find netid for namespace: %s, %v", name, err)
 	}
 	klog.Warningf("Netid for namespace: %s exists but not found in vnid map", name)
-	vmap.setVNID(netns.Name, netns.NetID, netnsIsMulticastEnabled(netns))
+	vmap.handleAddOrUpdateNetNamespace(netns, nil, watch.Added)
```
I don't think it's legitimate to call `handleAddOrUpdateNetNamespace` from here. In fact, it's definitely not, as seen by the fact that you had to change a bunch of other places to make it work. But we can't just change places that call `WaitAndGetVNID` to call `getVNID` instead and expect everything will keep working.

Maybe the fix is to just remove the `setVNID` call here. Though I think if we were going to do that, I'd want to make the backoff shorter...
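For context, a minimal sketch of the wait-then-fetch path under discussion, using hand-simplified stand-in types. The real code lives in node/vnids.go; the 5-second budget and poll interval below are illustrative, and `fetch` is a placeholder for the direct GET against the apiserver.

```go
package sketch

import (
	"sync"
	"time"
)

// vnidMap is a hand-simplified stand-in for nodeVNIDMap.
type vnidMap struct {
	lock sync.Mutex
	ids  map[string]uint32
}

// waitAndGetVNID polls the watch-fed cache, giving the NetNamespace watch
// a few seconds to catch up; if it never does, it falls back to fetching
// the VNID directly via fetch.
func (m *vnidMap) waitAndGetVNID(name string, fetch func(string) (uint32, error)) (uint32, error) {
	deadline := time.Now().Add(5 * time.Second) // the wait that is pointless at startup
	for time.Now().Before(deadline) {
		m.lock.Lock()
		id, ok := m.ids[name]
		m.lock.Unlock()
		if ok {
			return id, nil
		}
		time.Sleep(100 * time.Millisecond) // illustrative poll interval
	}
	// Watch is far behind (or not running yet): ask the apiserver directly.
	return fetch(name)
}
```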
I had considered that. We cache NetNamespaces in two places (networkpolicy.go and vnids.go), and doing that would make those caches diverge. It's not clear what the implication of such a divergence is, given that the code is so tightly coupled.

I actually think it's an error for `networkPolicyPlugin.initNamespaces()` to call `WaitAndGetVNID()`, because we're still in startup and haven't even added our handlers yet.
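Roughly, the coupling in question, continuing the stand-in types from the sketch above (`notify` is a placeholder for however the NetworkPolicy plugin actually learns about NetNamespace changes; the real signatures differ):

```go
// setVNID writes only the vnid map's own cache (vnids.go).
func (m *vnidMap) setVNID(name string, id uint32) {
	m.lock.Lock()
	defer m.lock.Unlock()
	m.ids[name] = id
}

// handleAddOrUpdateNetNamespace is the informer-event path: it updates the
// vnid cache and also fans out to listeners such as the NetworkPolicy
// plugin, which mirrors per-namespace state in networkpolicy.go. Calling
// setVNID alone leaves that second cache stale, which is the divergence
// described above.
func (m *vnidMap) handleAddOrUpdateNetNamespace(name string, id uint32, notify func(string, uint32)) {
	m.setVNID(name, id)
	notify(name, id)
}
```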
To clarify, the flow in `npp.Start()` is:

1. `vmap.Start()`, which calls `vmap.populateVNIDs()`, which does a synchronous List and calls `vmap.setVNID()`
2. `npp.initNamespaces()`, which does a synchronous List
3. The rest of the informers are configured

Not the prettiest. So I'm not surprised we have deadlocks. But that's why I think it's wrong to call an informer handler before we "expect" to see informers running.
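A sketch of that ordering, in the same stand-in package as above (`registerInformers` and the stub bodies are placeholders, not the real API):

```go
// networkPolicyPlugin stands in for the type in networkpolicy.go.
type networkPolicyPlugin struct{ vnids *vnidMap }

func (np *networkPolicyPlugin) Start() error {
	// 1. vmap.Start() -> populateVNIDs(): synchronous List, setVNID() per item.
	if err := np.vnids.start(); err != nil {
		return err
	}
	// 2. initNamespaces(): another synchronous List; today this calls
	//    WaitAndGetVNID(), even though no informer handler is registered
	//    yet, so any wait on the watch here can only time out.
	if err := np.initNamespaces(); err != nil {
		return err
	}
	// 3. Only now are the informers configured, so no event could have
	//    reached a handler before this point.
	np.registerInformers()
	return nil
}

func (m *vnidMap) start() error                       { return nil } // stub
func (np *networkPolicyPlugin) initNamespaces() error { return nil } // stub
func (np *networkPolicyPlugin) registerInformers()    {}             // stub
```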
ok, but you need to separate out the startup-time vs non-startup-time behavior more. Currently, at startup, the behavior is:

- if a VNID is missing from the cache, pointlessly wait 5 seconds, then fetch it manually

With this patch, it becomes:

- if a VNID is missing from the cache, abort openshift-sdn startup
> if a VNID is missing from the cache, abort openshift-sdn startup

We swallow errors (and always have), so that's not a risk.
When adding a pod, if the NetNamespace isn't found, we'll issue a GET directly to the apiserver and treat it as an ADD. Except we didn't actually handle it correctly, and caused NetworkPolicy to ignore this NetNS forever. Fixes: rhbz 1825355
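In sketch form, the shape of that fix: on a cache miss, the directly-fetched object goes through the same handler an informer ADD would take, so both caches stay in sync (`fetch` and `notify` remain placeholders from the sketches above):

```go
// getVNIDWithFallback shows the commit's "treat it as an ADD": the result
// of the direct GET is fed through handleAddOrUpdateNetNamespace rather
// than setVNID, so the NetworkPolicy plugin's cache is updated too instead
// of ignoring the NetNamespace forever.
func (m *vnidMap) getVNIDWithFallback(name string, fetch func(string) (uint32, error), notify func(string, uint32)) (uint32, error) {
	m.lock.Lock()
	id, ok := m.ids[name]
	m.lock.Unlock()
	if ok {
		return id, nil
	}
	id, err := fetch(name) // direct GET to the apiserver
	if err != nil {
		return 0, err
	}
	m.handleAddOrUpdateNetNamespace(name, id, notify) // treat it as an ADD
	return id, nil
}
```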
Force-pushed a97f507 to b5f89a6
@danwinship I switched the locking around a bit, to make the difference between startup and running clearer.
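One way that startup/running split could look (the `started` flag, where it flips, and the locking here are assumptions about the shape of the change, not the actual diff):

```go
// phasedVNIDMap adds an explicit phase flag to the earlier sketch: waiting
// on the watch only makes sense once the informer handlers are registered.
type phasedVNIDMap struct {
	lock    sync.Mutex
	started bool // assumed name; set true once informers are wired up
	ids     map[string]uint32
}

func (m *phasedVNIDMap) waitAndGetVNID(name string, fetch func(string) (uint32, error)) (uint32, error) {
	m.lock.Lock()
	id, ok := m.ids[name]
	started := m.started
	m.lock.Unlock()
	if ok {
		return id, nil
	}
	if started {
		// Running: the watch is live, so a short backoff can succeed.
		deadline := time.Now().Add(5 * time.Second)
		for time.Now().Before(deadline) {
			time.Sleep(100 * time.Millisecond)
			m.lock.Lock()
			id, ok = m.ids[name]
			m.lock.Unlock()
			if ok {
				return id, nil
			}
		}
	}
	// Startup (or the watch never caught up): fetch directly, no wait.
	return fetch(name)
}
```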
@danwinship any final thoughts on this?
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: danwinship, squeed. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
/retest Please review the full test history for this PR and help us cut down flakes.
/retest
@squeed: All pull requests linked via external trackers have merged: openshift/sdn#134. Bugzilla bug 1825355 has been moved to the MODIFIED state. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This is a backport of openshift#134. When adding a pod, if the NetNamespace isn't found, we'll issue a GET directly to the apiserver and treat it as an ADD. Except we didn't actually handle it correctly, and caused NetworkPolicy to ignore this NetNS forever. Fixes: rhbz 1839107

Backport of openshift#134. When adding a pod, if the NetNamespace isn't found, we'll issue a GET directly to the apiserver and treat it as an ADD. Except we didn't actually handle it correctly, and caused NetworkPolicy to ignore this NetNS forever. Fixes: rhbz 1389109