run through all applicable network policies for a flow #915
This PR fixes the specific case where both the source pod and the destination pod are on the same node. An egress network policy applied to the source pod may permit traffic from the source pod to the destination pod, but a separate network policy applied to the destination pod may reject it. So for that flow, both network policies have to be evaluated to permit or deny the traffic. Apart from verifying that the fix addresses the above case, I ensured there are no regressions. Merging the PR based on testing.
err := iptablesCmdHandler.AppendUnique("filter", policyChainName, args...)

markComment := "rule to mark traffic matching a network policy"
markArgs := append(args, "-j", "MARK", "-m", "comment", "--comment", markComment, "--set-xmark", "0x10000/0x10000")
I take it that you chose this value because it's above this range:

return h.Sum32() & 0x3FFF, err
There is no other use of markings. I just chose a random number.
Seems reasonable, and I can't think of a better approach, but it's a bummer that it essentially doubles the number of iptables rules we're going to have. Overall LGTM.
Thanks for the review. Actually, it's not bad, or at least it does not make performance any worse. Since kube-router is a stateful firewall, for established sessions (i.e. "RELATED,ESTABLISHED") we still
Sorry, I should have been clearer. I was referring to the time it takes to perform the full sync of iptables rules, not the performance of the data path. You're correct, this doesn't have a huge impact on the data path.
run through all the applicable network policies for a flow (cloudnativelabs#915)
Fixes #916