NetworkPolicy's Egress is not working #1617
Can you share the output of the following commands?
- `kubectl get namespace kube-system -o yaml`
- `kubectl get namespace test -o yaml`
- `kubectl get namespaces -l kubernetes.io/metadata.name=kube-system`
- `kubectl get pods -n kube-system -l app.kubernetes.io/name=coredns`
Thanks for the output information. I'm having a hard time reproducing this issue. Are you able to give the following:
So I ran some more tests this evening and here are the results:
Forgot the logs
I upgraded to 2.1.0 and the result is the same: rejects on service IPs. Direct requests to pods work.
@vladimirtiukhtin - Sorry for the runaround on this one. I always forget this routing scenario, as it is a bit obscure. But I have a good understanding of what you're doing now, and I can try to explain what's happening. Essentially, your traffic flow looks like the following:
The solution for this is to allowlist all of your node IPs in your network policy. I know that this is a hassle, but there essentially isn't anything that CNIs can do about it. Basically, the problem is that network policies were created in terms of pod communication and are service agnostic. This means they don't have good semantics for this type of traffic flow. Here are some additional resources that discuss this topic a bit (although not all the details are 1:1 with your scenario, since many of them talk about traffic flowing from external nodes, it's essentially the same issue):
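For illustration, a minimal sketch of that kind of allowlist as an extra egress rule (the `192.168.1.0/24` node subnet is an assumption; substitute your actual node IPs or CIDR):

```yaml
# Sketch only: an additional egress rule that allowlists the node subnet,
# so traffic that traverses a node IP on its way to a service endpoint
# is not dropped. 192.168.1.0/24 is a placeholder for your node CIDR.
egress:
  - to:
      - ipBlock:
          cidr: 192.168.1.0/24
```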
Hi @aauren. Thanks for the response. Your description is correct until
By allowing all traffic I get
Then exec
On the server pod node
On client pod node
As you see, no SNAT is happening. I also checked: if I add the clusterIP as the ipBlock to egress, it works. But I expect kube-router to resolve the clusterIP on its own.
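For reference, a sketch of that workaround (the `10.96.0.10` address is an assumption standing in for the actual clusterIP):

```yaml
# Sketch: allowlisting a service's clusterIP directly in the egress rules.
# 10.96.0.10 is a placeholder; use the IP shown by `kubectl get svc`.
egress:
  - to:
      - ipBlock:
          cidr: 10.96.0.10/32
```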
Are you adding `--service-cluster-ip-range`? This is the only way that kube-router is aware of your service IP ranges, since they can vary by Kubernetes orchestration method. If not, does adding it fix the problem you're seeing?
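As a sketch, the flag goes in the kube-router container args; the `10.96.0.0/12` value shown is kube-router's documented default and is only an assumption about this cluster:

```yaml
# Excerpt of a kube-router DaemonSet container spec (sketch; only args shown).
containers:
  - name: kube-router
    image: docker.io/cloudnativelabs/kube-router
    args:
      - --run-router=true
      - --run-firewall=true
      - --run-service-proxy=true
      # Must match the cluster's actual service CIDR; 10.96.0.0/12 is the default.
      - --service-cluster-ip-range=10.96.0.0/12
```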
No, but by adding this flag the issue disappears. Thanks a lot, this was not obvious. BTW, when I was doing that I ran
Thanks for reporting
Tracking the cleanup-config issue in #1649, so I'm going to close this one now. I'm glad that adding the flag fixed things. Or, if you have time, a PR for additional docs is always welcome!
**What happened?**
Network policy Egress rules are not behaving as expected
**What did you expect to happen?**
Egress to work
**How can we reproduce the behavior you experienced?**
Create a policy like the one below.
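A minimal sketch of a policy along these lines (the policy name and the namespace name `test` are assumptions; kube-system is matched via its well-known `kubernetes.io/metadata.name` label):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-policy   # name assumed for illustration
  namespace: test        # namespace assumed for illustration
spec:
  podSelector: {}        # select every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector: {}              # any pod in the same namespace
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
  egress:
    - to:
        - podSelector: {}              # any pod in the same namespace
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
```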
Such a policy should allow traffic between pods in the same namespace and to/from kube-system. While ingress traffic indeed works, egress does not: I get failures in name resolution. But if I modify egress,
DNS starts working. However, traffic between pods in the same namespace still bumps into `connection refused` unless I allow all egress.
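One possible form of that egress modification, as a sketch (the `10.96.0.0/12` service CIDR and the DNS ports are assumptions):

```yaml
# Sketch: open egress to the service CIDR for DNS.
# 10.96.0.0/12 is a placeholder for the cluster's service IP range.
egress:
  - to:
      - ipBlock:
          cidr: 10.96.0.0/12
    ports:
      - protocol: UDP
        port: 53
      - protocol: TCP
        port: 53
```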
**System Information (please complete the following information):**
- Kube-Router Version (`kube-router --version`): [e.g. 1.0.1]
- Kubernetes Version (`kubectl version`): [e.g. 1.18.3]

**Logs, other output, metrics**
**Additional context**
I can see messages in the logs like the ones in #1521 (comment).