matchLabels under NetworkPolicy's main podSelector section is matching by ANY label match, not EVERY label match #1135
Comments
@aanandr can you please take a look?
@uipo78 Can you please confirm you are using Azure network policies?
I'm using standard Kubernetes network policies on clusters using the Azure CNI.
@uipo78 - the matching should be ANY mapping. Please see this link - https://kubernetes.io/docs/concepts/services-networking/network-policies/
@aanandr I don't see anything about the behavior of label selectors specific to network policies in the docs that you referenced.
I'd also be surprised if label selectors behave differently for network policies than they do generally: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements
@uipo78 - I misread your earlier description. My apologies. You are right - if there are multiple match labels to select a Pod then they should use the AND clause. This is a bug and it has just been reported by a few other customers too. We are actively working on a fix.
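For reference, the general label-selector docs linked above define matchLabels as shorthand for AND-ed matchExpressions, so a podSelector listing two labels should only match pods that carry both. A sketch of the two equivalent forms, using the labels from this issue's reproduction:

```yaml
# The two selector forms below are equivalent: each should select only pods
# that carry ALL of the listed labels (AND semantics), never pods that match
# just one of them.
podSelector:
  matchLabels:
    app: hello-world
    number: one
---
# Expanded matchExpressions form of the same selector:
podSelector:
  matchExpressions:
    - {key: app, operator: In, values: [hello-world]}
    - {key: number, operator: In, values: [one]}
```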
I noticed that the most recent release includes this bullet:
Is that related to this issue?
bueller?
@uipo78 - yes we rolled out a bunch of fixes recently and one of them fixes the issue reported here. I can confirm that.
Could you confirm the fixes have already been rolled out? I have been told by support that they won't be landing until 2019-09-20. I also appear to still have the same version of azure-npm that I have had for the last couple of weeks - is there a version number in which they are fixed, please?
This is the release to which I was referring @ball-hayden: https://github.com/Azure/AKS/releases/tag/2019-08-19. I'm going to close this issue, since @aanandr confirmed that the fix is part of the release mentioned above (that's where my bullet point comes from).
Heads up: this week's build does contain additional fixes. I'll reopen and then link the release going out now when the release notes are published.
Ah ok, perfect. I was going to reopen this issue anyway because it doesn't appear to be resolved in 1.14.6 as mentioned earlier.
FYI the image containing the fix is here:
When and how does this change propagate out to managed AKS clusters? I see we are currently at
@aanandr , could you tell me if this bug would also cause egress traffic to be blocked? If I use the example policy to "allow all egress traffic", I still have egress traffic blocked. https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-allow-all-ingress-traffic
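The "allow all egress traffic" policy being described is presumably along the lines of the Kubernetes docs' allow-all-egress example; a sketch of that shape (the exact policy in use here is not shown in the thread):

```yaml
# Allow-all-egress sketch: selects every pod in the namespace and permits
# all outbound traffic from those pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress
```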
Hi @blaw2422 the fix is here: Azure/azure-container-networking#398
Hi @kagkarlsson it should be updated weekly. Did your cluster get the update?
Ok, good to know that updates are supposed to be weekly. To be honest I cannot confirm we have gotten the update, because we switched to Calico after running into what seemed like bugs with the other implementation.
I didn't get the update. We're running on Kubernetes
It's also worth mentioning that I reconstructed a cluster yesterday, and even then, the tag for
Is there a project manager that can provide accurate details on the release of this fix? At this point, it feels like the AKS team is conjecturing.
Just to confirm: we're observing this issue's original problem in
@uipo78 - apologies for the inconvenience. The fix for this issue is currently in aks-engine and we are working with the AKS team to get it rolled out to AKS also.
I must admit I'm also looking forward to a fix here. I have what I believe is the same problem in AKS v1.15.7 (and earlier versions). My understanding and experience is that using both Egress and Ingress network policies gives a somewhat unpredictable and unstable result in AKS (it has worked fine with Calico for years). This means we have to choose between using Egress or Ingress network policies in our namespaces, which limits our options: we have to decide whether to protect against a break-in to or a break-out of pods, but we can't do both at the same time. That is, we can't both deny network access from the namespace to resources outside the AKS cluster and restrict network access to the namespace from the outside at the same time. It would be nice to get an update if I've misunderstood the implications of this problem with AKS network policies.
Action required from @Azure/aks-pm
Issue needing attention of @Azure/aks-leads
The fix for this OP should have been released some months back. If you still experience the same behavior as OP, please do comment back.
What happened:
matchLabels under NetworkPolicy's main podSelector section appears to select labels by ANY matching, not by every label matching that's under that section. This contradicts this section of the Kubernetes docs.

What you expected to happen:
All labels under matchLabels of the NetworkPolicy's main podSelector must match in order for the network policy to apply to a pod.

How to reproduce it (as minimally and precisely as possible):
Suppose I have two hello world apps, one with a service named hello-world-1 and another with a service named hello-world-2. Both are exposed at port 8000. Suppose further that each shares the label app=hello-world, while hello-world-1 also has the label number=one and hello-world-2 has the label number=two. If I deploy both in the same namespace and have the following network policies in place, I expect hello-world-1 to be the only one that receives traffic. However, both apps receive traffic.
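The manifests themselves are not included above; a minimal sketch of what the described setup implies (service names, labels, and port taken from the description; the default-deny companion policy is assumed) might look like:

```yaml
# Sketch only. With correct AND semantics, hello-world-2 matches no allow
# policy (it lacks number=one), so only the default-deny applies to it and
# it should receive no traffic. With the ANY-match behavior reported here,
# the allow policy also applies to hello-world-2, so both apps get traffic.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-hello-world-1
spec:
  podSelector:
    matchLabels:
      app: hello-world
      number: one
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 8000
```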
Anything else we need to know?:
Nope

Environment:
Kubernetes version (use kubectl version): 1.13.7