Closed
Description
On a fresh AKS cluster running Kubernetes 1.16.7, I deploy two network policies:
```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: backend-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: webapp
      role: backend
  ingress:
    - from:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              app: webapp
              role: frontend
```
```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: backend-policy2
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: webapp
      role: backend
  ingress:
    - from:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              app: test
```
After the second policy is deployed (the order does not matter), azure-npm creates a
`DROP-ALL-FROM-app:webapp-AND-role:backend-IN-ns-default`
rule in the AZURE-NPM-TARGET-SETS chain. From then on, these pods cannot reach anything, and the rule is not removed even when I delete all network policies. The only way to restore connectivity for the affected pods is to delete the azure-npm pod in kube-system, which triggers a restart and removes the iptables rule.
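For reference, this is roughly how I inspect the chain and apply the workaround described above (the `k8s-app=azure-npm` label selector is an assumption based on the default azure-npm DaemonSet; adjust it to match your cluster):

```shell
# On an affected node: list the azure-npm chain to confirm the stale DROP rule
# is still present after the policies have been deleted.
iptables -L AZURE-NPM-TARGET-SETS -n

# Workaround: delete the azure-npm pod(s) in kube-system. The DaemonSet
# recreates them, and on restart the stale iptables rule is gone.
kubectl -n kube-system delete pod -l k8s-app=azure-npm
```

This only works around the symptom; the stale rule should of course be cleaned up by azure-npm itself when the policies are removed.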
We are coming from a self-hosted Kubernetes cluster with Calico, where the same network policies work without problems.
Cheers,
Mike