Azure LB and Network Policy Issue #30134
Comments
Thanks for the feedback! We are currently investigating and will update you shortly.
@jakaruna-MSFT Apologies for the confusion. The two issues are different; please reopen this. This scenario has nothing to do with Calico network policy; here I used Azure network policy. As I mentioned, I want to allow traffic to the app labelled azure-vote-front only from the LB subnet (10.10.14.0/24), and not from any other subnet within the vnet. But the moment I whitelist the Azure Load Balancer subnet, all access to the app labelled azure-vote-front stops working, whether through the Azure Load Balancer or from other subnets within the vnet. I don't think this is normal: with the network policy from my original post applied, I should still be able to reach azure-vote-front through the Load Balancer. Below are the details of my deployment manifest:

apiVersion: apps/v1
@m-raman I deployed the AKS cluster and applied the following network policy:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: backend-policy
  namespace: development
spec:
  podSelector:
    matchLabels:
      app: webapp
      role: backend
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.0/24

With this policy I was able to access that nginx pod from the Ubuntu instance, and traffic from other pods is blocked. In the nginx logs, the Ubuntu instance IP is 10.0.0.4, and 10.240.0.67 is another pod's IP that I used to test before applying the policy.
I tested this scenario with 2 clusters.
Can you try once again and let me know your results?
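For context, a minimal sketch of a backend Deployment whose pod labels would match the podSelector above; the name and image are assumptions for illustration, not taken from this thread:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-backend          # hypothetical name
  namespace: development
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
      role: backend
  template:
    metadata:
      labels:
        app: webapp             # must match the NetworkPolicy podSelector
        role: backend
    spec:
      containers:
      - name: nginx
        image: nginx            # assumed image; the thread only mentions an nginx pod
        ports:
        - containerPort: 80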
@jakaruna-MSFT I know this works; this is not the problem I have highlighted. In my setup, access through 10.10.14.14 (the internal Load Balancer IP) works until I apply the Azure network policy from my earlier message; after that, access through the load balancer IP stops working. This is not the expected behavior.
@m-raman Got it. I didn't notice the annotation on the service. In your environment, kube-proxy will have an IP like 10.10.15.*. When a client (10.10.14.*) accesses the azure-vote-front service, the real client IP is not passed to azure-vote-front; only the kube-proxy (node) IP is, so the ipBlock rule for 10.10.14.0/24 never matches. When you set externalTrafficPolicy to Local, the client IP is preserved. Also note the warning about load distribution with externalTrafficPolicy: Local: as long as you run only one pod per deployment on one node, you are fine.
I verified it in my environment and it works well. Please try this out and let me know.
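A minimal sketch of what that change could look like on the azure-vote-front Service; the port values are assumptions based on the standard azure-vote sample, not taken from this thread:

apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP so the 10.10.14.0/24 ipBlock can match
  selector:
    app: azure-vote-front
  ports:
  - port: 80                     # assumed; the azure-vote sample serves HTTP on port 80
    targetPort: 80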
@m-raman I will close this out for now. If you need additional help, please mention me in a comment and we can reopen and continue.
I had the same concern; the solution was to authorize the CIDR of the Azure subnet where you built your AKS cluster, since that is where the Azure Load Balancer pulls its IP from. The network policy looks like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myapp
  namespace: demo
spec:
  egress:
  - {}                        # allow all egress
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.0.0/22  # the Azure subnet where the AKS cluster is built
    ports:
    - port: 8000              # Deployment containerPort
      protocol: TCP
  podSelector:
    matchExpressions:
    - key: front
      operator: In
      values:
      - myapp-webserver
  policyTypes:
  - Ingress
  - Egress

The Service looks like this:

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  labels:
    front: myapp-webserver
  name: myapp
  namespace: demo
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.0.x.x
  clusterIPs:
  - 10.0.x.x
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerIP: 192.168.0.34
  ports:
  - name: http
    nodePort: 30899
    port: 8001
    protocol: TCP
    targetPort: 8000   # Deployment containerPort
  selector:
    front: myapp-webserver
  sessionAffinity: None
  type: LoadBalancer
My setup looks like this:
Service (Azure Load Balancer in its own subnet - 10.10.14.0/24) --> AKS Cluster (Deployed in different subnet - 10.10.15.0/24).
Front-end pod: a web server with IP 10.10.15.21.
I want to ensure the pod can only be reached from the Azure Load Balancer.
The moment I apply the following network policy to allow traffic only from the LB subnet (10.10.14.0/24), the setup stops working:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-azure-vote-front
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: azure-vote-front
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.10.14.0/24
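Based on the discussion in this thread, there appear to be two ways to make a policy like this behave as intended: keep it as-is and set externalTrafficPolicy: Local on the Service (see the Service sketch earlier in the thread), or keep externalTrafficPolicy: Cluster and also allow the AKS node subnet, since with Cluster the pod sees the node's SNAT'd source IP. A sketch of the second option, assuming 10.10.15.0/24 is the cluster subnet described in the setup above:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-azure-vote-front
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: azure-vote-front
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.10.14.0/24   # internal Load Balancer subnet
    - ipBlock:
        cidr: 10.10.15.0/24   # AKS node subnet; LB traffic arrives with a node source IP under externalTrafficPolicy: Cluster

Note that with Azure CNI this second variant also admits traffic from anything else in the node subnet, including other pods, so it is less restrictive than the Local approach.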