
Azure LB and Network Policy Issue #30134

Closed

m-raman opened this issue Apr 26, 2019 — with docs.microsoft.com · 8 comments

m-raman commented Apr 26, 2019

My setup looks like this:

Service (Azure Load Balancer in its own subnet, 10.10.14.0/24) --> AKS cluster (deployed in a different subnet, 10.10.15.0/24).

The front-end pod is a web server with IP 10.10.15.21.

I want to ensure the pod can only be reached from the Azure Load Balancer.

The moment I apply the following network policy to allow traffic only from the LB subnet (10.10.14.0/24), the setup stops working:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-azure-vote-front
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: azure-vote-front
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.10.14.0/24

Document Details

Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.

jakaruna-MSFT (Contributor) commented

Thanks for the feedback! We are currently investigating and will update you shortly.

jakaruna-MSFT (Contributor) commented

@m-raman, a similar issue is here: #30133.
I will close this issue and we will track the improvements in the other one.


m-raman commented Apr 26, 2019

@jakaruna-MSFT Apologies for the confusion. The two issues are different; please re-open this one. This scenario has nothing to do with Calico network policy: here I am using Azure network policy. As I mentioned, I want the app labeled azure-vote-front to receive traffic only from the LB subnet (10.10.14.0/24), and not from any other subnet within the VNet. But the moment I whitelist the Azure Load Balancer subnet, all access to the app labeled azure-vote-front stops working, whether it comes through the Azure Load Balancer or from other subnets within the VNet.

I don't think this is normal. After applying the network policy in my original post, I should still be able to access azure-vote-front through the Load Balancer.

Below are the details of my deployment manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-back
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: azure-vote-back
        image: redis
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 6379
          name: redis

apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back

apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-front
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: azure-vote-front
        image: microsoft/azure-vote-front:v1
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"

apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "lbsubnet"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.10.14.14
  ports:
  - port: 80
  selector:
    app: azure-vote-front

Azure Network Policy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-azure-vote-front
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: azure-vote-front
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.10.14.0/24

jakaruna-MSFT reopened this Apr 26, 2019
jakaruna-MSFT (Contributor) commented

@m-raman I deployed an AKS cluster. The AKS subnet's CIDR is 10.240.0.0/16. I created a new subnet with CIDR 10.0.0.0/24, then created an Ubuntu instance in the new subnet. Finally, I created a network policy which applies to an nginx pod, as shown below.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: backend-policy
  namespace: development
spec:
  podSelector:
    matchLabels:
      app: webapp
      role: backend
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.0/24

With this policy I was able to access the nginx pod from the Ubuntu instance. Traffic is blocked from other pods.

Nginx logs below. The Ubuntu instance's IP is 10.0.0.4; 10.240.0.67 is another pod's IP which I used to test before applying the policy.

PS D:\playground\aks> k logs backend -f -n development
10.240.0.67 - - [30/Apr/2019:09:43:52 +0000] "GET / HTTP/1.1" 200 612 "-" "Wget" "-"
10.240.0.82 - - [30/Apr/2019:10:27:07 +0000] "GET / HTTP/1.1" 200 612 "-" "Wget" "-"
10.0.0.4 - - [30/Apr/2019:12:32:34 +0000] "GET / HTTP/1.1" 200 612 "-" "Wget/1.19.4 (linux-gnu)" "-"

I tested this scenario with 2 clusters:

  • AKS (1.11.9 + Calico)
  • AKS (1.12.6 + Azure networking plugin)

Can you try once again and let me know your results?


m-raman commented May 2, 2019

@jakaruna-MSFT I know this works. This is not the problem I have highlighted.

Setup:

(screenshot)

Working access through 10.10.14.14 (Internal Load Balancer IP):

(screenshot)

After applying the Azure network policy from my earlier message, access through the load balancer IP stops working:

(screenshot)

This is not the expected behavior.

jakaruna-MSFT (Contributor) commented

@m-raman Got it. I didn't notice the annotation on the service.
This will happen for all cloud load balancers which use a NodePort in the background. It happens because kube-proxy masks the incoming IP.
We can overcome this by setting externalTrafficPolicy to Local, or by setting an annotation on the service.
Please look at "Preserving the client source IP" in the Kubernetes docs.
A related issue is here.

In your environment, kube-proxy will have an IP like 10.10.15.*. When we access the azure-vote-front service from a client (10.10.14.*), the real client IP is not passed; only the kube-proxy IP is seen by azure-vote-front, which is why your ipBlock rule for 10.10.14.0/24 blocks everything. When we set externalTrafficPolicy to Local, the client IP is preserved.

Also note the warning about load distribution below. As long as you run only one pod per deployment on one node, you are fine.

service.spec.externalTrafficPolicy - denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints. There are two available options: “Cluster” (default) and “Local”. “Cluster” obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading. “Local” preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type services, but risks potentially imbalanced traffic spreading.

I verified it in my environment and it works well. Please try this out and let me know.
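
For reference, here is a minimal sketch of that change applied to the front-end Service from the manifests above; only the externalTrafficPolicy line is new, everything else is taken from the original Service:

apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "lbsubnet"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.10.14.14
  # Preserve the client source IP so the ipBlock rule for 10.10.14.0/24 can match.
  externalTrafficPolicy: Local
  ports:
  - port: 80
  selector:
    app: azure-vote-front

With Local, keep in mind the load-distribution caveat quoted above: only nodes that run a ready azure-vote-front pod will receive traffic from the load balancer.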

jakaruna-MSFT (Contributor) commented

@m-raman I will close this out for now. If you need additional help, please mention me in a comment and we can reopen and continue.

JamesDLD commented

Just had the same concern; the solution was to authorize the CIDR of the Azure subnet where you built your AKS cluster. That is actually where the Azure Load Balancer pulls its IPs from, so with the default Cluster traffic policy the source IPs the pod sees fall inside that range.
There is no need to work around this by setting externalTrafficPolicy to Local on the service; you can leave the default option, which is Cluster.

The Network Policy looks like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myapp
  namespace: demo
spec:
  egress:
  - {}
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.0.0/22 #This is the Azure Subnet where you built your AKS cluster
    ports:
    - port: 8000 #Deployment container_port
      protocol: TCP
  podSelector:
    matchExpressions:
    - key: front
      operator: In
      values:
      - myapp-webserver
  policyTypes:
  - Ingress
  - Egress

The Service looks like this:

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  labels:
    front: myapp-webserver
  name: myapp
  namespace: demo
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.0.x.x
  clusterIPs:
  - 10.0.x.x
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerIP: 192.168.0.34
  ports:
  - name: http
    nodePort: 30899
    port: 8001
    protocol: TCP
    targetPort: 8000 #Deployment container_port
  selector:
    front: myapp-webserver
  sessionAffinity: None
  type: LoadBalancer

And make sure your Load Balancer is healthy:

(screenshot)

PRMerger7 added the Pri2 label Jun 16, 2022