NetworkPolicy Rules not working with Services #2088

Closed

ecowden opened this issue Jul 31, 2018 · 4 comments

ecowden commented Jul 31, 2018

NetworkPolicy Ingress rules are applied when connecting to Pods directly, but not when connecting through a Service. Services are of type ClusterIP.

For reference, the NetworkPolicy looks something like this:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-example-app-ingress
spec:
  podSelector:
    matchLabels:
      app: example-app
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        # My workstation's CIDR
        cidr: ...

Conversely, if I create a NetworkPolicy rule that allows traffic from the host network, traffic is allowed through the Service regardless of the source (wrong), and clients cannot connect directly to pods (right).
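
For illustration, the host-network variant of that rule would look roughly like the following; the node CIDR shown is hypothetical:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-host-network-ingress
spec:
  podSelector:
    matchLabels:
      app: example-app
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        # Hypothetical CIDR covering the cluster nodes' host network
        cidr: 10.0.0.0/24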

Looks the same as issue #1683.

I've verified that kube-proxy's --cluster-cidr=... argument is set to the Pod network, and that it excludes the Service network.
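
For reference, a sketch of how that flag might be set in a kube-proxy static pod manifest; the image tag and CIDR here are illustrative, not taken from this issue:

apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: k8s.gcr.io/kube-proxy-amd64:v1.11.0
    command:
    - /usr/local/bin/kube-proxy
    # Covers only the Pod network and excludes the Service network
    - --cluster-cidr=192.168.0.0/16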

Expected Behavior

NetworkPolicy Ingress rules should be enforced based on the original client when routing through Services, not the host where kube-proxy is running.

Current Behavior

NetworkPolicy through services is enforced based on the location of the kube-proxy instance.

Possible Solution

Unsure.

Steps to Reproduce (for bugs)

  1. Create a Pod or other workload in a Kubernetes cluster with Calico configured for BGP peering.
  2. Create a Service directing traffic to the Pod.
  3. Create NetworkPolicy Ingress rules to allow and deny traffic as desired.
  4. NetworkPolicy rules are enforced correctly when communicating with Pods directly. When directing traffic through the Service, NetworkPolicy is enforced based on the node where kube-proxy handles the traffic, not the original client.

Context

This prevents us from using Kubernetes NetworkPolicy to control access to workloads in the Kubernetes cluster.

Your Environment

  • Calico version: 2.6.8
  • Orchestrator version (e.g. kubernetes, mesos, rkt): Kubernetes v1.11.0
  • Operating System and version: RHEL v7.4

Kubernetes is configured using Calico with BGP peering to make the Pod network routable from outside the cluster.

Thanks in advance!

Edit: The original version of this issue was posted prematurely with an errant mouse click. Apologies for any confusion!

tmjd commented Aug 9, 2018

I'm guessing, based on the comment in your NetworkPolicy, that you want policy to apply to traffic coming from outside of the cluster. With a Service of type ClusterIP, traffic from inside the cluster will keep the client's source IP, but traffic from outside the cluster will be SNAT'ed. Kubernetes has no concept of NetworkPolicy applying to Services.
You can read https://kubernetes.io/docs/concepts/services-networking/network-policies/ and see that NetworkPolicy only applies to pods, which I believe is the behavior you are seeing.

With Calico, one thing you can do is use host protection to limit access to the node, and then create a Calico GlobalNetworkPolicy to restrict the traffic. Check out https://docs.projectcalico.org/v3.1/getting-started/bare-metal/bare-metal
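
A minimal sketch of that approach, assuming a node named node-1 with interface eth0 and a hypothetical trusted CIDR (all names and addresses here are illustrative):

apiVersion: projectcalico.org/v3
kind: HostEndpoint
metadata:
  name: node-1-eth0
  labels:
    role: k8s-node
spec:
  node: node-1
  interfaceName: eth0
---
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-trusted-ingress
spec:
  # Selects the host endpoints labeled above, not pods
  selector: role == 'k8s-node'
  types:
  - Ingress
  ingress:
  - action: Allow
    source:
      nets:
      # Hypothetical workstation CIDR
      - 192.0.2.0/24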

caseydavenport commented

Yep, as tmjd mentioned, the way to do this is to use something like Calico host enforcement to control access from outside the cluster, or to use Services with externalTrafficPolicy: Local, which won't perform SNAT on the traffic.
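
A minimal sketch of the second option, assuming a hypothetical NodePort Service in front of the example-app pods (externalTrafficPolicy applies to NodePort and LoadBalancer Services):

apiVersion: v1
kind: Service
metadata:
  name: example-app
spec:
  type: NodePort
  selector:
    app: example-app
  # Route only to endpoints on the receiving node, preserving the
  # client's source IP instead of SNAT'ing across nodes
  externalTrafficPolicy: Local
  ports:
  - port: 80
    targetPort: 8080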

> NetworkPolicy Ingress rules should be enforced based on the original client when routing through Services, not the host where kube-proxy is running.

This is currently not feasible given that k8s NP is defined in terms of pods, not services, and so prior to hitting the kube-proxy we don't know the ultimate destination of the traffic.

inish777 commented

@caseydavenport @tmjd I have the same problem as @ecowden, and as I understand it, traffic from the pod network (specified in the --cluster-cidr option of kube-proxy) should not be SNAT'ed. But I have an IP pool with nat-outgoing: true, and I suspect that traffic to the Kubernetes Service IP range is being masqueraded. If so, it would be good to get rid of that, but I still need masquerading for connections to resources outside of the pod and Service networks. Any thoughts?
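
For reference, in the Calico v3 API that setting lives on the IPPool resource; a minimal sketch with a hypothetical Pod CIDR:

apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  # Hypothetical Pod network CIDR
  cidr: 192.168.0.0/16
  # SNAT only traffic whose destination is outside all Calico IP pools
  natOutgoing: true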

ecowden commented Oct 13, 2018

Ultimately, we decided to go with an external, explicit load-balancing solution. In our case, we wrote an operator using kubebuilder to make the load balancer do exactly what we want, because of some very specific requirements. For a less work-intensive solution, you may want to consider a tool like MetalLB.

Good luck! 😁
