NetworkPolicies still block egress traffic to OIDC provider #8407

Closed
dakraus opened this issue Dec 7, 2021 · 4 comments · Fixed by #8419
Labels
kind/bug Categorizes issue or PR as related to a bug. sig/networking Denotes a PR or issue as being assigned to SIG Networking.

Comments


dakraus commented Dec 7, 2021

Description
The changes implemented as part of #8255 unfortunately do not allow egress traffic from the apiserver of a user cluster to the OIDC provider in all cases. In the following situation, the network policy oidc-issuer-allow still blocks the outgoing traffic:

  1. The cloud-controller-manager uses the IP address (instead of e.g. the DNS name) for the field .status.loadBalancer.ingress of a service of type LoadBalancer (see the Service sketch after this list)
  2. The NGINX ingress controller is exposed via a service of type LoadBalancer
  3. The OIDC provider is exposed via an Ingress
  4. The OIDC provider and the apiserver of a user cluster are running in the same Kubernetes cluster (e.g. the Kubernetes cluster hosting the KKP components is also used as seed cluster)
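
To illustrate points 1 and 2, a Service of type LoadBalancer whose status carries a plain IP address (rather than a hostname) could look roughly like the following sketch; the names and the IP 203.0.113.10 are placeholders, not values taken from the affected environment:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller        # hypothetical name for the ingress controller service
  namespace: nginx-ingress-controller
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress-controller
  ports:
    - name: https
      port: 443
      targetPort: 443
status:
  loadBalancer:
    ingress:
      - ip: 203.0.113.10                 # an IP here (instead of a hostname) is what enables the in-cluster short-circuit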

In the situation described above, the outgoing requests from the apiserver to the OIDC provider are not routed to the external IP address of the load balancer, but directly to the cluster-internal IP addresses of the endpoints of the NGINX ingress controller service, and therefore bypass the load balancer completely. Since the network policy oidc-issuer-allow whitelists only the external IP address of the configured OIDC provider (in this case the external IP address of the load balancer), all requests are blocked.
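
For reference, a minimal sketch of such an egress policy follows; the namespace, pod labels, and IP are illustrative assumptions, not the exact policy rendered by KKP:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: oidc-issuer-allow
  namespace: cluster-xyz            # hypothetical user-cluster control-plane namespace
spec:
  podSelector:
    matchLabels:
      app: apiserver                # assumed label of the apiserver pods
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.10/32   # external IP of the OIDC issuer (here: the load balancer)
      ports:
        - protocol: TCP
          port: 443
# Requests short-circuited to the pod IPs of the NGINX ingress controller endpoints do not
# match the ipBlock above and are therefore dropped.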

This behavior is known and described in kubernetes/kubernetes#66607 and KEP-1860.

Steps to reproduce

  • Provision Kubernetes cluster with KubeOne on GCP
  • Install KKP CE as described in the official documentation
  • Add existing Kubernetes cluster as seed
  • Enable the functionality to share clusters via delegated OIDC authentication
  • Download the kubeconfig for a user cluster via the link received from the "Share" button in the Kubermatic dashboard
  • Try to connect to the user cluster e.g. via kubectl --kubeconfig=... cluster-info
  • You should receive the following error message:
error: You must be logged in to the server (Unauthorized)

Environment

  • Cloud provider: GCP
  • Cloud controller manager: in-tree
  • Kubernetes version (master/seed): 1.22.4
  • CNI plugin: canal
  • Kubermatic edition: CE
  • Kubermatic version: v2.18.3
dakraus added the kind/bug and sig/networking labels on Dec 7, 2021
rastislavs self-assigned this on Dec 7, 2021
@rastislavs

Thanks for reporting @dakraus, will take care of this case as well.


dakraus commented Dec 7, 2021

You're welcome and thank you for taking care of this one.


rastislavs commented Dec 8, 2021

For the record, it seems that the issue manifests itself only in the iptables kube-proxy mode (I wasn't able to reproduce it with IPVS).


dakraus commented Dec 8, 2021

@rastislavs the cluster where I discovered this behaviour is running kube-proxy in ipvs mode. This is the KubeOne manifest of the cluster (used as KKP master/seed cluster):

apiVersion: kubeone.io/v1beta1
kind: KubeOneCluster
name: kkp-master
versions:
  kubernetes: 1.22.4
cloudProvider:
  gce: {}
  cloudConfig: |-
    [global]
    regional  = true
clusterNetwork:
  kubeProxy:
    ipvs: {}

Feel free to contact me if you need any more information; the cluster is currently up and running.
