Add 6443/TCP to webhook egress NetworkPolicy #5788

Merged
2 changes: 2 additions & 0 deletions deploy/charts/cert-manager/values.yaml
@@ -420,6 +420,8 @@ webhook:
        protocol: TCP
      - port: 53
        protocol: UDP
+     - port: 6443
+       protocol: TCP
Comment on lines +425 to +426
@maelvls (Member), Feb 9, 2023
You mentioned that the error message was about port 443 over TCP:

webhook: error building admission chain: Get https://172.30.0.1:443/api: dial tcp 172.30.0.1:443: i/o timeout

This indicates that the webhook pod is trying to connect to kube-apiserver on 443/TCP.

How does it relate to 6443/TCP?

@ExNG (Contributor Author), Feb 9, 2023

That's the thing I don't understand, but I didn't bother to investigate any further, since it obviously wants to connect to the Kubernetes API, which is on 6443.

@maelvls (Member), Feb 9, 2023

Ah, I think I understand now. The Kubernetes API server actually listens on 6443/TCP, and the Service default/kubernetes "listens" on 443/TCP:

# kubectl get svc kubernetes -oyaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes
  namespace: default
spec:
  clusterIP: 10.0.0.1
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 6443
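
The Service's port 443 maps to targetPort 6443 through its Endpoints object, which looks roughly like this (a sketch; the address is illustrative, taken from the diagram further down in this thread):

apiVersion: v1
kind: Endpoints
metadata:
  name: kubernetes
  namespace: default
subsets:
- addresses:
  - ip: 100.64.0.1   # illustrative control-plane host IP
  ports:
  - name: https
    port: 6443       # the port traffic actually hits after the rewrite
    protocol: TCP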

@ExNG (Contributor Author)

That looks plausible. I actually did check the IP from the error but didn't find a service with it; I might have overlooked it, though.

@maelvls (Member), Feb 9, 2023

I guess the packets flow like this:

 worker node                                  
 host IP:  100.64.0.2                         
 pod cidr: 10.28.0.0/24                       
 +-------------------------------------------+
 |                                           |
 |   +----------------------------------+    |
 |   |     cert-manager-webhook pod     |    |
 |   |                                  |    |
 |   | src: 10.28.0.5:60123 (podIP)     |    |
 |   | dst: 172.30.0.1:443  (clusterIP) |    |
 |   |             |                    |    |
 |   +-------------|--------------------+    |
 |                 |                         |
 |                 v                         |
 |      src: 10.28.0.5:60123    (podIP)      |
 |     -dst: 172.30.0.1:443     (clusterIP)  |
 |     +dst: 100.64.0.1:6443                 |
 |                 |                         |
 |                 |                         |
 |                 |                         |
 |                 |                         |
 +-----------------|-------------------------+
                   |                          
                   |                          
                   X   REFUSED                
                   |                          
                   |                          
                   v                          
    +-----------------------------------+     
    |         kube-apiserver            |     
    +-----------------------------------+     
    control plane node                        
    host IP: 100.64.0.1   
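
The dst rewrite in the middle of the diagram is kube-proxy's clusterIP DNAT. On an iptables-mode node, the effect is roughly equivalent to a rule like this (a deliberate simplification; real kube-proxy rules are spread across the KUBE-SERVICES/KUBE-SVC-*/KUBE-SEP-* chains):

    iptables -t nat -A PREROUTING -d 172.30.0.1/32 -p tcp --dport 443 \
      -j DNAT --to-destination 100.64.0.1:6443

If the network policy is enforced against the post-DNAT destination, which is what the symptoms here suggest, the egress rule has to allow 6443/TCP even though the pod only ever dials 443.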

What I wonder is... Why is this egress rule needed? I imagine OKD ships with a NetworkPolicy for accessing kube-apiserver from any pod, no?

@maelvls (Member), Feb 10, 2023

Does it work the same if you configure an ingress rule for the control plane so that traffic on 6443/TCP from 0.0.0.0/0 is allowed?
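
For concreteness, the rule floated here would look something like the following (a hypothetical sketch; as the next comments explain, it would not actually help):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-apiserver-ingress   # hypothetical name
spec:
  podSelector: {}                 # would need to select the apiserver pod
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0
    ports:
    - port: 6443
      protocol: TCP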

@ExNG (Contributor Author)

I would rather not try this, not least because an ingress allow rule for the apiserver pod does not allow egress from the webhook pod.

@maelvls (Member)

That makes sense. Actually, an ingress rule would not even have an effect, since NetworkPolicy only works from/to pods, but the kube-apiserver process runs in the host namespace (cf. "[You can't] prevent incoming host traffic", source).

To sum up:

  • By default, network policies are "allow all", including with OVN-kubernetes (source).

  • The cert-manager controller is able to talk to the Kubernetes API server without a problem since there is no network policy attached to it, thus "allow all".

  • Since you use --set webhook.networkPolicy=true, traffic from and to the cert-manager webhook is "deny all" with the ingress and egress exceptions given in values.yaml.

  • Among these exceptions, 443/TCP seems to allow traffic to the Kubernetes API server.

  • But it doesn't work for you because of the clusterIP re-writing (dst 172.30.0.1:443 is changed to dst 100.64.0.1:6443); a one-liner for checking this on any cluster follows the list.

    $ k get endpoints -n default kubernetes
    NAME         ENDPOINTS         AGE
    kubernetes   100.64.0.1:6443   26d
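
That check, in one-liner form (the jsonpath path matches the Endpoints shape above, so it should work on any cluster):

    $ kubectl get endpoints kubernetes -n default -o jsonpath='{.subsets[0].ports[0].port}'
    6443

If this prints 443 rather than 6443, the extra egress rule is not strictly needed.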

What I was wondering was: why is it working for other people but not you?

I think I understand: most Kubernetes clusters use 443 to expose kube-apiserver. For example, on GKE:

$ k get endpoints -n default kubernetes
NAME         ENDPOINTS            AGE
kubernetes   104.199.89.236:443   3y140d

I am now certain that this fix will also help other users, since OpenShift and OKD use port 6443 for kube-apiserver. I am confused as to why this issue hasn't popped up earlier, but maybe there aren't that many OpenShift clusters with a network policy controller running!

@maelvls (Member)

For anyone else looking at the values.yaml file, I think it is worth adding a comment explaining why egress 6443 is needed.

Suggested change
-     - port: 6443
-       protocol: TCP
+     # On OpenShift and OKD, the Kubernetes API server listens on
+     # port 6443.
+     - port: 6443
+       protocol: TCP

@ExNG (Contributor Author)

Ahh, now I see what you were asking. Sorry, I completely forgot it's not the default for the API to be on :6443. But yes, exactly!

      to:
      - ipBlock:
          cidr: 0.0.0.0/0
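
For reference, the whole egress block in values.yaml after this change reads roughly as follows (reconstructed from the hunks above; the 80, 443, and 53/TCP entries are assumptions, since this view only shows the edges of the hunk):

webhook:
  networkPolicy:
    egress:
    - ports:
      - port: 80         # assumed, not visible in this diff view
        protocol: TCP
      - port: 443        # assumed; the pre-rewrite apiserver Service port
        protocol: TCP
      - port: 53
        protocol: TCP    # assumed; only the UDP entry is visible above
      - port: 53
        protocol: UDP
      # On OpenShift and OKD, the Kubernetes API server listens on
      # port 6443.
      - port: 6443
        protocol: TCP
      to:
      - ipBlock:
          cidr: 0.0.0.0/0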