You mentioned that the error message was about port 443 over TCP:
> This indicates that the webhook pod is trying to connect to kube-apiserver on 443/TCP.
How does it relate to 6443/TCP?
That's the thing I don't understand, but I didn't bother to investigate any further, since it obviously wants to connect to the Kubernetes API, which is on 6443.
Ah, I think I understand now. The Kubernetes API server actually listens on 6443/TCP, and the Service `default/kubernetes` "listens" on 443/TCP:
That looks plausible. I actually did check the IP from the error but didn't find a Service with it; I might have overlooked it, though.
I guess the packets flow like this:
What I wonder is: why is this egress rule needed at all? I imagine OKD ships with a NetworkPolicy that allows any pod to reach kube-apiserver, no?
Does it work the same if you configure an ingress rule for the control plane so that traffic on 6443/TCP from 0.0.0.0/0 is allowed?
I would rather not try this, not least because an ingress allow rule on the apiserver pod does not allow egress from the webhook pod.
That makes sense. Actually, an ingress rule would not even have an effect, since NetworkPolicy only applies to traffic from/to pods, and the kube-apiserver process runs in the host network namespace (cf. "[You can't] prevent incoming host traffic", source).
To sum up:
- By default, network policies are "allow all", including with OVN-Kubernetes (source).
- The cert-manager controller is able to talk to the Kubernetes API server without a problem, since there is no network policy attached to it, thus "allow all".
- Since you use `--set webhook.networkPolicy=true`, traffic from and to the cert-manager webhook is "deny all", with the ingress and egress exceptions given in values.yaml. Among these exceptions, 443/TCP seems intended to allow traffic to the Kubernetes API server.
- But it doesn't work for you because of the clusterIP rewriting (dst 172.30.0.1:443 is changed to dst 100.64.0.1:6443).
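The egress exception under discussion would then look roughly like this. A sketch only: the policy name and the webhook pod label are illustrative, not taken from the actual chart templates:

```yaml
# Illustrative egress policy for the cert-manager webhook pod.
# 443/TCP covers clusters where kube-apiserver is reachable on 443;
# 6443/TCP is additionally needed where (as on OpenShift/OKD) the CNI
# rewrites dst clusterIP:443 to the apiserver's real endpoint on 6443.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cert-manager-webhook-allow-egress   # illustrative name
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: webhook       # illustrative label
  policyTypes:
    - Egress
  egress:
    - ports:
        - port: 443
          protocol: TCP
        - port: 6443
          protocol: TCP
```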
What I was wondering was: why is it working for other people but not for you?

I think I understand: most Kubernetes clusters use `443` to expose kube-apiserver (for example, on GKE). I am now certain that fixing this will also help other users, since OpenShift and OKD use port 6443 for kube-apiserver. I am confused as to why this issue hasn't popped up earlier, but maybe there aren't that many OpenShift clusters with a network policy controller running!
For anyone else looking at the values.yaml file, I think it is worth adding a comment explaining why egress on 6443 is needed.
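One possible shape for such a comment (the surrounding keys are a sketch of the values.yaml structure, not a verbatim excerpt):

```yaml
webhook:
  networkPolicy:
    egress:
      ports:
        - port: 443
          protocol: TCP
        # On OpenShift/OKD (and other distributions where kube-apiserver
        # listens on 6443), the default/kubernetes clusterIP:443 is
        # rewritten to the apiserver's real 6443 endpoint, so egress on
        # 6443/TCP must be allowed as well.
        - port: 6443
          protocol: TCP
```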
Ahh, now I see what you were asking. Sorry, I completely forgot it's not the default for the API to be on :6443, but yes, exactly!