Skipping clusterrole and clusterrolebinding when scope is restricted does not make sense #8914
Comments
Why does the controller browse the nodes? Does the controller need to browse the nodes, meaning the current deployment has a bug?
I believe the controller browses the nodes as part of the election process. It may also need access for certain cloud provider types. Either way, it is definitely not a bug; as long as I've been using RBAC with k8s, I've had to include some cluster-scope privileges for nginx-ingress.
If it's not a bug in the nginx ingress controller, then is it a bug in the helm deployment? Given the error log, both can't be right; one of them must have a bug/misconfiguration. Am I missing something?
It's a bug in the helm chart: the assumption was made that if the controller were created in a namespace scope, it wouldn't need access to cluster-level activities. My suggestion would be to revert that assumption and instead make it a flag. That said, I'm not sure the ingress will behave consistently as long as it doesn't have access to the nodes, which may suggest simply reverting the original PR.
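The flag approach suggested above could look roughly like this in the chart's `values.yaml`. The attribute name and default shown here are hypothetical illustrations, not part of the actual chart:

```yaml
rbac:
  create: true
  # Hypothetical flag (name is illustrative only): when true, skip
  # creating the ClusterRole/ClusterRoleBinding even when a namespace
  # scope is configured, for clusters where the operator cannot create
  # cluster-scoped objects.
  disableClusterRole: false
```

This keeps the old behavior (cluster role created) as the default, so restricted-access users opt in explicitly rather than losing node access by surprise.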
+1, it's #6186, authored by @danigrmartinez and merged by @unguiculus. The author stated:
Can confirm, it doesn't work correctly when you specify a different namespace. I want the
The PR makes it impossible to reach line 24, which grants access to the namespace in
@schottsfired thanks for pinging me. The change was made to enable deploying the ingress controller to a cluster where you have access only to your namespace, nothing else. @cdaniluk Definitely that is not ideal; nginx-ingress should have a way to disable the listing of the nodes for those cases, like the HAproxy ingress has. My bad, I didn't go deep on all the cases. Please revert and I will make sure to add the right flag to disable ClusterRoles for cases like mine.
SGTM! @unguiculus, would you kindly revert #6186? Thanks
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
This issue is being automatically closed due to inactivity. |
This is still broken. I just ran through the same use case @schottsfired did and I'm still running into the issue.
Hi, I have also just run into this problem; it seems not to be fixed.
still an issue |
@danigrmartinez and @unguiculus, it sounds like there is still an issue here. Can this be re-opened? Thank you
I've been looking into it a bit. For other people who arrive at this issue: we're dealing with an upstream problem in the k8s nginx ingress controller, kubernetes/ingress-nginx#3887. So yes, it seems cluster roles won't be required any more once you get the proper version of the k8s ingress controller (0.24.0 or higher); from that issue, the errors seem to be transient (can't confirm). For more information, please refer to #12510 (comment).

EDIT: follow-up kubernetes/ingress-nginx#3817
I tried deploying the latest helm chart and I still get this issue; pinning the helm chart to 1.6.8 solves it. Am I doing something wrong?
@severity1 yeah, same here. Please have a look at kubernetes/ingress-nginx#3817
PR #6186 skips setting clusterrole and clusterrolebinding when the scope is set. This isn't always the right decision. As one example, you could deploy to a default namespace but monitor a more restrictive namespace for services.
More importantly, it creates obnoxious logging, as the nginx controllers browse nodes by design. See the example below of what gets spammed in the logs when the cluster uses RBAC but the cluster role is not created:
E1031 00:12:11.999723 6 main.go:45] Error getting node ip-10-....ec2.internal: nodes "ip-10-....ec2.internal" is forbidden: User "system:serviceaccount:dev:fun-toad-nginx-ingress" cannot get nodes at the cluster scope
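For reference, the cluster-scope access the controller is asking for in that log line amounts to read access on nodes. A minimal sketch of the objects the chart skips creating, with the name and namespace taken from the error message above (illustrative only):

```yaml
# Sketch of the missing cluster-scoped RBAC objects; names match the
# service account from the error log above and are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fun-toad-nginx-ingress
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fun-toad-nginx-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fun-toad-nginx-ingress
subjects:
  - kind: ServiceAccount
    name: fun-toad-nginx-ingress
    namespace: dev
```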
Further, template/clusterrole.yaml already had different logic for cases where rbac was true but scope was restricted by namespace. I think it would be more sensible to introduce a separate disableClusterRole attribute in either the rbac or scope section; rbac seems to make more sense. I can submit a PR for this but wanted to make sure I wasn't missing anything first.
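A sketch of how that proposal could land in the chart template, assuming a hypothetical rbac.disableClusterRole value (not part of the current chart; the fullname helper is also assumed):

```yaml
# Hypothetical guard in template/clusterrole.yaml: create the ClusterRole
# unless the new opt-out flag is set, regardless of namespace scope.
{{- if and .Values.rbac.create (not .Values.rbac.disableClusterRole) }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ template "nginx-ingress.fullname" . }}
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
{{- end }}
```

With this shape, restricted-namespace users who cannot create cluster-scoped objects set the flag explicitly, and everyone else keeps node access by default.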