201-aks-rbac-dashboard-admin patch #148
Conversation
Compared 8ec2a08 to 8ce4d95
```hcl
  name = "${var.name}-${var.environment}"
}

resource "azuread_service_principal" "default" {
```
The README.md file has a table of the resources that should be updated.
```diff
-# Grant cluster-admin rights to the kubernetes-dashboard account.
-# THIS IS NOT RECOMMENDED IN PRODUCTION
+## Grant cluster-admin rights to the kubernetes-dashboard account.
+## THIS IS NOT RECOMMENDED IN PRODUCTION
 resource "kubernetes_cluster_role_binding" "dashboard" {
```
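For context, a binding like the one under review can be written roughly as follows. This is a sketch, not the exact code from the example; field names follow the hashicorp/kubernetes provider, and the values mirror the `kubectl` output quoted later in this thread:

```hcl
# Sketch: grant cluster-admin to the kubernetes-dashboard service
# account in kube-system (not recommended in production).
resource "kubernetes_cluster_role_binding" "dashboard" {
  metadata {
    name = "kubernetes-dashboard"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  subject {
    kind      = "ServiceAccount"
    name      = "kubernetes-dashboard"
    namespace = "kube-system"
  }
}
```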
I am not familiar with this example. I don't understand why the Kubernetes dashboard itself is not installed; only this role binding is created before installing the dashboard.
The problem is that after running Terraform the user would still have to run this manually:
```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
```
But this conflicts with this role binding created via Terraform.
This is what is installed via Terraform:

```console
% k get clusterrolebinding kubernetes-dashboard -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: "2023-02-21T09:44:21Z"
  name: kubernetes-dashboard
  resourceVersion: "1564"
  uid: 65eed073-a03b-416b-b802-239b9f540b18
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
```
This is what is installed by the Dashboard manifest:

```console
% k get clusterrolebinding kubernetes-dashboard -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"kubernetes-dashboard"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"kubernetes-dashboard"},"subjects":[{"kind":"ServiceAccount","name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"}]}
  creationTimestamp: "2023-02-21T10:01:01Z"
  name: kubernetes-dashboard
  resourceVersion: "5563"
  uid: e9b28f41-730a-4ff3-a265-baf6f5733a3e
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
```
I think the example should also install the dashboard with the desired configuration. Here the user experience is very bad, because creating the
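One possible way to have Terraform install the dashboard as well, instead of requiring a manual `kubectl apply`, is a Helm release. This is a sketch only, assuming the hashicorp/helm provider and the upstream kubernetes-dashboard chart; the release name and namespace are illustrative, and provider configuration is assumed to exist elsewhere:

```hcl
# Sketch: install the dashboard from its official Helm chart so the
# whole example is driven by Terraform.
resource "helm_release" "kubernetes_dashboard" {
  name             = "kubernetes-dashboard" # illustrative release name
  repository       = "https://kubernetes.github.io/dashboard/"
  chart            = "kubernetes-dashboard"
  namespace        = "kubernetes-dashboard"
  create_namespace = true
}
```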
Thanks @zioproto for your review. I think we can open an issue for the user-experience improvement and leave it as another day's job.
This patch fixes 201-aks-rbac-dashboard-admin by using a managed identity instead of a service principal.
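The switch described above typically amounts to replacing the `service_principal` block on the AKS cluster resource with an `identity` block. A minimal sketch against the azurerm provider (the resource name and omitted settings are assumptions):

```hcl
resource "azurerm_kubernetes_cluster" "default" { # resource name assumed
  # ... name, location, resource group, default_node_pool, etc. ...

  # Previously a service_principal { client_id / client_secret } block;
  # with a system-assigned managed identity no credentials are managed:
  identity {
    type = "SystemAssigned"
  }
}
```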