[Feature] Enable namespaced installs via helm chart #860
Conversation
Thanks @alex-treebeard for this contribution! Would you mind checking the lint errors of the charts? You can follow this document to reproduce the lint errors on your laptop.
@kevin85421 thanks, linted!
Thank you for this contribution!
- Add singleNamespaceInstall in values.yaml and add some comments.
- Would you mind adding step-by-step instructions on how to check this PR? Because we do not have any test cases in this PR, I will clone this branch and test it manually.

If you want to update kuberay-apiserver, I will ping Bytedance folks to look at this PR.
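A minimal sketch of what the requested values.yaml entry could look like; the comment wording and default value here are assumptions, not the chart's actual content:

```yaml
# If true, the operator is installed with namespace-scoped RBAC:
# Role/RoleBinding are rendered instead of ClusterRole/ClusterRoleBinding,
# and the operator only watches the namespace it is installed into.
singleNamespaceInstall: false
```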
Thanks for the review @kevin85421
Here are the core test cases I used:

Verify clusterrole[binding]s and no operator arg list by default:
# should have clusterrole[binding]s
helm template blah ./helm-chart/kuberay-operator --debug | grep ClusterRole
# args should be empty
helm template blah ./helm-chart/kuberay-operator --debug | grep "kuberay/operator:nightly" -C 10

Verify no clusterrole[binding]s and watch namespace when singleNamespaceInstall:
# should have no clusterrole[binding]s
helm template blah ./helm-chart/kuberay-operator --debug --set singleNamespaceInstall=true | grep ClusterRole
# should have a watchNamespace arg
helm template blah ./helm-chart/kuberay-operator --debug --set singleNamespaceInstall=true | grep "kuberay/operator:nightly" -C 10

Regression test: verify the batchScheduler.enabled flag still works:
# should have arg for batch scheduler
helm template blah ./helm-chart/kuberay-operator --debug --set batchScheduler.enabled=true | grep "kuberay/operator:nightly" -C 10
Thank you for your reply! It is better to test this PR by installing the KubeRay operator and a RayCluster:

helm install kuberay-operator --set singleNamespaceInstall=true .
# Error: INSTALLATION FAILED: roles.rbac.authorization.k8s.io "kuberay-operator" already exists

Without singleNamespaceInstall, the installation succeeds. However, with singleNamespaceInstall=true, it fails with the error above.
Thanks for checking @kevin85421. That bug only appeared when doing a fresh install! Perhaps we should have a Helm CI test in the future. I've addressed these issues and tested by installing KubeRay and a RayCluster in minikube, then checking the logs for errors.
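A hypothetical sketch of the suggested Helm CI check as a GitHub Actions lint job; the workflow name, action versions, and chart path are assumptions:

```yaml
name: helm-lint
on: [pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: azure/setup-helm@v3
      # Fail the build if the chart has lint errors
      - run: helm lint helm-chart/kuberay-operator
```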
Would you mind checking the lint errors of the charts? You can follow this document to reproduce the lint errors on your laptop.
Linted, apologies -- not used to the devops cycle on this repo.
lol, neither are we :)
I manually tested this PR with the following three tests. Test 1 and Test 2 behave as expected, but I am not sure about the expected behavior of Step 4 in Test 3. @alex-treebeard
In my understanding, watchNamespace will overwrite singleNamespaceInstall, but Step 4 seems to listen to the default namespace.
Test 1: singleNamespaceInstall
# Step 1: Create a KinD cluster
kind create cluster
# Step 2: Create a new namespace
kubectl create ns test
# Step 3: Install a KubeRay operator in the default namespace.
# (Path: helm-chart/kuberay-operator)
helm install kuberay-operator --set singleNamespaceInstall=true .
# Step 4: Install a RayCluster in the test namespace
helm repo add kuberay https://ray-project.github.io/kuberay-helm/
helm install raycluster kuberay/ray-cluster --version 0.4.0 -n test
# Step 5: Check the test namespace
kubectl get rayclusters.ray.io -n test # There is a RayCluster CR.
kubectl get pods -n test # No resources found in test namespace.
# Step 6: Install a RayCluster in the default namespace
helm install raycluster kuberay/ray-cluster --version 0.4.0
# Step 7: Check the default namespace
kubectl get rayclusters.ray.io # There is a RayCluster CR.
kubectl get pods # You will see 1 head Pod and 1 worker Pod.
Test 2: watchNamespace
# Step 1: Create a KinD cluster
kind create cluster
# Step 2: Create a new namespace
kubectl create ns test
# Step 3: Install a KubeRay operator in the default namespace.
# (Path: helm-chart/kuberay-operator)
helm install kuberay-operator --set watchNamespace=test .
# Step 4: Install a RayCluster in the default namespace
helm repo add kuberay https://ray-project.github.io/kuberay-helm/
helm install raycluster kuberay/ray-cluster --version 0.4.0
# Step 5: Check the default namespace
kubectl get rayclusters.ray.io # There is a RayCluster CR.
kubectl get pods # No head Pod and worker Pod.
# Step 6: Install a RayCluster in the test namespace
helm install raycluster kuberay/ray-cluster --version 0.4.0 -n test
# Step 7: Check the test namespace
kubectl get rayclusters.ray.io -n test # There is a RayCluster CR.
kubectl get pods -n test # You will see 1 head Pod and 1 worker Pod.
Test 3: watchNamespace overwrites singleNamespaceInstall
# Step 1: Create a KinD cluster
kind create cluster
# Step 2: Create a new namespace
kubectl create ns test
# Step 3: Install a KubeRay operator in the default namespace.
# (Path: helm-chart/kuberay-operator)
helm install kuberay-operator --set watchNamespace=test,singleNamespaceInstall=true .
# Step 4: Install a RayCluster in the default namespace
helm repo add kuberay https://ray-project.github.io/kuberay-helm/
helm install raycluster kuberay/ray-cluster --version 0.4.0
@Jeffwan @wilsonwang371 @scarlet25151 would you mind reviewing the update in the KubeRay APIServer chart? Thank you!
I did a patch for single-namespace kustomization deployment before. From what I can see here, the Role and RoleBinding are the parts that need to be updated. So far it looks OK to me. Need @scarlet25151 @Jeffwan to take a look.
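A minimal sketch, with hypothetical template and resource names, of how a chart can switch between a namespaced Role and a cluster-scoped ClusterRole on the flag:

```yaml
# role.yaml (sketch): render a namespaced Role when singleNamespaceInstall
# is set, otherwise a cluster-scoped ClusterRole. The rules list is elided.
apiVersion: rbac.authorization.k8s.io/v1
kind: {{ if .Values.singleNamespaceInstall }}Role{{ else }}ClusterRole{{ end }}
metadata:
  name: kuberay-operator  # name assumed; real charts use a fullname helper
rules: []
```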
It is the opposite. In a singleNamespaceInstall we ignore watchNamespace because we assume there are no other permissions. |
It is the opposite. In a singleNamespaceInstall we ignore watchNamespace because we assume there are no other permissions.

This makes sense. We should add some comments to describe the relationship between singleNamespaceInstall and watchNamespace. Everything else looks good to me.
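To make that relationship concrete, here is a sketch of the operator's deployment args under the precedence described above; the flag name --watch-namespace and the template structure are assumptions, not the chart's actual code:

```yaml
# deployment.yaml (sketch): singleNamespaceInstall wins; watchNamespace is
# ignored when it is set, and the operator watches the release namespace.
args:
{{- if .Values.singleNamespaceInstall }}
  - --watch-namespace={{ .Release.Namespace }}
{{- else if .Values.watchNamespace }}
  - --watch-namespace={{ .Values.watchNamespace }}
{{- end }}
```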
Merge this PR. The CI failure "Compatibility Test - Nightly" is unrelated to this PR. See #852 (comment) for more details.
Why are these changes needed?
This change allows KubeRay to be used in multi-tenant clusters.
Related issue number
Checks
I have tested this by running a series of helm template commands, e.g.