A user (with privileges) should be able to choose a workspace cluster explicitly #4809
Comments
Other simple solution: extend the admission constraints to explicitly name users, i.e. introduce a constraint that admits only listed users. Pro: does not require plumbing the cluster choice through workspace context/config.
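A minimal sketch of what such a user-naming admission constraint could look like, assuming a TypeScript discriminated union in the spirit of the existing constraints; the variant names and shapes here are hypothetical, not Gitpod's actual types:

```typescript
// Hypothetical extension of an admission-constraint union: besides
// feature-preview/permission based constraints, a cluster could name
// the specific users that are allowed to be scheduled onto it.
type AdmissionConstraint =
    | { type: "has-feature-preview" }
    | { type: "has-permission"; permission: string }
    | { type: "has-user"; userIds: string[] }; // hypothetical new variant

// Checks whether a user satisfies all admission constraints of a cluster.
function satisfiesConstraints(
    userId: string,
    userPermissions: Set<string>,
    hasFeaturePreview: boolean,
    constraints: AdmissionConstraint[],
): boolean {
    return constraints.every((c) => {
        switch (c.type) {
            case "has-feature-preview":
                return hasFeaturePreview;
            case "has-permission":
                return userPermissions.has(c.permission);
            case "has-user":
                return c.userIds.includes(userId);
        }
    });
}
```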
I don't understand this case yet. Why can't we just raise the score while testing the cluster?
It pins you to a single cluster :( Sometimes it's nice to work with multiple ones: first start your "dev" workspace on a "stable" cluster, then start a test workspace on the k3s cluster, another one on EKS, and another one on GKE, just to make the comparison. That said, even with a ratio of 10000:50 I still end up on the "50" cluster about 1/4 of the time, so I have ended up starting multiple workspaces in the past. Another nice use case for having the cluster name in the URL: we can share such URLs in Slack ("here, try this cluster").
That sounds like a separate bug we should look into.
That's a great use case indeed.
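For context on the ratio complaint above: with proportional weighting, a 10000:50 score split should route roughly 50/10050 ≈ 0.5% of starts to the small cluster, so landing there about 25% of the time does point at a selection bug. A minimal sketch of proportional weighted selection, with hypothetical names; this is not the actual scheduler code:

```typescript
// Proportional weighted pick: the chance of selecting a cluster is
// score / totalScore. With scores 10000 and 50, the small cluster
// should be chosen ~0.5% of the time, not ~25%.
interface Cluster {
    name: string;
    score: number;
}

function pickCluster(clusters: Cluster[]): Cluster {
    const total = clusters.reduce((sum, c) => sum + c.score, 0);
    let r = Math.random() * total;
    for (const c of clusters) {
        r -= c.score;
        if (r < 0) return c;
    }
    return clusters[clusters.length - 1]; // guard against float rounding
}

// Example:
// pickCluster([{ name: "stable", score: 10000 }, { name: "k3s", score: 50 }])
```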
/schedule
In conversation with @meysholdt we've decided to skip this feature for now. Removing from groundwork.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Use cases: If there are multiple pluggable workspace clusters...

Proposal: Support a `&cluster=eu12` argument in the context URLs, similar to how we do it to trigger prebuilds manually and pass ENV vars. Guard it behind a `new-workspace-cluster` permission: we don't want to give users the ability to override our loadbalancing/failover/cluster-cycling strategies.

Context: @geropl brought this up in today's all-hands.
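A minimal sketch of how the proposal could work, assuming TypeScript; the function name, the query-string parsing, and the permission lookup are all illustrative, not Gitpod's actual API:

```typescript
// Illustrative handling of a cluster=... argument in a context URL,
// analogous to how prebuild triggers and ENV vars are passed today.
function resolveRequestedCluster(
    contextUrl: string,
    userPermissions: Set<string>,
): string | undefined {
    // Hypothetical: treat the argument as a plain query parameter.
    const requested = new URL(contextUrl).searchParams.get("cluster");
    if (!requested) {
        return undefined; // no override: normal scheduling applies
    }
    // Only privileged users may bypass loadbalancing/failover/cluster-cycling.
    if (!userPermissions.has("new-workspace-cluster")) {
        throw new Error("missing the new-workspace-cluster permission");
    }
    return requested; // e.g. "eu12"
}
```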