As an Admin, I want to have the possibility to configure a number of replicas for operands from the CR #22067
Comments
Thanks @ibuziuk for opening this. I would rather add it under `deployment`, as `replicas` is a deployment spec:

```yaml
spec:
  components:
    cheServer:
      deployment:
        replicas: 1
    dashboard:
      deployment:
        replicas: 1
    devfileRegistry:
      deployment:
        replicas: 1
    pluginRegistry:
      deployment:
        replicas: 1
```
Thank you for the review 👍 Updated the description.
Issues go stale after a period of inactivity. Mark the issue as fresh with a comment if it is still relevant. If this issue is safe to close now, please do so.
Why is this closed? Has this already been addressed in a recent release?
This issue is in the project backlog, and we are currently working on supporting a configurable number of replicas for operands.
This issue has been solved in a different way. This PR [1] prevents che-operator from resetting the number of replicas on operand update.

[1] eclipse-che/che-operator#1804
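Given that change, an admin can presumably scale an operand's Deployment directly, and the operator should preserve the value on subsequent reconciles. A hedged sketch; the Deployment name `che` and the `eclipse-che` namespace are assumptions for a typical install:

```shell
# Hypothetical: scale the che-server Deployment directly.
# Per eclipse-che/che-operator#1804, the operator should no longer
# reset this value on update. Name and namespace are assumptions.
kubectl scale deployment/che -n eclipse-che --replicas=2

# Verify the desired replica count stuck.
kubectl get deployment che -n eclipse-che -o jsonpath='{.spec.replicas}'
```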
@deerskindoll could you please review eclipse-che/che-docs#2676?
Blocked by a Vale issue; the docs PR cannot be merged: https://issues.redhat.com/browse/RHDEVDOCS-5906
Is your task related to a problem? Please describe
After the removal of the dependency on the database, it is now possible to run multiple che-servers (and other operands) in parallel. As an Admin, I want to have the possibility to configure the number of replicas for operands from the CR.
Describe the solution you'd like
A `replicas` CR property for each operand. By default, we can opt for 1, but an admin can update the CR with an explicit number of replicas for each operand.
Describe alternatives you've considered
N / A
Additional context
Related to #7662