helm: allow defining multiple cephEcStorageClass #13405
Conversation
Please add the following:
- PR and commit description to explain why the change is needed
- If someone had defined the EC storage class previously, will upgrades be affected?
- Since this scenario is not tested in the CI, please comment on the scenarios you could test.
Sorry that I completely forgot about the issue I opened for such a long time! This PR got lost in my GitHub notifications, and I only noticed it today. Some questions from me (taken from my issue description):
I see @Javlopez added this now
This change would also require other changes to the structure to define the
Honestly I don't really grok Helm myself, but others on my team do. If this means formatting the config file / chart slightly differently when updating to a Rook release, I think that's manageable, especially if it adds functionality we don't have today. What I'm most wary of is anything that could result in Rook perturbing existing Ceph resources - we're currently testing to ensure that the recent change that lets Rook modify CRUSH rules won't wreak any havoc on our clusters. And of course I look forward to Rook configuring multiple RGW storageclasses / placement targets so I don't have to do it by hand - and then worry that Rook will scribble over it.
My Helm upgrade workflow always involves rendering the actual manifests and then checking the diff against the current objects. It's usually a daunting task if refactoring is required (in this case, changing the values to keep the rendered manifests the same), but not unmanageable. This is just a typical Helm problem. I'm expecting this change and am aware of it, so we already have a plan to handle the upgrade process.
Sounds like the benefit of the change is worth the update. @Javlopez Want to go ahead with those additional changes?
This pull request has merge conflicts that must be resolved before it can be merged. @Javlopez please rebase it. https://rook.io/docs/rook/latest/Contributing/development-flow/#updating-your-fork
This update modifies the rook-ceph-cluster Helm chart to allow defining the cephStorageClass as part of the blockpool; the documentation for the cephECBlockPool definition was also updated. Signed-off-by: Javier <sjavierlopez@gmail.com>
@folliehiyuki @anthonyeleven Might you be able to test this to see if there is any additional feedback before we get it into a release?
@travisn Potentially. We have a lab cluster, and my protege has been working with storageclasses to ensure that the manual work we did to correct the .mgr pool and to retrofit deviceclass constraints into CRUSH rules won't be trampled by the more recent Rook release that changes CRUSH rules. We'd need to know where to find the necessary Rook artifacts.
It may be simplest to check out this branch (or else the master branch after this is merged), then use the local directory to install the chart, as we do during development. Thanks
@travisn, I work with @anthonyeleven, and we are currently testing pool changes from v1.12.8 in our staging cluster. Once confirmed, I will proceed with upgrading to the latest version of Rook, v1.13.3, and then test with the current fork for defining multiple cephEcStorageClasses. This might happen towards the end of this week.
Thanks, we will wait for confirmation before backporting this to the 1.13 release branch.
This Pull Request enables the definition of a list of ecBlockPools, each with its own storageClass. Prior to this change, it was necessary to define separate storage classes. Now, we can specify all the requirements for ecBlockPools directly under each ecBlockPool definition.
Testing
Since this scenario is not covered by CI, you can test it yourself using minikube with at least 5 nodes:
minikube start --disk-size=20g --extra-disks=1 --driver kvm2 --nodes 5
And use this configuration in values.yaml
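The values snippet from the original comment is not reproduced above, so below is a minimal illustrative sketch of what such a configuration could look like, assuming the chart exposes the EC pools as a cephECBlockPools list with a per-entry storageClass; the key names and values here are assumptions for illustration, not the final chart schema.

```yaml
# Illustrative sketch only -- key names (cephECBlockPools, storageClass, etc.)
# are assumptions and may differ from the actual chart values schema.
cephECBlockPools:
  - name: ec-pool-hdd
    spec:
      metadataPool:
        replicated:
          size: 3
      dataPool:
        failureDomain: host
        deviceClass: hdd
        erasureCoded:
          dataChunks: 2
          codingChunks: 1
    storageClass:
      name: rook-ceph-block-ec-hdd
      isDefault: false
      allowVolumeExpansion: true
      reclaimPolicy: Delete
  - name: ec-pool-ssd
    spec:
      metadataPool:
        replicated:
          size: 3
      dataPool:
        failureDomain: host
        deviceClass: ssd
        erasureCoded:
          dataChunks: 2
          codingChunks: 1
    storageClass:
      name: rook-ceph-block-ec-ssd
      isDefault: false
      reclaimPolicy: Retain
```

The idea, per the PR description, is that each list entry renders its own StorageClass, so adding another EC pool is just another item in the list rather than a separately defined storage class.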
Which issue is resolved by this Pull Request:
Resolves #12772
Checklist: