Allow for EC Block Pool configurations to be defined in the rook-ceph-cluster Helm chart #9179

Closed
Omar007 opened this issue Nov 15, 2021 · 9 comments · Fixed by #12324

@Omar007
Contributor

Omar007 commented Nov 15, 2021

Is this a bug report or feature request?

  • Feature Request

What should the feature do:
The rook-ceph-cluster Helm chart should be expanded to allow the cephBlockPools field to define erasure coded data pools.

What is use case behind this feature:
Ceph supports using an erasure coded pool as the data pool for RBD/block images, but this functionality cannot currently be used when the pools are defined through the chart.

@Omar007
Contributor Author

Omar007 commented Nov 15, 2021

There might be a work-around possible in the chart: define both pools as separate entities/array entries (one replicated metadata pool and one EC data pool), explicitly disable the storage class on the EC pool, and extend the storage class configuration on the replicated pool to set the parameter dataPool: <ec-pool-name>.
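For illustration, a rough values.yaml sketch of that work-around (the pool names and exact storageClass fields here are assumptions, not verified against the chart):

```yaml
cephBlockPools:
  # Erasure coded pool that holds the actual data; no storage class of its own.
  - name: ec-data-pool
    spec:
      failureDomain: osd
      erasureCoded:
        dataChunks: 2
        codingChunks: 1
    storageClass:
      enabled: false
  # Replicated pool for the RBD metadata; its storage class points at the EC pool.
  - name: replicated-metadata-pool
    spec:
      failureDomain: host
      replicated:
        size: 3
    storageClass:
      enabled: true
      name: rook-ceph-block-ec
      parameters:
        # plus the usual imageFormat/imageFeatures/CSI parameters
        dataPool: ec-data-pool
```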

It'd still be nicer for this to be unified and be handled similarly across the different pool types instead of having to work around it in this manner.

@travisn
Member

travisn commented Nov 15, 2021

There might be a work-around possible in the chart: define both pools as separate entities/array entries (one replicated metadata pool and one EC data pool), explicitly disable the storage class on the EC pool, and extend the storage class configuration on the replicated pool to set the parameter dataPool: <ec-pool-name>.

I would vote for this approach; it keeps the Helm chart simplest.

It'd still be nicer for this to be unified and be handled similarly across the different pool types instead of having to work around it in this manner.

So you'd have to define a whole new type such as cephECBlockPools in values.yaml, and another template similar to cephblockpools.yaml. It's not that complicated, but it does look very duplicative of the existing cephBlockPools. The added complexity for a less common case doesn't seem worth it to me, but I'm not against this approach if you want to open a PR for it.

@Omar007
Contributor Author

Omar007 commented Nov 16, 2021

I would vote for this approach; it keeps the Helm chart simplest.

Hmm, that raises the question of what the goal of providing the Helm chart is. Usually you'd want a chart to simplify deployment and configuration, abstracting away the manual details that add nothing but complexity.

If you need to define all the same things as when writing the whole spec file directly anyway, or in this case even have to reason about two storageClass configurations across two array entries where you'd normally define just one, you might as well write the spec directly instead of using the chart, and save yourself the additional busywork and extra things to think about 🤔

So you'd have to define a whole new type such as cephECBlockPools in values.yaml, and another template similar to cephblockpools.yaml. It's not that complicated, but it does look very duplicative of the existing cephBlockPools. The added complexity for a less common case doesn't seem worth it to me, but I'm not against this approach if you want to open a PR for it.

That would not be needed and is also not desirable imo. Using the chart should be easier and more straightforward, not more complicated, and it shouldn't add oddities you'd otherwise never have to think about.

As the minimum improvement, I was thinking along the lines of making the definition style within the chart more similar to the other pool definitions; e.g. a metadata pool and a singular dataPool subfield, where the metadata pool would be optional if dataPool is not EC. That constraint is also easily checkable/enforceable by Helm if you want to.
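Purely as a hypothetical illustration of that unified layout (the metadataPool/dataPool field names and the overall structure are suggestions, not the chart's current schema):

```yaml
cephBlockPools:
  - name: ceph-blockpool-ec
    spec:
      # metadataPool would only be required when dataPool is erasure coded;
      # a plain replicated pool would set only dataPool.
      metadataPool:
        replicated:
          size: 3
      dataPool:
        failureDomain: osd
        erasureCoded:
          dataChunks: 2
          codingChunks: 1
    storageClass:
      enabled: true
      name: rook-ceph-block-ec
```

The chart template could then render the extra CephBlockPool and the dataPool storage class parameter itself whenever dataPool is erasure coded.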

I can probably try my hand at it at some point and also see if I can push it a little further than just that minimal change. When I tried to define other pools, I also had to provide a lot of information that should really just be handled by the chart, as it is the same regardless of the pool definition (e.g. the csi.storage.k8s.io/... parameters, which just point to the known secret names in the operator namespace).
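For context, this is the kind of boilerplate meant here: every storage class currently has to spell out parameters like the following, even though they only depend on the namespaces the operator and cluster run in (shown as in the upstream RBD storage class examples, assuming the default rook-ceph namespace):

```yaml
parameters:
  clusterID: rook-ceph  # namespace the CephCluster runs in
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
```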

@travisn
Member

travisn commented Nov 16, 2021

Agreed that the Helm chart really should be simplifying these scenarios. As far as implementation goes, that does sound better than what I mentioned previously, and the other improvements you mentioned sound good too. Looking forward to a PR! :)

Omar007 added a commit to Omar007/rook that referenced this issue Nov 17, 2021
The current version of the Helm chart requires a lot of manual
repetition and does not simplify the pool definitions over
just manually defining the pools in a spec file.
This change unifies the pool values.yaml layout and abstracts
away additional flags/options that can be inferred from how
the operator deployment sets up its environment.

Closes: rook#9179
Signed-off-by: Omar Pakker <Omar007@users.noreply.github.com>
@github-actions

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

@github-actions

This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.

@MerouaneBen

Can we have this PR in the next release? It is very useful for Helm chart users. Thanks.

travisn reopened this May 24, 2023
github-actions bot removed the wontfix label May 24, 2023
Javlopez self-assigned this May 25, 2023
@travisn
Member

travisn commented May 31, 2023

@Javlopez Thanks for looking at this feature. Here are some thoughts on the changes needed... See the storageclass-ec.yaml example, which contains:

  • A CephBlockPool CR for the replicated metadata
  • A CephBlockPool CR for the erasure coded data
  • A storage class that specifies the dataPool and pool properties for those two pools

The cluster helm chart has a values.yaml that defines the cephBlockPools, with the implementation found in templates/cephblockpool.yaml. I'm thinking we need a new cephECBlockPools section in values.yaml, and then create a new template file to implement it. By default we wouldn't want any EC block pools to be created with the chart, so it would be commented out in values.yaml.
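A rough sketch of how such a commented-out section could look in values.yaml (names and structure are placeholders to be refined in the PR):

```yaml
# cephECBlockPools:
#   - name: ec-pool
#     spec:
#       metadataPool:
#         replicated:
#           size: 3
#       dataPool:
#         failureDomain: osd
#         erasureCoded:
#           dataChunks: 2
#           codingChunks: 1
#     # storage class wiring the replicated pool (pool) and the EC pool (dataPool)
#     storageClass:
#       enabled: true
#       name: rook-ceph-block-ec
```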
