subvolumegroup: add support for size and datapool #14036
Conversation
@@ -32,6 +32,10 @@ spec:
  distributed: 1 # distributed=<0, 1> (disabled=0)
  # export: # export=<0-256> (disabled=-1)
  # random: # random=[0.0, 1.0](disabled=0.0)
  # size of the subvolume group.
"size" is too vague, let's mention it's for a quota
- # size of the subvolume group.
+ # Quota size of the subvolume group.
@@ -32,6 +32,10 @@ spec:
  distributed: 1 # distributed=<0, 1> (disabled=0)
  # export: # export=<0-256> (disabled=-1)
  # random: # random=[0.0, 1.0](disabled=0.0)
  # size of the subvolume group.
  #size: 10G
  # data pool name for the subvolume group layout.
- # data pool name for the subvolume group layout.
+ # data pool name for the subvolume group layout instead of the default data pool.
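For context, here is a hedged sketch of how these two new spec fields could look in the Go API types; the field names `Size` and `DataPoolName` are inferred from the diff hunks in this PR, and the exact merged definitions may differ:

```go
package v1

import "k8s.io/apimachinery/pkg/api/resource"

// CephFilesystemSubVolumeGroupSpec sketches the subvolume group spec with
// the two fields added by this PR (names inferred from the diffs above).
type CephFilesystemSubVolumeGroupSpec struct {
	// Size is the quota applied to the subvolume group,
	// e.g. the "10G" shown in the example manifest.
	Size *resource.Quantity `json:"size,omitempty"`
	// DataPoolName selects a data pool for the subvolume group layout
	// instead of the default data pool.
	DataPoolName string `json:"dataPoolName,omitempty"`
}
```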
* `distributed`: Range: <0, 1>, for disabling it set to 0
* `export`: Range: <0-256>, for disabling it set to -1
* `random`: Range: [0.0, 1.0], for disabling it set to 0.0
* `size`: Size of the Ceph Filesystem subvolume group.
- * `size`: Size of the Ceph Filesystem subvolume group.
+ * `size`: Quota size of the Ceph Filesystem subvolume group.

* `dataPoolName`: The data pool name for the Ceph Filesystem subvolume group layout.
* `pinning`: To distribute load across MDS ranks in predictable and stable ways. Reference: <https://docs.ceph.com/en/latest/cephfs/fs-volumes/#pinning-subvolumes-and-subvolume-groups>.
Not related to your changes, but let's use a link instead of raw URL.
- * `pinning`: To distribute load across MDS ranks in predictable and stable ways. Reference: <https://docs.ceph.com/en/latest/cephfs/fs-volumes/#pinning-subvolumes-and-subvolume-groups>.
+ * `pinning`: To distribute load across MDS ranks in predictable and stable ways. See the Ceph doc for [Pinning subvolume groups](https://docs.ceph.com/en/latest/cephfs/fs-volumes/#pinning-subvolumes-and-subvolume-groups).
* `random`: Range: [0.0, 1.0], for disabling it set to 0.0
* `size`: Size of the Ceph Filesystem subvolume group.
* `dataPoolName`: The data pool name for the Ceph Filesystem subvolume group layout.
- * `dataPoolName`: The data pool name for the Ceph Filesystem subvolume group layout.
+ * `dataPoolName`: The data pool name for the Ceph Filesystem subvolume group layout, if the default CephFS pool is not desired.
args := []string{"fs", "subvolumegroup", "create", volName, groupName}
if svgSpec != nil {
	if svgSpec.Size != nil {
If the svg already exists and the CR is edited with these new properties, will the svg be updated? Or will this `create` command be skipped or not apply the new settings?
It will be updated as per the user requirement.
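For reference, a minimal runnable sketch of the create path being discussed; the real helper's exec plumbing is omitted, the names `myfs`, `group-a`, and `myfs-data0` are hypothetical, and the flags follow the `ceph fs subvolumegroup create` CLI:

```go
package main

import (
	"fmt"
	"strings"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	volName, groupName := "myfs", "group-a" // hypothetical names
	size := resource.MustParse("10G")
	dataPool := "myfs-data0"

	args := []string{"fs", "subvolumegroup", "create", volName, groupName}
	// ceph expects the quota in bytes; Quantity.Value() converts "10G" to 10000000000.
	args = append(args, fmt.Sprintf("--size=%d", size.Value()))
	args = append(args, fmt.Sprintf("--pool_layout=%s", dataPool))

	// "fs subvolumegroup create" also updates an existing group, which is
	// why re-running it applies edited CR settings.
	fmt.Println("ceph " + strings.Join(args, " "))
}
```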
if svgSpec != nil {
	if svgSpec.Size != nil {
		// convert the size to bytes as ceph expects the size in bytes
		args = append(args, fmt.Sprintf("--size=%d", svgSpec.Size.Value()))
If they edit the size in the CR, we need a different command to resize the svg, right?
No, we have only one command which takes care of everything; we don't have a `resize` command in CephFS.
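To make the byte conversion concrete, a small sketch (assuming the Kubernetes `resource.Quantity` semantics used by the CR) showing what `--size` receives for common suffixes:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Decimal suffixes (G) and binary suffixes (Gi) parse to different
	// byte counts; this value is what ends up in --size=<bytes>.
	for _, s := range []string{"10G", "10Gi", "512Mi"} {
		q := resource.MustParse(s)
		fmt.Printf("%-6s -> %d bytes\n", s, q.Value())
	}
	// Output:
	// 10G    -> 10000000000
	// 10Gi   -> 10737418240
	// 512Mi  -> 536870912
}
```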
Force-pushed from a4f62b7 to 5c5b055.
* `dataPoolName`: The data pool name for the subvolume group layout instead of the default data pool.
* `pinning`: To distribute load across MDS ranks in predictable and stable ways. See the Ceph doc for [Pinning subvolume groups](https://docs.ceph.com/en/latest/cephfs/fs-volumes/#pinning-subvolumes-and-subvolume-groups).
* `distributed`: Range: <0, 1>, for disabling it set to 0
Did your editor autoformat to only have two leading spaces? IIRC the formatted docs will not have proper indentation if there are not at least three spaces.
yes, fixed it now.
Did you push the fix? It looks the same.
Somehow it got reverted, but it's present in the latest push.
Force-pushed from daef154 to fc76802.
err = resizeCephFSSubVolumeGroup(context, clusterInfo, volName, groupName, svgSpec)
if err != nil {
	return errors.Wrapf(err, "failed to create subvolume group %q in filesystem %q", groupName, volName)
}
Should it return here after a successful resize? Otherwise it will continue and call `create`, which would allow the shrink?
As the resize was successful, `create` won't shrink; we also need to call `create` to update the pool_layout (which is allowed in CephFS).
What happens if the size was reduced by the user? Wouldn't the `create` apply it even though the `resize` ignored it?
The resize will fail if a user is trying to reduce the size to less than the size of the stored data.
Perfect, thanks
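For context on this exchange, a hedged sketch of the resize call: the Ceph CLI provides `fs subvolumegroup resize <vol_name> <group_name> <new_size> [--no_shrink]`, and passing `--no_shrink` is one way to get the failure behavior described above. The helper below is illustrative, not the merged Rook code:

```go
package main

import (
	"fmt"
	"strings"

	"k8s.io/apimachinery/pkg/api/resource"
)

// buildResizeArgs assembles a resize command. With --no_shrink, ceph
// rejects a new size smaller than the space already consumed, so a
// user-requested shrink below the stored data fails at this step.
func buildResizeArgs(volName, groupName string, size *resource.Quantity) []string {
	return []string{
		"fs", "subvolumegroup", "resize",
		volName, groupName,
		fmt.Sprintf("%d", size.Value()),
		"--no_shrink",
	}
}

func main() {
	size := resource.MustParse("20G") // hypothetical new quota
	fmt.Println("ceph " + strings.Join(buildResizeArgs("myfs", "group-a", &size), " "))
}
```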
Force-pushed from 198ca5d to 4f1aa3a.
This pull request has merge conflicts that must be resolved before it can be merged. @Madhu-1 please rebase it. https://rook.io/docs/rook/latest/Contributing/development-flow/#updating-your-fork |
cephfs subvolumegroup supports creating an svg with a quota and a data pool; this PR adds support for the same. Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
subvolumegroup: add support for size and datapool (backport #14036)
cephfs subvolumegroup supports creating an svg with a size and a data pool; this PR adds support for the same.
Resolves #14004