
Question: expand pool after extending underlying md device #50

Closed
ingothierack opened this issue Jul 7, 2019 · 15 comments

ingothierack commented Jul 7, 2019

Is it possible to expand a pool if the underlying device has changed?

I added a new disk to an mdraid array to expand it.
Device md127 was expanded to 10.9 TB, but this is currently not reflected in the Stratis pool:

  └─md127                                                                                         9:127  0 10.9T  0 raid5   
    └─stratis-1-private-210eb8dac6cb440993454dac6a885dae-physical-originsub                     253:2    0  7.3T  0 stratis 
      ├─stratis-1-private-210eb8dac6cb440993454dac6a885dae-flex-thinmeta                        253:3    0  7.4G  0 stratis 
      │ └─stratis-1-private-210eb8dac6cb440993454dac6a885dae-thinpool-pool                      253:5    0  7.3T  0 stratis 

Is there a way to make the additional space visible to stratis?
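For context, the grow-then-inspect sequence behind this report looks roughly like the sketch below. The device names (`/dev/md127`, new member `/dev/sde`) and the `--raid-devices` count are assumptions; adjust them to your layout.

```shell
# Sketch of the md grow workflow; /dev/sde and --raid-devices=4 are hypothetical.
mdadm /dev/md127 --add /dev/sde
mdadm --grow /dev/md127 --raid-devices=4 --backup-file=/root/md127-grow.backup
cat /proc/mdstat                   # wait for the reshape to finish
blockdev --getsize64 /dev/md127    # kernel now reports the enlarged size
lsblk /dev/md127                   # ...but the stratis-* layers keep their old size
```

After the reshape completes, the kernel sees the larger md device, while the `stratis-1-private-*` layers still report the pre-grow size, which is exactly the mismatch shown in the `lsblk` output above.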

@ingothierack ingothierack changed the title expand pool after extending underlying md device Question: expand pool after extending underlying md device Jul 7, 2019
@mulkieran (Member)

I'm afraid not yet. The plan is to allow the user to tell Stratis to accept and use the additional space. There is a previous issue, but it is not terribly specific.

The old idea was to have a command to tell stratisd that it should use all the new space available on the blockdev that it can find. It seems reasonable to require specifying the blockdev that has been enlarged. However, the idea was that the engine would discover the new size rather than that the user should specify it.

@ingothierack (Author)

Is there any chance of doing it manually for now (editing some files or something like that) to tell Stratis it has more space available?

I have done mdraid expansion a few times in the past on plain LVM-based setups. That was one of the main reasons to go this way rather than in the ZFS or btrfs direction for RAID handling. So I thought I had more flexibility with Stratis this way.

If there is currently no way, I will have to get new disks, create a new pool, and then copy over the data.

@mulkieran (Member)

Migrating to project, since this will involve the CLI as well.

@mulkieran mulkieran transferred this issue from stratis-storage/stratisd Jul 9, 2019
@mulkieran (Member)

> Is there any chance of doing it manually for now (editing some files or something like that) to tell Stratis it has more space available?
>
> I have done mdraid expansion a few times in the past on plain LVM-based setups. That was one of the main reasons to go this way rather than in the ZFS or btrfs direction for RAID handling. So I thought I had more flexibility with Stratis this way.
>
> If there is currently no way, I will have to get new disks, create a new pool, and then copy over the data.

There should be a way to edit the metadata that works, at least in your constrained situation with only one device. I expect to be able to take a closer look on Thursday.

@mulkieran (Member)

@mulkieran ping


mulkieran commented Jul 23, 2019

Just a preliminary note: The size of the device when it was claimed is part of the signature buffer data. If the new size is less than the old size, there is an error on setup. If the new size is greater, the extra space is just ignored. The important part is still the variable length metadata. Note that this is trickier to alter than is the filesystem metadata, as it is packaged into the Stratis metadata written directly to your device.
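The setup-time behavior described above can be sketched as a simple comparison. The sector counts below are hypothetical stand-ins; the real recorded size lives in the Stratis signature buffer written to the device.

```shell
# Sketch of the setup-time size check described above, with hypothetical sizes.
recorded_sectors=15676630630   # size recorded when the blockdev was claimed (~7.3 TiB)
current_sectors=23437770752    # size the kernel reports after the md grow (~10.9 TiB)

if [ "$current_sectors" -lt "$recorded_sectors" ]; then
  echo "error: device is smaller than recorded size; setup fails"
elif [ "$current_sectors" -gt "$recorded_sectors" ]; then
  echo "extra space ignored: pool keeps using the recorded size"
else
  echo "sizes match"
fi
```

In the grown-array case this takes the middle branch: setup succeeds, but the pool keeps using the recorded size, so the extra space stays invisible until the recorded metadata itself is changed.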

@ingothierack (Author)

A question about this: is there any chance of seeing an improvement here within the next few weeks? As discussed before, a manual process would be OK until a final implementation is finished.

@mulkieran (Member)

@ingothierack Thanks for the ping. I understand about the manual process, and I hope I'll be able to come up with something soon, but I'm not at all certain that this will happen.

@ingothierack (Author)

Ping on this feature/issue.
My storage is slowly running out of space :(

Maybe some progress on this by October?

@ingothierack (Author)

Any news on this topic? At the current stage this means I will have to replace the current pool with a new one built on bigger drives.

@mulkieran (Member)

Now blocked by start/stop pools work.

@jbaublitz (Member)

@ingothierack I apologize for the delay. If you take a look at stratis-storage/stratisd#3035, development work has been started!
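For readers landing here later: the work referenced above eventually shipped as a pool-level CLI command in later stratis-cli releases. The exact name and flag below are my recollection, not confirmed by this thread, so treat them as an assumption and verify against `stratis pool --help` on your version.

```shell
# Assumed syntax from later stratis-cli releases; verify with `stratis pool --help`.
stratis blockdev list mypool                           # find the UUID of the enlarged blockdev
stratis pool extend-data mypool --device-uuid <UUID>   # <UUID> from the listing above
```

This matches the design discussed earlier in the thread: the user names the enlarged blockdev, and the engine discovers the new size itself.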
