Allow delete of un-mounted/un-mountable pool #2395 #2560

Conversation

@phillxnet (Member) commented May 26, 2023

Fix CommandException handling to ignore the report of an already unmounted pool when attempting an unmount command (sketched after the list below).

Fixes #2395

Includes:

  • Improved user-facing warning in the Pool Delete dialog.
  • Minor comment typo fix.
  • Avoid filesystem interaction during pool delete when:
    -- pool is unmounted.
    -- pool has exceeded its raid level redundancy.
  • Remove a tangentially related exception handler on long-removed code.
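For illustration, a minimal sketch of the tolerant unmount handling described above. The function name, import paths, and exact stderr match are assumptions modelled on the run_command/CommandException pattern visible in the traceback later in this thread, not the merged diff:

```python
# Hedged sketch only, not the merged code: treat "not mounted" as
# success when unmounting, since the desired end state already holds.
from system.exceptions import CommandException  # assumed location
from system.osi import run_command  # raises CommandException on rc != 0

UMOUNT = "/usr/bin/umount"

def umount_root(mnt_pt):
    try:
        run_command([UMOUNT, mnt_pt])
    except CommandException as e:
        # umount reports an already unmounted path on stderr, e.g.:
        #   "umount: /mnt2/raid0-pool: not mounted."
        # Swallow only that case; re-raise everything else.
        if any("not mounted" in line for line in e.err):
            return
        raise
```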

Delete Dialog improvements:

Before:
[Screenshot: Delete Pool dialog before this PR]

After:
[Screenshot: Delete Pool dialog after this PR]

@FroggyFlox linked an issue May 26, 2023 that may be closed by this pull request
@phillxnet added the "needs review" (Ideally by prior rockstor-core contributor) label May 27, 2023
@phillxnet (Member, Author) commented May 27, 2023

@FroggyFlox & @Hooverdan96
The testing procedure I used here was to sabotage a multi-device btrfs-raid single Pool, with and without shares, both before and after a reboot post-sabotage. Sabotage was via live device removal (virtio devices) within a QEMU VM. This way I could test our 'reaction' to a drive dying and thus killing a pool 'live': hence the addition of the pool.redundancy_exceeded clause to side-step the inevitable issues of continuing to work with a 'beyond hope' pool when a Pool delete is our suggested course of action in that scenario (in the Maintenance required section).

"Built on ..." OS used was Tumbleweed to get newest btrfs to better handle missing drives. There is still a tendency to never drop a device however - hence a reboot is required prior to drive re-use (via wipe). But that is out-of-scope for this pull request.

@FroggyFlox (Member) commented

@phillxnet,

I created a Leap 15.4 VM and tried to follow the original report in #2395 (if I understood it correctly):

  1. Create VM, with 2 data disks
  2. Create a Raid0 pool with these two disks
  3. Shutdown VM
  4. Remove one of the disks
  5. Turn VM back ON
  6. The pool can't be mounted (normal):
[27/May/2023 15:19:14] DEBUG [system.osi:490] --- Inheriting base_root_disk info ---
[27/May/2023 15:19:15] ERROR [storageadmin.views.command:138] Exception while refreshing state for Pool(raid0-pool). Moving on: Error running a command. cmd = /usr/bin/mount /dev/disk/by-label/raid0-pool /mnt2/raid0-pool -o ,compress=no. rc = 32. stdout = ['']. stderr = ['mount: /mnt2/raid0-pool: wrong fs type, bad option, bad superblock on /dev/sda, missing codepage or helper program, or other error.', '']
[27/May/2023 15:19:15] ERROR [storageadmin.views.command:140] Error running a command. cmd = /usr/bin/mount /dev/disk/by-label/raid0-pool /mnt2/raid0-pool -o ,compress=no. rc = 32. stdout = ['']. stderr = ['mount: /mnt2/raid0-pool: wrong fs type, bad option, bad superblock on /dev/sda, missing codepage or helper program, or other error.', '']
Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/storageadmin/views/command.py", line 128, in _refresh_pool_state
    mount_root(p)
  File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 761, in mount_root
    run_command(mnt_cmd)
  File "/opt/rockstor/src/rockstor/system/osi.py", line 222, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /usr/bin/mount /dev/disk/by-label/raid0-pool /mnt2/raid0-pool -o ,compress=no. rc = 32. stdout = ['']. stderr = ['mount: /mnt2/raid0-pool: wrong fs type, bad option, bad superblock on /dev/sda, missing codepage or helper program, or other error.', '']
  7. The pool shows as "unmounted" in the UI
  8. Go to the pool's detail page and click on the "Delete" button in the top right
  9. I'm greeted with the updated dialog window (really nice update, by the way)
  10. After confirming, the pool is no longer showing on the page
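The "Moving on" log line above shows the state-refresh path already tolerating the failed mount; inferred from that traceback (not the actual command.py source), the pattern is roughly:

```python
# Inferred from the log above: refresh each pool's state in turn,
# logging and skipping any pool whose mount fails rather than
# aborting the whole refresh.
import logging

logger = logging.getLogger(__name__)

def _refresh_pool_state(pools):
    for p in pools:
        try:
            mount_root(p)
        except Exception as e:
            logger.exception(
                "Exception while refreshing state for Pool({}). "
                "Moving on: {}".format(p.name, e)
            )
```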

It all looks good to me (if I tested correctly, of course).

Thanks a lot for this fix!

@phillxnet removed the "needs review" (Ideally by prior rockstor-core contributor) label May 27, 2023
@phillxnet merged commit 8f06d99 into rockstor:testing May 29, 2023
@phillxnet deleted the 2395_Allow_delete_of_un-mounted/un-mountable_pool branch May 29, 2023 19:20