
Container appdata from ZFS Snapshots is not found #7

Open
dpremy opened this issue Jan 1, 2024 · 4 comments
Comments


dpremy commented Jan 1, 2024

Essentially, it seems the logic for finding the Internal appdata directories doesn't account for changes to the root path, say when using a ZFS snapshot.

I think this could be recreated without ZFS by copying appdata to any other location, and then using that copied location as the source of the backups in the plugin, but I've not verified this.

I suspect the issue is in isVolumeWithinAppdata and the str_starts_with. PHP isn't my forte, though I'll see if I can do some testing and get working code.

As an aside, a little additional debug logging in isVolumeWithinAppdata would help surface this: a log line right before the if, and possibly an else branch to log unmatched volumes.
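The suspected failure mode can be sketched outside PHP. The following shell snippet mimics a bare prefix comparison (the equivalent of str_starts_with) between the configured Appdata source and a volume path taken from the container template. The paths come from this report, but the check itself is an assumption about the plugin's logic, not its actual code:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the suspected check: each container volume path is
# compared against the configured appdata source with a plain prefix match.
appdata_source="/mnt/user/appdata/.zfs/snapshot/offline-backup"  # plugin setting
container_volume="/mnt/user/appdata/gitea"                       # from the template

if [[ "$container_volume" == "$appdata_source"* ]]; then
    classification="internal"
else
    # The template path does not start with the snapshot path, so the
    # volume would be classified as external and sanitized away.
    classification="external"
fi
echo "$classification"
```

Run as written, this prints external, which would match the "does not have any volume to back up" behavior in the logs below.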

Recreating Issue

  • Create a ZFS Pool which hosts the appdata directory

    # zfs list pool-one/appdata
    NAME               USED  AVAIL     REFER  MOUNTPOINT
    pool-one/appdata   167G  1.15T      167G  /mnt/pool-one/appdata
  • Use a Pre-backup script to take a ZFS snapshot with a static name. In my case, I destroy any previous offline-backup snapshot and then create a new one, since keeping older snapshots under this name is not needed.

    $ cat create-zfs-snapshot.sh
    #!/usr/bin/env bash

    # Remove any previous snapshot of this name; this fails harmlessly on
    # the first run, which is why the script always exits 0.
    zfs destroy pool-one/appdata@offline-backup
    zfs snapshot pool-one/appdata@offline-backup
    exit 0
  • Verify the snapshot was created

    # zfs list -t snapshot pool-one/appdata
    NAME                              USED  AVAIL     REFER  MOUNTPOINT
    pool-one/appdata@offline-backup   217M      -      167G  -
  • Confirm the snapshot can be accessed

    ls /mnt/user/appdata/.zfs/snapshot/offline-backup

  • Set the Appdata source in the plugin to this snapshot location of /mnt/user/appdata/.zfs/snapshot/offline-backup

All containers now show no Internal Volumes in the Per container settings, and reviewing a backup log you will find the related messages below.

Logs

Standard

[01.01.2024 09:49:06][ℹ️][Gitea] Stopping Gitea... done! (took 1 seconds)
[01.01.2024 09:49:07][ℹ️][Main] Starting backup for containers
[01.01.2024 09:49:07][ℹ️][Gitea] Should NOT backup external volumes, sanitizing them...
[01.01.2024 09:49:07][ℹ️][Gitea] Gitea does not have any volume to back up! Skipping

Debug

[01.01.2024 09:49:06][ℹ️][Gitea] Stopping Gitea...  done! (took 1 seconds)
[01.01.2024 09:49:07][ℹ️][Main] Starting backup for containers
[01.01.2024 09:49:07][debug][Gitea] Backup Gitea - Container Volumeinfo: Array
(
    [0] => /mnt/user/appdata/gitea:/data:rw
)

[01.01.2024 09:49:07][debug][Gitea] usorted volumes: Array
(
    [0] => /mnt/user/appdata/gitea
)

[01.01.2024 09:49:07][ℹ️][Gitea] Should NOT backup external volumes, sanitizing them...
[01.01.2024 09:49:07][ℹ️][Gitea] Gitea does not have any volume to back up! Skipping
@Commifreak (Owner)
Yes, setting the path to a snapshot will not change anything that's stored inside the container template. Those paths are still the "normal" ones.

I don't know if the plugin should handle this or if another solution for backing up ZFS snapshots is better.


dpremy commented Jan 2, 2024

I understand the hesitation. ZFS is newer to unRAID, likely a small subset of users need this, and if you aren't careful, it could expand in scope quickly. While this could be reproduced with a copy/sync of some sort, via a pre-backup script, that too seems like a rare edge case.

On the flip side, supporting something like this could cut the time containers are stopped from hours or minutes to under 10 seconds. Advanced usage, yes, but significant returns for larger containers.

In my use case I have a container that is a few GB, and backups take nearly an hour. With the current options in the plugin, the container is down during the entire backup. If this snapshot method were supported for Internal and External volumes, I could have the container restarted in a matter of seconds and then back up the volumes as time permits.

I believe this could be supported with a few changes:

  1. In Backup Types add an option to 'Stop all containers, pre-backup script, start containers, backup', with a comment that this requires a script to snapshot, copy or sync the data before backup.
  2. If this option is selected, add an Appdata Offline Copy field
  3. String-replace the Appdata Sources path with the Appdata Offline Copy path in isVolumeWithinAppdata and in the backup stage
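Step 3 could amount to a simple prefix rewrite. A minimal shell sketch, assuming the Appdata source is /mnt/user/appdata and the proposed (hypothetical) Appdata Offline Copy field points at the snapshot:

```shell
#!/usr/bin/env bash
# Rewrite a container volume path from the live appdata source to the
# offline copy before the internal/external check and the backup itself.
appdata_source="/mnt/user/appdata"
offline_copy="/mnt/user/appdata/.zfs/snapshot/offline-backup"  # hypothetical new field
volume="/mnt/user/appdata/gitea"

# ${var/#pattern/replacement} replaces only a leading match.
backup_path="${volume/#$appdata_source/$offline_copy}"
echo "$backup_path"
```

This prints /mnt/user/appdata/.zfs/snapshot/offline-backup/gitea, i.e. the template path redirected into the snapshot.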

@Commifreak (Owner)
Maybe simple native ZFS snapshot support would be sufficient? The plugin could stop all containers, create a snapshot, start all containers, and do its backup by adjusting the paths.

@Commifreak (Owner)
FTR:

So, this ticket is about ZFS snapshot support, if Docker uses ZFS storage.

The plugin needs to:

  • Stop every container
  • Create snapshot
  • Start containers
  • Backup the just created snapshot
  • Done

Right?
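The steps above can be sketched as a dry-run script. The run wrapper only echoes each command so the sequence can be inspected safely; the container name, backup destination, and archive step (tar here) are illustrative stand-ins, not what the plugin actually runs:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the proposed flow. Nothing is executed: 'run' echoes
# the command instead of running it.
run() { echo "+ $*"; }

run docker stop gitea                              # stop every container
run zfs snapshot pool-one/appdata@offline-backup   # create snapshot
run docker start gitea                             # start containers
run tar -caf /mnt/backup/gitea.tar.zst \
    -C /mnt/pool-one/appdata/.zfs/snapshot/offline-backup gitea  # back up the snapshot
run zfs destroy pool-one/appdata@offline-backup    # done: clean up
```

Containers are only down between the stop and start lines; the (slow) archive step reads from the snapshot while they are already running again.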
