
Incremental size way bigger than expected after fstrim #138

Closed
Zoltan-LS opened this issue Oct 18, 2023 · 4 comments
Labels
documentation Improvements or additions to documentation

Comments

@Zoltan-LS

Version used
0.99

Describe the bug
I am unsure if this is normal behavior or if it is specific to KVM. The problem I am seeing is that the incremental backups become 2-3 times the size of the disk after fstrim is executed.

-rw-r--r-- 1 root root 13G Oct 3 01:12 sda.full.data
-rw-r--r-- 1 root root 168M Oct 3 05:15 sda.inc.virtnbdbackup.1.data
-rw-r--r-- 1 root root 131M Oct 3 09:15 sda.inc.virtnbdbackup.2.data
-rw-r--r-- 1 root root 75M Oct 3 13:15 sda.inc.virtnbdbackup.3.data
-rw-r--r-- 1 root root 72M Oct 3 17:15 sda.inc.virtnbdbackup.4.data
-rw-r--r-- 1 root root 72M Oct 3 21:15 sda.inc.virtnbdbackup.5.data
-rw-r--r-- 1 root root 38G Oct 4 01:21 sda.inc.virtnbdbackup.6.data
-rw-r--r-- 1 root root 174M Oct 4 05:21 sda.inc.virtnbdbackup.7.data

Expected behavior
I'd expect/like to see this smaller in size. Is there any workaround for this?

Hypervisor information:

  • OS: Alma
  • HV type: plain libvirt

Logfiles:
Don't see any relevant logs to provide.

Workaround:
The workaround is to execute fstrim before the full backup, as sketched below.
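A minimal sketch of that ordering, assuming the qemu guest agent is running in the VM (virsh domfstrim needs it) and that "vm1" and "/backup/vm1" are placeholders for the actual domain name and backup target:

# Trim inside the guest first, via the guest agent, so the blocks
# touched by the discard are written out before the new backup chain starts.
virsh domfstrim vm1

# Take the full backup right after the trim; the trimmed blocks end up
# in the full image instead of inflating a later incremental.
virtnbdbackup -d vm1 -l full -o /backup/vm1

# Subsequent incrementals then only contain blocks changed since the full.
virtnbdbackup -d vm1 -l inc -o /backup/vm1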

Zoltan-LS added the bug (Something isn't working) label Oct 18, 2023
@abbbi
Owner

abbbi commented Oct 18, 2023

The backup operates on block level, and I think fstrim results in changed blocks. So qemu marks these blocks as dirty and they end up in the backup (they are part of the qcow bitmap, i.e. marked "dirty"). I don't know if there is a way to tell qemu to "discard" changes done by fstrim. I don't think I can change anything in virtnbdbackup to behave differently.

Skipping blocks marked as dirty is not an option: it results in unusable disks after restore.

Maybe libvirt/qemu projects have some documentation on that.
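For what it's worth, you can list the persistent bitmaps qemu keeps in the qcow2 image with qemu-img; a rough sketch, assuming the image path is /var/lib/libvirt/images/vm1.qcow2 (placeholder) and -U because the domain is running:

# Print image metadata, including the "bitmaps" section with the
# persistent dirty bitmaps (name, granularity, flags) that bitmap-based
# backup tools create in the qcow2 file.
qemu-img info -U /var/lib/libvirt/images/vm1.qcow2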

abbbi added the documentation (Improvements or additions to documentation) label and removed the bug (Something isn't working) label Oct 18, 2023
@Zoltan-LS
Author

Fair enough. I thought that might be the case here; it was just a slight hope that maybe you had encountered this and found a fix for such cases.

@abbbi
Owner

abbbi commented Oct 18, 2023

I don't have a solution (other than timing fstrim with the full backups), or check the libvirt/qemu docs for options that might help.
I see you still use version 0.99: you should consider updating, the project has seen various updates since this version.

@abbbi
Owner

abbbi commented Oct 18, 2023

Other solutions based on dirty bitmaps have the same "issue":

https://forum.proxmox.com/threads/huge-dirty-bitmap-after-sunday.110233/

So I'd say it works as designed. Maybe running fstrim on the host makes more sense.

abbbi closed this as completed Oct 18, 2023
Repository owner locked and limited conversation to collaborators Oct 19, 2023
abbbi converted this issue into discussion #139 Oct 19, 2023
