extremely inefficient way of handling sparse files #70
After excluding the following files, backup times are back to normal:
You are right, there is currently no support for sparse files. fsarchiver just performs standard read operations on the fly and considers all logical bytes as actual data. Unfortunately, I don't think there is any easy way to add support for sparse files; it would probably require a change in the archive format and a lot of code changes.
This seems to contradict https://forums.fsarchiver.org/viewtopic.php?f=17&t=1009&p=2971&hilit=sparse#p2971, which states:

> When fsarchiver reads a sparse file from the disk, it's just a normal file where empty parts are just zero bytes. As a consequence the compression will be extremely good for all these parts (something like 99% I guess). Fsarchiver also checks the space used by the file on the disk to know whether or not it's a sparse file, and then it sets a flag in the archive. When you restfs, it will recreate the file as a sparse file if the flag is set, else it will just be a normal file with a lot of zero bytes inside. At the end, the archive will be very small anyway since big blocks with zero bytes have an excellent ratio.

The above is dated Sat Feb 21, 2004 12:12 pm, for 0.6.7 (2010-01-31).
What gives? Thank you,
Today's backup of a 30 GB Linux partition with ~15 GB of data took several hours (actually, I aborted it after several hours). In the past, this process has been rather quick (a matter of minutes).

Running `fsarchiver -v` shows the following: `/var/log/lastlog` is a so-called sparse file, and its apparent size is enormous. Looks like fsarchiver tries to process 1 TB of data.

https://linux.die.net/man/8/lastlog