Feature Request: Deduplicate files on archives #704
Comments
(Mini question: why is archivebox 0.6 not on the PyPI repo, actually?)
I'm in the process of fixing an issue with the auto-build worker by moving it to GitHub Actions, but if it takes me more than a day to fix, I'll just roll the release by hand 😓
Ah, thanks for your answer. Good luck!
Related issues for the content hashing / Merkle tree / deduping process:
Oops, I can't find these files.
I read the whole rdfind man page, and I think calling it via subprocess would be a good way to get deduplication without a lot of work. See the sketch below.
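A minimal sketch of that idea, assuming rdfind is installed and on the PATH; the `dedupe_with_rdfind` name and the `./archive` directory are illustrative placeholders, not part of ArchiveBox:

```python
import subprocess

def dedupe_with_rdfind(archive_dir: str, dry_run: bool = True) -> None:
    """Ask rdfind to find duplicate files under archive_dir and replace
    them with hardlinks. Run with dry_run=True first to preview changes."""
    cmd = [
        "rdfind",
        "-dryrun", "true" if dry_run else "false",  # only report, don't touch files
        "-makehardlinks", "true",                   # replace duplicates with hardlinks
        archive_dir,
    ]
    subprocess.run(cmd, check=True)

# Preview what would be deduplicated, then apply for real.
dedupe_with_rdfind("./archive", dry_run=True)
dedupe_with_rdfind("./archive", dry_run=False)
```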
It's a good solution, but I think I'd rather have users manage that process themselves for now than build it into ArchiveBox. Hardlinks/symlinks are not well supported on all platforms and filesystems, and many people use ArchiveBox on weird filesystems (Docker overlayfs, NFS, FUSE, network mounts, Windows file shares, etc.) that don't even support fsync, let alone hardlinks. Also, the more "special" the setup is and the farther it gets from a flat folder structure, the more likely it is to break over time as filesystems and specifications change, which defeats the purpose of having a long-term durable archive.
Ah, I see... :/ It's a big, complex problem :/
Note: I've added a new DB/filesystem troubleshooting area to the wiki that may help people arriving here from Google: https://github.com/ArchiveBox/ArchiveBox/wiki/Upgrading-or-Merging-Archives#database-troubleshooting Contributions/suggestions welcome there.
Hello :)
Type
What is the problem that your feature request solves
When archiving a lot of pages, many files end up identical across those pages. The problem is that these duplicates take up more and more space even though their content never differs.
There are solutions on the filesystem side (ZFS deduplication, for example), but on the application side it is more complex.
I'm thinking of using rdfind, coupled with a script, to turn duplicate files into hardlinks. With hardlinks, if you delete the original page you don't lose the copies used by the other snapshots. But I'm afraid my tricks might confuse ArchiveBox down the road ^^
Describe the ideal specific solution you'd want, and whether it fits into any broader scope of changes
Store each unique file in a "global folder" and have every archive reference it via a link (hard or symbolic). Duplicate files share the same MD5 hash, and each hash would be stored in the DB so duplicates can be found quickly without a lot of I/O. A rough sketch of this idea follows.
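A hypothetical sketch of that scheme, assuming the global folder lives on the same filesystem as the archive (hardlinks require that) and outside the tree being scanned; the function name and the in-memory index standing in for the DB hash column are illustrative, not an existing ArchiveBox feature:

```python
import hashlib
import os
from pathlib import Path

def dedupe_into_global_folder(archive_dir: Path, global_dir: Path) -> dict[str, Path]:
    """Move the first copy of each file into global_dir (named by its MD5
    hash) and replace every occurrence in archive_dir with a hardlink to it.
    The returned dict stands in for the hash index that would live in the DB."""
    global_dir.mkdir(parents=True, exist_ok=True)
    index: dict[str, Path] = {}
    for path in sorted(p for p in archive_dir.rglob("*") if p.is_file()):
        digest = hashlib.md5(path.read_bytes()).hexdigest()
        canonical = global_dir / digest
        if digest not in index:
            path.rename(canonical)   # first copy becomes the stored original
            index[digest] = canonical
        else:
            path.unlink()            # duplicate: drop the redundant copy...
        os.link(canonical, path)     # ...and point the archive path at the store
    return index
```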
What hacks or alternative solutions have you tried to solve the problem?
Not tried yet, but I think rdfind could find the duplicates, or each file could be hashed directly.
How badly do you want this new feature?
(Both: it's a nice-to-have, but my disk space says it's important ^^)