memory usage is too high #16
Comments
As long as this issue exists, it needs to be mentioned somewhere very public, e.g. on the landing page.
Ok, I have to rephrase. I tested on a machine with 4 GB RAM and 20 GB swap. The backup fails after 4 TB of data. This makes attic/borg unusable in a mature production environment. 4 GB RAM is an edge case, but a backup must never fail.
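A rough back-of-the-envelope calculation shows why 4 TB can exhaust 4 GB of RAM with ~64 kiB chunks. The per-chunk index overhead used below is an illustrative assumption, not Borg's exact figure:

```python
# Sketch: estimate chunk-index RAM for a 4 TB backup with ~64 kiB chunks.
data_size = 4 * 2**40          # 4 TiB of backed-up data
avg_chunk = 64 * 2**10         # ~64 kiB average chunk size (16 mask bits)
n_chunks = data_size // avg_chunk

# ASSUMPTION: on the order of ~100 bytes of in-memory index/cache
# overhead per chunk (hash, refcount, size fields, hash-table slack).
entry_bytes = 100
ram_gib = n_chunks * entry_bytes / 2**30

print(n_chunks)   # 67,108,864 chunks
print(ram_gib)    # ~6.25 GiB -- already more than 4 GiB of RAM
```

With 20 mask bits (~1 MiB chunks) the chunk count, and hence this estimate, drops by a factor of 16.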
You could add this to attic issue 302 (attic and borg do not differ in this respect yet).
I wanted to note that I am working on improving this by making the chunker parameters configurable via a command-line option.

Attic (and currently also Borg) creates lots of rather small chunks of ~64 kiB. This is because the rolling-hash mask is 16 bits: whenever the last 16 bits of the hash are zero, a chunk is cut, which happens statistically every 64 kiB. This provides fine-grained deduplication, but creates high management overhead (especially RAM and disk space used for the indexes).

By using e.g. 20 mask bits, the average chunk size would be ~1 MiB, lowering disk space and RAM needs to 1/16 in the best case. Note that a (small) file always creates at least one chunk, so there can still be lots of small chunks if you have many small files. Bigger chunks also mean coarser deduplication granularity. So bigger chunks have pros and cons, but at least they make Borg usable backup software for people with large amounts of backup data or relatively little RAM.

See also there: one reason the author had to switch to obnam was the issue mentioned above. (I personally tried obnam before attic, but it was way too slow for my taste, especially with encryption.)
By the way, see also 4633931.
Will try asap |
see jborg/attic#302 and jborg/attic#41.