
Reduce memory consumption with command line option #450

Open
fawick opened this issue Feb 15, 2016 · 13 comments

@fawick (Member) commented Feb 15, 2016

Hi @fd0! I just finished watching the video of your recent talk that is linked from the blog, and I really came to like restic and the philosophy behind the tool and its development process.

At one point in the talk you mention that restic currently allocates around 300 MB of RAM for reading multiple files and is therefore not well suited to run on, e.g., a Raspberry Pi. Would you consider adding a command line option for manually reducing that number at runtime, so that restic can be used on low-memory ARM devices? Besides the ubiquitous RasPi, I was thinking of backing up Android and Sailfish devices.

@fd0 (Member) commented Feb 15, 2016

Hey, thanks for your interest in restic. The roughly 300 MiB of RAM that I mentioned in the talk is mainly used for two things:

  • calculating scrypt(password): the constants for scrypt are hard-coded right now, and there's #17 to add code that automatically figures out how hard scrypt should be for the current system
  • keeping multiple buffers of blobs/packs to back up in memory: this is tightly coupled with the maximum concurrency that is allowed. For this I'd also like to have auto-tuning code, but a command line option is also likely.

To summarize my points above: it's planned and will be implemented at some point.
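
To put very rough numbers on the two points above, here is a back-of-envelope sketch in Go. The parameter values (N, r, p, worker count, buffer size) are illustrative assumptions, not restic's hard-coded constants; only the scrypt memory formula (about 128 * N * r bytes of working memory) is general knowledge about the KDF.

```go
// Back-of-envelope sketch of the two memory contributions mentioned above.
// All parameter values here are illustrative assumptions, not restic's
// hard-coded constants.
package main

import (
	"fmt"

	"golang.org/x/crypto/scrypt"
)

func main() {
	// 1) scrypt(password): working memory is roughly 128 * N * r bytes,
	//    so lowering N (or r) directly lowers the KDF's memory footprint.
	N, r, p := 1<<16, 8, 1 // assumed cost parameters
	fmt.Printf("scrypt working memory: ~%d MiB\n", 128*N*r/(1<<20))

	if _, err := scrypt.Key([]byte("password"), []byte("salt1234"), N, r, p, 32); err != nil {
		panic(err)
	}

	// 2) in-flight blob/pack buffers: roughly proportional to the number of
	//    concurrent workers times the buffer size per worker.
	workers, bufPerWorker := 8, 8<<20 // assumed values
	fmt.Printf("pack buffers: ~%d MiB\n", workers*bufPerWorker/(1<<20))
}
```

Both terms scale with a single knob each (the scrypt cost parameters, and the worker count), which is why auto-tuning or a command line option can address them fairly directly.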

@fawick (Member, Author) commented Feb 15, 2016

Sounds good to me. I'd like to keep this issue open to track that feature, then.

@fd0 (Member) commented Feb 15, 2016

Agreed.

@mika commented Nov 7, 2017

I'd also be interested in this topic from a server perspective, so not necessarily low memory, but "still not enough". :) For example, I've got a mail server with ~950 GB of data (scanned 64173 directories, 2147728 files in 2:41), and with 2 GB of RAM I ran into the OOM killer with restic (running under ionice -c 3 nice -n 19). Are there any known numbers for estimating restic's memory needs (per file, per GB, ...)? I've hit OOM with restic in all kinds of different VMs; knowing those needs upfront would make life easier, and if there were any way to reduce memory consumption, that would be great.

Thanks!

@dionorgua commented Nov 8, 2017

In my experience it is around 4 GB of RAM for a 1.5 TB repository (for backup). prune takes even more (~9-10 GB of RAM).

Memory usage depends not so much on the amount of data on the particular machine as on the repository size. So it is impossible to back up a 100 KB file on a machine with 2 GB of RAM (no swap) if the restic repository itself is ~1 TB.
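
As a very rough rule of thumb one can extrapolate linearly from the single data point reported above (~4 GB of RAM for a 1.5 TB repository during backup; prune roughly twice that). This is only an estimate derived from this thread, not a documented relationship:

```go
// Rule-of-thumb extrapolation from the numbers reported above
// (~4 GB of RAM for a 1.5 TB repository during backup).
// Single data point, linear extrapolation, no guarantee.
package main

import "fmt"

func main() {
	const (
		reportedRAMGB  = 4.0 // observed RAM during backup
		reportedRepoTB = 1.5 // repository size for that observation
	)
	ramPerTB := reportedRAMGB / reportedRepoTB // ~2.7 GB of RAM per TB of repo

	for _, repoTB := range []float64{0.25, 1.0, 1.5, 3.0} {
		fmt.Printf("repo %.2f TB -> roughly %.1f GB RAM for backup\n",
			repoTB, repoTB*ramPerTB)
	}
}
```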

@eliasp commented Apr 17, 2018

Here are my numbers:

  • Build:
    restic 0.8.3 (v0.8.3-11-gda77f4a2)
    compiled with go1.9.4 on linux/amd64
    
  • Dimensions (I believe that the majority of those files - repository clones, metadata caches, build directories, etc. - are actually excluded, and that the numbers reported by restic reflect the total amount):
    118775 directories
    829113 files
    129.220 GiB
    
  • Memory usage: around 3.8G per daily run

@harrim4n commented Dec 10, 2018

Is there any progress on this? I love restic, but my backups don't complete anymore due to memory exhaustion (1.5 TB repository, ~23 GB RAM used, B2 backend). AFAIK the concurrency can currently only be changed by modifying the source itself (#979 (comment)), which isn't really maintainable.

@tcurdt commented Aug 29, 2019

Has anything changed in terms of memory usage requirements?

@rouilj commented Sep 21, 2019

I am also running into this issue: a 2 GB system with a 250 GB restic repo. Running:

ionice -c 3 nice -n 19 ~/local/bin/restic -r /mnt/restic -p rk check --read-data-subset 3/7

(I also tried 4/14 to reduce memory use compared to 3/7) takes 1.7 GB. The command can take 10 or 12 hours to complete because the poor system is swapping its brains out, and the system is pretty much useless for anything else while it swaps.

I like restic but this isn't sustainable.

@rawtaz (Contributor) commented Nov 21, 2019

Can someone clarify what the actual suggestion in this issue is? If the memory consumption/requirements can be made smaller, why should we not just do that always, instead of requiring a command line switch to enable using less memory?

@matthijskooijman commented Nov 21, 2019

I assume that allocating less memory, where possible, would mean the process is slower, which would warrant not doing this by default?

@aawsome (Contributor) commented Jul 9, 2020

There have already been improvements with respect to memory usage, and more are in the pipeline.
In principle, I agree with @rawtaz's argument that reducing memory usage will make restic usable on a much wider range of devices.

That said, there are of course possibilities to reduce memory usage further, especially by trading memory for speed. I started working on index-related possibilities in #2794.

@MichaelEischer (Member) commented Oct 6, 2020

A proper solution for this will most likely use an on-disk repository index, or allow clients to load only parts of the index, as discussed in #1988.
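
For illustration only, here is a minimal conceptual sketch of what "load only parts of the index" could look like. The types and method names are invented and do not reflect restic's actual code or the design discussed in #1988:

```go
// Conceptual sketch of a partial, on-demand index. All names are invented
// for illustration; this is not restic's API or the design from #1988.
package main

import "fmt"

// ID stands in for a content hash identifying a blob or pack.
type ID [32]byte

// IndexEntry records where a blob lives inside a pack file.
type IndexEntry struct {
	PackID ID
	Offset uint32
	Length uint32
}

// PartialIndex keeps only a bounded amount of index data resident and loads
// other segments from disk on demand, trading lookup latency for RAM.
type PartialIndex interface {
	// Lookup returns the location of a blob, loading the relevant index
	// segment from disk if it is not currently cached.
	Lookup(id ID) (IndexEntry, bool)
	// Evict drops cached segments until at most maxBytes stay resident.
	Evict(maxBytes int)
}

func main() {
	fmt.Println("sketch only; see #1988 for the actual design discussion")
}
```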
