Reduce mem usage #43
Conversation
Storing the output from 'curl' commands directly in shell variables is very inefficient, and requires much more RAM whenever gravity.sh updates the block lists (and especially on the first run). Store the raw blocklists in temporary files on disk, and process those.
Remove extraneous calls to several programs (cat, uniq).
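A minimal sketch of the idea, assuming hypothetical list URLs and temp-file paths (not the actual gravity.sh values):

```bash
#!/usr/bin/env bash
# Illustrative sketch only; these URLs and paths are placeholders.

tmpdir=$(mktemp -d)

# Hypothetical blocklist URLs
lists=(
  "https://example.com/hosts.txt"
  "https://example.org/ads.txt"
)

# Write each list straight to disk instead of into a shell variable
for i in "${!lists[@]}"; do
  curl -sL "${lists[$i]}" -o "$tmpdir/list.$i"
done

# 'sort -u' replaces the 'cat list.* | sort | uniq' pipeline: two fewer
# processes, and sort can spill to temporary files instead of holding
# everything in RAM
sort -u "$tmpdir"/list.* > "$tmpdir/combined.txt"
```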
Initially, we switched to putting them in a variable. What do you mean by dependency processing? |
By "dependency processing", I mean something like a makefile: you have outputs that require various actions and other dependencies. Storing the data all in RAM (in bash variables), thus forcing a host to use additional swapfiles, in order to save writes to an SD card seems backwards. |
I think some of your memory usage stats may be low. The sorting step alone of gravity.sh uses 134,168 kB on my system. That said, I agree we should have a goal to get away from storing everything in variables. Here are my stats on sorting:
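One way to reproduce that kind of per-step measurement, assuming GNU time is installed (file names are illustrative):

```bash
# GNU time (-v) reports "Maximum resident set size" in kB for the child
# process; here that covers just the sort step.
/usr/bin/time -v sort -u /tmp/list.0 /tmp/list.1 -o /tmp/combined.txt 2>&1 \
  | grep 'Maximum resident set size'
```
|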
I haven't forgotten about this. Just waiting until I have some more time to test it. It looks pretty good from my initial scan. |
Your commits are on the right track, but there are some issues:
|
Did you want to make the changes? |
I have some time this weekend if you want to make the changes. |
I'd be happy to make a PR with the changes you requested if this issue is still open for consideration. |
Yes, it is. Thanks! |
Okay, PR #68 opened with @jacobsalmela's changes for @hawson's memory reductions. |
Cool. I'll close this one then. Thanks! |
This pull aims to reduce the RAM usage needed by pihole when parsing new/updated block lists.
The files are downloaded locally and then operated on, instead of being stored as (very large) variables in the shell script. Some simplistic tests (querying /proc/<pid>/status for RSS size) show that the code in master uses ~345,000 kB when running (that's the high-water mark). When processing locally, RAM usage is somewhere around 8,600 kB.
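A minimal sketch of that measurement, assuming gravity.sh is invocable from the shell (VmHWM in /proc/<pid>/status is the peak RSS):

```bash
# Launch the script, then poll its status file; VmHWM only ever grows,
# so the last reading is the high-water mark. Script name is an assumption.
gravity.sh &
pid=$!
while kill -0 "$pid" 2>/dev/null; do
  grep VmHWM "/proc/$pid/status"
  sleep 1
done | tail -1
```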
It's important to note that the files are downloaded only if the upstream copies are newer, which should address some of the concerns mentioned in #37. That said, if SD card I/O is really that much of a concern, there should be full dependency processing to eliminate unnecessary writes.
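As an illustration of the conditional download, curl's -z/--time-cond flag sends If-Modified-Since based on the local file's mtime; the URL and path below are placeholders:

```bash
url="https://example.com/hosts.txt"   # placeholder upstream list
out="/etc/pihole/list.0.domains"      # placeholder local copy
tmp=$(mktemp)

if [ -f "$out" ]; then
  # Ask the server for the file only if it changed since our copy's mtime
  curl -sL -z "$out" "$url" -o "$tmp"
else
  curl -sL "$url" -o "$tmp"
fi

if [ -s "$tmp" ]; then
  mv "$tmp" "$out"    # new content: one write to replace the old copy
else
  rm -f "$tmp"        # 304 Not Modified (empty body): no write to the card
fi
```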