addressing monerod "Killed" / OOM #2137
I see you're on a pine64. I've been running git rev e3da0ca for about a month on a Beelink GT1. No special measures needed.
Interesting! I am, indeed, on a pine64. I'll compile from source and report back. Does that patch reduce memory usage somehow?
Mainly it includes b52abd1, which definitely reduces RAM use.
Very cool, thanks for the information. I've finished building from source and am now running without any extra flags. I'll report peak RAM usage from my logs after letting it run a while. Thanks @hyc!
My build from master at a0b494a did not last very long...
On the next run, it segfaulted. Building with debug symbols now and getting a traceback with gdb. Out of the OOM frying pan and into the segfault fire!
Disregard the last version of this comment, where I erroneously stated I was still seeing OOM. monerod is still syncing (3 days uptime, a new record on the pine64) with e3da0ca. It's gotten all the way to 689001/1348576 blocks. It's using 80+% of the 2GB of RAM and maxing out the four cores. It's kind of a shame monerod doesn't play well with smaller computers, but such is the price. Closing this issue as OOM issues seem to have been largely alleviated since the last release. Thanks @hyc!
Yeah, that's definitely odd. I guess you should try what others have suggested, and just use --block-sync-size 20 (or 10, if needed).
(it was my other pine64 with the latest stable release that was still ooming, not surprising. ssh'd into wrong box) |
I am having a similar issue, but I am not sure if it is related or not. The box has 16GB of RAM, yet
Running few
Finally I manually killed the process to get
Thanks.
@renecannao Hey, I know that hosting provider! Love them! But what architecture are you running? I think they offer both ARM and x86 in up to 16GB. I've used the same release on x86 with 8GB without issues.
Good provider indeed! :)
I'm running x86_64 - Debian 8.7. I somehow doubt the small lib bumps from 8.7 to 8.9 matter. I checked and we have roughly the same set of CPU extensions. Something low-fi might be to
Thank you for the advice, but somehow I would like to exclude that it is related to syncing, as the node is already in sync.
You're still running vanilla v0.10.3.1? Memory use in current git master is vastly different already, so there's very little to be gained by looking at v0.10.3.1 now. I looked through your heaptrack output, but it seems to me that it didn't identify any actual leaks, only overall allocations.

If you can build your own binaries, try compiling current git master. And if you're still seeing OOM issues, also attach the relevant dmesg output - the kernel logs when the OOM killer selects a process for termination.

On my VPS with 4GB RAM, monerod only uses 2.6GB RES, of which 1.77GB is shared memory (and therefore unimportant for RAM usage calculations). My mleak tracer is faster and has a smaller footprint than heaptrack: https://github.com/hyc/mleak
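As a concrete way to act on the dmesg suggestion above, a grep like the following pulls out OOM-killer activity. The log line here is a synthetic sample for illustration; on a real box you would pipe `dmesg` into the same filter, and the exact message format varies by kernel version, so the pattern is deliberately loose.

```shell
# Filter kernel-log lines for OOM-killer events. The sample line below is
# synthetic; in practice run:  dmesg | grep -iE 'out of memory|oom-killer|killed process'
sample='Out of memory: Killed process 1234 (monerod) total-vm:4500000kB, anon-rss:1900000kB'
printf '%s\n' "$sample" | grep -iE 'out of memory|oom-killer|killed process'
```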
Yes, I was still running vanilla v0.10.3.1.
I also investigated the large virtual memory a bit, and it seems related to an
I wonder why it calls Conclusion: Thanks
The mmap call is normal for LMDB. The size of the mapped region is somewhat irrelevant; on a 64-bit OS you currently have a 256TB address space, so this is trivial.
@hyc: thank you for the reply. Some more context about this question.
While
Basically, the LMDB docs are very clear in saying that this is not an issue. To be clear, I consider this not a bug but a feature request.
No. The amount of RAM used by LMDB is irrelevant - it all resides in the OS page cache, and the OS will reclaim it for other purposes whenever any other process asks for memory.

As for your latter concern - the memory used by LMDB is shared memory. Private memory is something else entirely, and any leaks you're looking for would be reflected there. If the memory profile you included here is to be believed, with a process using 0kB of shared memory, it would imply that all of LMDB's data has already been paged out and all of monerod's memory use is due to regular allocations. I'm not sure I believe that, but it's always possible. Try getting the same output shortly after startup.

edit: I see, these are the stats specifically for the data.mdb file. Still, it's odd that it's reported as Private instead of Shared.
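One way to see the virtual/resident/shared split being discussed here is `/proc/<pid>/status` on Linux. A sketch, using the current shell's PID as a stand-in for monerod's; note that the RssFile/RssShmem breakdown only exists on kernels >= 4.5.

```shell
# VmSize includes LMDB's entire data.mdb mapping; VmRSS is what is actually
# resident in RAM; RssFile/RssShmem are the file-backed/shared portions the
# OS can reclaim under pressure. Substitute monerod's PID for $$ in practice.
grep -E '^(VmSize|VmRSS|RssFile|RssShmem):' /proc/$$/status
```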
Just in case, if it wasn't mentioned here: swap is 0, so any application that keeps expanding in RAM will be killed once it reaches the (total - ~100 MB) RAM edge.
I've been having a similar problem on Webfaction. Webfaction is a shared host that provides each user with 1GB RAM (and, supposedly, monerod can run in under 1GB of RAM). When I run monerod, it gradually eats up RAM until it exceeds 1GB, at which point Webfaction kills off the process. I don't know how to communicate to monerod (or the underlying LMDB) that it must stay within 1GB. I've tried ulimit, but that didn't accomplish anything (ulimit -v makes it so monerod won't even run, and ulimit -m has no effect). I gave up, waiting for the new release, because everyone here said the new release would be much more memory efficient. 0.11.0 was just released today, and I immediately tried it out --- same problem. How do I get monerod to run on a VPS with 1GB of RAM?
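The `ulimit -v` behavior described above has a plausible explanation: `-v` caps *virtual* address space, and LMDB's large data.mdb mmap counts against that cap even though most of it is never resident, so monerod trips the limit at startup. A minimal sketch of the subshell semantics:

```shell
# ulimit -v sets the virtual address-space cap in kB; it applies only within
# the subshell, and the second call prints back the cap that was just set.
( ulimit -v 1048576; ulimit -v )
```

On a systemd host with cgroup v2, something like `systemd-run --scope -p MemoryMax=1G monerod` caps resident memory instead, which is closer to what a shared host actually enforces - though whether monerod survives inside such a cgroup rather than getting OOM-killed there is a separate question.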
Even once monerod itself is fully up-to-date with the p2p network, monerod uses gobs of memory very quickly whenever a wallet connects to it that is behind a large number of blocks and the wallet needs to synchronize all those blocks with the server. |
@Engelberg for now you can use flag

What happens is: the default number of blocks to download and keep in memory is 200, which is very fine for early blocks (before block 1,200,000) - it allows quickly downloading a lot of small blocks and processing them efficiently. But when blocks became huge (because of growing popularity around the 1.2M-block point), 200 blocks require a lot of RAM, as well as time to be downloaded.
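The flag being referred to is presumably the `--block-sync-size` option named earlier in this thread. A sketch of the invocation - assembled and printed here rather than executed, since this snippet doesn't ship a node:

```shell
# Reduce the number of blocks fetched and held in memory per sync batch.
# The flag name comes from earlier in this thread; 10 is the smaller value
# suggested there for very constrained boxes.
cmd="monerod --block-sync-size=10"
printf '%s\n' "$cmd"
```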
@garmoshka-mo It appears the new default for block-sync-size is 20 in the latest version, but yes, I found that adding

But now, I'm finding that when a wallet connects to the fully synchronized node, and the wallet is behind and needs to download thousands of blocks from the node, the node's RAM usage again goes through the roof. So now I need to find a way to limit the node's RAM usage after the node is synchronized, when wallets are interacting with it.
It took so much memory, see:
I run it in Docker; the OS version is:
I had to restart the node after it synced all blocks. Is that usual?
This is still a problem with the latest monerod release, 0.17.1.7
@bitsanity this time it is an attack on the network |
Any updates on this network attack? |
@sijanec the attack vector has been fixed in v0.17.1.9 |
Well, my issue is probably unrelated, but I am running a freshly compiled

Thanks for the response, @selsta

P. S.: Apart from that, the blockchain was corrupted and further starts of
P. S.: The node that crashed the computer was run without any command-line options for optimized work on a low-end setup; I am trying this now and it is still in the process of syncing.
Okay, it crashed again; it's not usable for me on a rock64. Anyone else?
@sijanec another person reported similar issues with a rock64. Can you join #monero on IRC? Someone there might be able to help with debugging this issue.
Running monerod with less than 4GB of RAM and the stock settings seems to pretty reliably result in the kernel / OS killing the daemon with an OOM error.
This is what dmesg looks like on my machine when this happens:
Here's some conversation about this happening on reddit.
It seems like running with --limit-rate=400 and --max-concurrency=1 resolves my issues, allowing monerod to sync using 1.3GB of my 2GB of RAM.
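Putting this comment's flags together with the reduced sync batch suggested earlier in the thread, a low-RAM invocation might look like the following. The command is assembled and printed only, not executed here, and the values are starting points rather than tuned settings.

```shell
# Flags collected from this thread for low-memory boxes:
#   --limit-rate=400      throttle p2p transfer rate, slowing the inbound flood
#   --max-concurrency=1   restrict worker concurrency to a single thread
#   --block-sync-size=10  fetch and verify blocks in small batches
cmd="monerod --limit-rate=400 --max-concurrency=1 --block-sync-size=10"
printf '%s\n' "$cmd"
```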