Memory leak? #310
First of all, if you send a pid between nodes, it's converted to a partisan_remote_reference, because pids can't be sent between nodes that aren't connected with distributed Erlang, so it has to be converted to a format that works without disterl. That's what that represents. Second, without seeing the code that's inserting the values into the set, I can't tell whether this is a bug. It could be that you're using the awset incorrectly.
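As a quick illustration of why pid-derived actors multiply (a plain Erlang sketch, not Lasp or Partisan code): every process sees a different self(), so every updating process shows up as a distinct actor reference.

```erlang
-module(pid_actor_demo).
-export([run/0]).

%% Sketch: each spawned process reports its own pid as its "actor".
%% Even on a single node, the three actors are all distinct, which is
%% how per-process actors accumulate in the replicated state.
run() ->
    Parent = self(),
    [spawn(fun() -> Parent ! {actor, self()} end) || _ <- lists:seq(1, 3)],
    Actors = [receive {actor, A} -> A end || _ <- lists:seq(1, 3)],
    io:format("distinct actors: ~p~n", [lists:usort(Actors)]).
```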
Thanks for the consideration. I simply followed the instructions from the README:

From node a:

Result:

As a remark: by simply adding a little print in lasp_ets_storage_backend to see what it was recording into the ets table (which makes the terminal output messy but does not modify any behaviour), I see something similar to the screenshot above (that screenshot was on a cluster of 5 nodes), where some references are repeated many times. I don't know why it does this, but the fact that it is exact redundancy inside a single record seems strange to me. I will post a screenshot of that little print from a clean, freshly cloned version to see if it's the exact same thing as before (printing the Record inside the do_put function just before ets:insert).

I might be wrong in my approach; if you believe that's the case, please let me know.
I want you to try something else. Instead of the README-style update that passes self() as the actor, try the same update with term_to_binary(node()) as the actor.
This should help narrow the problem. Related: you really shouldn't be using self() as the actor for the CRDT updates.
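A minimal sketch of the two variants, assuming the lasp:update/3 call shown in the README (SetId and Value are placeholders for an already-declared awset and the element being added):

```erlang
%% README-style update: the actor is the calling process's pid, which gets
%% rewritten as a per-process partisan_remote_reference on other nodes.
{ok, _} = lasp:update(SetId, {add, Value}, self()),

%% Suggested update: the actor is a binary derived from the node name, so
%% every update issued from this node reuses the same node-stable actor.
{ok, _} = lasp:update(SetId, {add, Value}, term_to_binary(node())).
```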
Oh. That surprises me because using self() in the CRDT update line was precisely what is written in the README example I followed.
Yes, well. The README hasn't been updated in like 4 years, while the software has, and we've learned a lot since then.
Wow. Process size is not increasing anymore and the Record content is much cleaner!
I suppose using term_to_binary(node()) instead of self() in the README easily closes this :) |
#312 addresses the README update. |
#313 addresses prevention of non-iolist values as actors, to ensure proper node-independent serialization.
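As a side note on what a node-independent actor looks like in practice, a quick check (an illustration only; it just shows that the node-derived actor is a plain binary while a pid is not):

```erlang
%% A binary derived from the node name serializes identically everywhere,
%% whereas a pid is process-specific and has to be rewritten by Partisan.
true  = is_binary(term_to_binary(node())),
false = is_binary(self()).
```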
When running a cluster of nodes locally for a while, they end up crashing because they run out of memory, and my computer freezes.
At the beginning this was occurring after about 30 minutes with 5 nodes running in a cluster sharing an awset CRDT.
Then I tried reducing the state_interval (to 1000ms instead of 10000ms, for example) and noticed this memory problem appears much more quickly.
I noticed the culprit was the lasp_ets_storage_backend process, since its memory use was quickly growing to hundreds of MB.
Trying to figure out what it was storing, I added a print whenever it stores something in the ets table, just to see what was going on...
What it stores looks very strange to me.
I am running locally a little cluster of 5 nodes with an awset that is slowly updated (the value is updated every 10 seconds and state_interval is also 10000ms). As you can see (I simply printed the record when lasp_ets_storage_backend tries to store it in ets), it looks like it stores the state_awset with some extremely redundant partisan_remote_reference information (there are only 5 nodes but their references are repeated tons of times).
What you can see in the image is just one record being put into the ets table. I don't understand why one record would contain such redundant information.
Any idea what's going on, or is it just a bug...?
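For reference, each node's update loop follows the README pattern; roughly something like the sketch below (simplified; module and variable names are placeholders, and it assumes the declare/update calls from the README, including self() as the actor):

```erlang
-module(awset_repro).
-export([start/0]).

%% Simplified sketch of the setup described above: declare a shared
%% add-wins set, then add a new element every 10 seconds using the
%% README-style actor (the calling process's pid).
start() ->
    {ok, {SetId, _, _, _}} = lasp:declare({<<"set">>, state_awset}, state_awset),
    loop(SetId, 0).

loop(SetId, N) ->
    {ok, _} = lasp:update(SetId, {add, N}, self()),
    timer:sleep(10000),
    loop(SetId, N + 1).
```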