Out-of-memory issue #48

Closed
oberstet opened this issue May 15, 2014 · 6 comments

@oberstet
Contributor

Tonight, a Clandeck worker process was killed by the Linux out-of-memory (OOM) killer.

Crossbar log:

2014-05-15 03:23:35+0000 [Controller  17782] Worker 17791: Process connection gone (A process has ended with a probable error condition: process ended by signal 9.)

Linux kernel log:

grep -Hi kill /var/log/*.log

/var/log/kern.log:May 15 03:23:34 ip-10-74-141-149 kernel: [138938.850931] Crossbar.io Wor invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
/var/log/kern.log:May 15 03:23:34 ip-10-74-141-149 kernel: [138938.850963]  [<ffffffff81142449>] oom_kill_process+0x1a9/0x310
/var/log/kern.log:May 15 03:23:34 ip-10-74-141-149 kernel: [138938.856958] Out of memory: Kill process 17791 (Crossbar.io Wor) score 595 or sacrifice child
/var/log/kern.log:May 15 03:23:34 ip-10-74-141-149 kernel: [138938.856967] Killed process 17791 (Crossbar.io Wor) total-vm:2901316kB, anon-rss:2281184kB, file-rss:0kB

It's unclear what caused the runaway memory consumption ... needs investigation.

Background:

@pataelmo

@oberstet Can you elaborate on this bug, or let me know which version is most likely stable in this respect? I just saw this with version 0.13.2. I had a mostly stock config file (I set up a second realm and gave generic permissions to a URL path), with no extra workers or containers running. I had one client connected (to test stability of both sides), not really doing anything, for almost 7 hours before it used up the memory.

Killed process 26203 (crossbar-worker) total-vm:226008kB, anon-rss:108436kB, file-rss:32kB

@meejah
Contributor

meejah commented Jan 30, 2017

@pataelmo Can you also post the versions of the libraries you were using, e.g. Twisted? (crossbar version should tell you.)

@pataelmo

Crossbar.io : 0.13.2
Autobahn : 0.13.1 (with JSON, MessagePack, CBOR)
Twisted : 16.1.1-EPollReactor
LMDB : 0.89/lmdb-0.9.18
Python : 2.7.10/PyPy-5.0.1
OS : Linux-3.13.0-37-generic-x86_64-with-debian-jessie-sid
Machine : x86_64

@pataelmo

Do you guys have a standard way of searching for memory leaks? I did a quick search and found some tools, but didn't want to reinvent the wheel if something standard was already available. Happy to inject some code to try to find the source on my end if you can't easily reproduce this.
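
For example, something along these lines is what I have in mind; just a minimal, hypothetical sketch (Linux-only, stdlib plus Twisted, not anything Crossbar ships) that logs the process's current RSS once a minute, so a slow climb shows up in the logs long before the kernel OOM killer fires:

# Hypothetical monitoring sketch: periodically log this process's resident
# set size by reading /proc/self/statm (Linux-only).
import resource
import sys

from twisted.internet import reactor, task

PAGE_SIZE = resource.getpagesize()

def log_rss():
    with open("/proc/self/statm") as f:
        # the second field of /proc/self/statm is the resident size in pages
        resident_pages = int(f.read().split()[1])
    sys.stderr.write("current RSS: %d kB\n" % (resident_pages * PAGE_SIZE // 1024))

# log every 60 seconds for as long as the reactor runs
task.LoopingCall(log_rss).start(60.0)
reactor.run()

If the RSS keeps growing while the client sits idle, that would point at something held by the worker itself rather than normal allocator behaviour.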

@pataelmo

pataelmo commented Feb 6, 2017

I took another stab at this from a different angle. While it's unfortunate that memory usage gets that high, it's definitely not a real leak. I'm sure most of what I saw here was down to the way Python manages memory. I was running this on a cloud VM with only 512 MB of RAM, alongside nginx and MySQL. Once I bumped the memory up to 1 GB, I no longer see this problem on two separate instances.

@meejah Thanks for replying and, I assume, looking into this, even though it turned out to be nothing.

@meejah
Contributor

meejah commented Feb 6, 2017

Are you using TLS? Do you know how many connections you had/have?

Also, PyPy tends to use a lot more memory than CPython (but also tends to be much faster!), and PyPy doesn't do the deterministic, immediate collection of memory that happens on CPython when a refcount drops to zero (and there are no cycles). Anyway, the point is that if you are more constrained on memory, using CPython (and taking a bit of a speed hit) could be the right strategy.
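
A toy example of the difference (nothing Crossbar-specific, just an illustration):

import gc

class Blob(object):
    # holds roughly 10 MB so the effect is visible in process memory
    def __init__(self):
        self.data = b"x" * (10 * 1024 * 1024)

    def __del__(self):
        print("Blob finalized")

b = Blob()
b = None       # CPython frees the 10 MB and runs __del__ right here;
               # PyPy typically keeps it around until a GC run happens
print("reference dropped")

gc.collect()   # on PyPy, the memory (and the finalizer) typically go away around here
print("after gc.collect()")

On PyPy you can call gc.collect() yourself if you want memory back at a known point, but the usual trade-off stands: more memory, more speed.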

Thanks for following up!
