Compilation/Completion server memory leak #5735
Comments
That's gonna be fun to investigate...
There are not many places where the memory can leak. You can use …
I have zero ideas what could cause a memory leak. The only new cache I added is the one for directories.
@kevinresol try to change one of your macro files after the memory has grown; this will force a discard of the macro context, so we can check whether the memory comes from there.
Here you are: http://pastebin.com/Kw73P9WT
This one I will try a bit later.
I have quite a few macros that generate types, and the names are incremental. These types are generated based on another type. In short, there are types T1 and T2, and a macro that reads T1 and T2 and generates G1 and G2.
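A minimal sketch of that pattern, assuming a Haxe expression macro (the class name `GenMacro`, the `generate` function, and the `Gen` name prefix are hypothetical, not from the actual project):

```haxe
import haxe.macro.Context;
import haxe.macro.Expr;

class GenMacro {
  // Persists across compilation requests while the macro context is cached,
  // so names keep incrementing instead of being reused.
  static var counter = 0;

  public static macro function generate():Expr {
    // Each call defines a brand-new module with a fresh name
    // (Gen0, Gen1, Gen2, ...), so the compilation server can never
    // reuse a previously cached module for it.
    var name = "Gen" + (counter++);
    Context.defineType({
      pack: [],
      name: name,
      pos: Context.currentPos(),
      kind: TDClass(),
      fields: []
    });
    // Return a reference to the freshly defined type.
    return macro $i{name};
  }
}
```

Under this assumption, every compilation request can add more uniquely named modules to the server's cache, which would match the growth described above.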
That indeed sounds like a user-made memory leak. Maybe we should do something like discarding modules that haven't been used for x (compilation) requests. However, the compilation server itself should still not have any leaks, I think.
There are definitely too many JsonContextXXX in this log :) Closing since it seems user-triggered.
But there's a LEAK there too...
What is a leak, actually?
Bad. Only Nicolas knows more.
What I see are hundreds of JsonContextXXX modules. The question is: should we fix that by discarding old unused modules (or even full contexts)? I'll open a separate issue for this, but the reason for the memory growth here seems clear: generating different modules with unique names does not play well with the compilation cache server.
I will investigate how much of this ballooning comes from my end, but I would remind you at the same time that "this does not seem to happen with 3.3.0-rc.1 but only on the latest git version", so there must be more to it. Any hints as to what might have changed between those versions? I vaguely remember @nadako did some optimization to avoid creating new contexts.
Using vshaxe, I found that the `haxe` process uses excessive memory (which can reach >80% of total machine RAM) and makes the whole machine unresponsive. This does not seem to happen with 3.3.0-rc.1, only with the latest git version.
I am working on a rather big and macro-heavy project, and the compilation time ranges from 15 to 30 seconds.
In the following screenshot, the `haxe` process using ~5 GB of RAM is the one started by vshaxe. The other `haxe` process is one started manually with `haxe --wait`, which I think would eventually suffer from the same problem; it just hasn't yet because it is not "used" as frequently as the completion server.