Compilation/Completion server memory leak #5735

Closed
kevinresol opened this issue Oct 4, 2016 · 14 comments

@kevinresol
Contributor

kevinresol commented Oct 4, 2016

Using vshaxe, I found that the haxe process uses excessive memory (it can reach more than 80% of the machine's total RAM) and makes the whole machine unresponsive. This does not seem to happen with 3.3.0-rc.1, only with the latest git version.

I am working on a rather big, macro-heavy project whose compilation time ranges from 15 to 30 seconds.

In the following screenshot, the haxe process using ~5 GB of RAM is the one started by vshaxe. The other haxe process is one I started manually with haxe --wait; I think it would eventually suffer from the same problem, but it has not yet simply because it is not "used" as frequently as the completion server.

[screenshot: macOS Activity Monitor (Oct 4, 2016, 4:30 PM) showing the two haxe processes, with the vshaxe-started one at roughly 5 GB of RAM]

@Simn
Member

Simn commented Oct 4, 2016

That's gonna be fun to investigate...

@Simn added this to the 3.3.0 milestone Oct 4, 2016
@Simn self-assigned this Oct 4, 2016
@ncannasse
Member

There are not many places where the memory can leak.

You can use haxe --connect 6000 --display memory to get information about the memory used by the compiler cache. Otherwise it could be the cached macro context, or other caches that @Simn might have added recently :)
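
As a usage sketch (the port matches the command above; build.hxml is just a placeholder for whatever build file the project uses):

```
# start the compilation/completion server
haxe --wait 6000

# in another shell: compile through the server a few times, then dump cache stats
haxe --connect 6000 build.hxml
haxe --connect 6000 --display memory
```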

@Simn
Member

Simn commented Oct 12, 2016

I have zero ideas about what could cause a memory leak. The only new cache I added is the one for directories.

@ncannasse
Member

@kevinresol try changing one of your macro files after the memory has grown; this will force a discard of the macro context, so we can check whether the memory comes from there.

@kevinresol
Contributor Author

Here you are: http://pastebin.com/Kw73P9WT
At the time of capture, macOS's Activity Monitor showed the haxe process using a few GB of memory.

@kevinresol
Contributor Author

@kevinresol try changing one of your macro files after the memory has grown; this will force a discard of the macro context, so we can check whether the memory comes from there.

This one I will try a bit later.

@kevinresol
Contributor Author

I have quite a few macros that generate types, and the names are incremental: RoutingContext1, RoutingContext2, etc.

These types are generated based on other types.

In short, there are types T1 and T2, and a macro that reads T1 and T2 and generates G1 and G2.
Since the counter is static, when T1 is changed a new type is generated as G3. At that point G1 is effectively useless. Maybe this caused the memory problem?
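
For illustration, here is a minimal sketch of that pattern. It is not the actual project code; only the RoutingContext naming is taken from above, everything else is hypothetical. A static counter in macro context survives across requests to the same haxe --wait process, so every call defines a fresh module that the server then keeps cached:

```haxe
import haxe.macro.Context;
import haxe.macro.Expr;

class RoutingMacro {
    // Static state lives as long as the haxe --wait process,
    // so the counter keeps growing across completion/compilation requests.
    static var counter = 0;

    public static macro function makeContext():Expr {
        var name = "RoutingContext" + (++counter);
        Context.defineType({
            pack: [],
            name: name,
            pos: Context.currentPos(),
            kind: TDClass(),
            fields: [{
                name: "new",
                access: [APublic],
                kind: FFun({args: [], expr: macro {}}),
                pos: Context.currentPos()
            }]
        });
        // Every call defines a brand-new module; the previously generated
        // RoutingContextN modules stay in the server cache even though
        // nothing references them anymore.
        return {expr: ENew({pack: [], name: name}, []), pos: Context.currentPos()};
    }
}
```

With a pattern like this, each change to the input types (T1, T2) makes the next request define yet another RoutingContextN, while the older ones are never evicted from the cache.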

@Simn
Member

Simn commented Oct 13, 2016

That indeed sounds like a user-made memory leak. Maybe we should do something like discarding modules that haven't been used for the last x (compilation) requests.

However, the compilation server should still not have any leaks, I think.
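
A rough illustration of that idea (this is not the compiler's actual cache code, which is written in OCaml; the names and the threshold below are made up): track the last request that touched each cached module, and evict modules that have been idle for more than N requests.

```haxe
// Hypothetical sketch of the suggested policy, not real compiler code.
class ModuleCache {
    static inline var MAX_IDLE_REQUESTS = 10;  // threshold is an assumption
    var modules = new Map<String, Dynamic>();  // module name -> cached data
    var lastUsed = new Map<String, Int>();     // module name -> last request id
    var requestCounter = 0;

    public function new() {}

    // Record that a module was (re)used by the current request.
    public function touch(name:String, data:Dynamic) {
        modules.set(name, data);
        lastUsed.set(name, requestCounter);
    }

    // Called once per compilation/completion request.
    public function endRequest() {
        requestCounter++;
        var stale = [];
        for (name in lastUsed.keys()) {
            var last = lastUsed.get(name);
            if (last != null && requestCounter - last > MAX_IDLE_REQUESTS)
                stale.push(name);
        }
        for (name in stale) {
            modules.remove(name);
            lastUsed.remove(name);
        }
    }
}
```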

@ncannasse
Member

There are definitely too many JsonContextXXX entries in this log :) Closing since it seems user-triggered.

@Simn
Member

Simn commented Oct 13, 2016

But there's a LEAK there too...

@kevinresol
Contributor Author

What is a leak actually?

@Simn
Member

Simn commented Oct 13, 2016

What is a leak actually?

Bad. Only Nicolas knows more.

@ncannasse
Member

What I see are hundreds of JsonParserXXX modules with different but almost identical sizes. It does not seem to be a leak, since the memory each module takes does not seem to increase with the count.

The question is whether we should fix that by discarding old unused modules (or even full contexts). I'll open a separate issue for this, but the reason for the memory growth here seems clear: generating many modules with unique names does not play well with the compilation cache server.

@back2dos
Member

I will investigate how much of this ballooning comes from my end, but I would remind you at the same time that "This does not seem to happen with 3.3.0-rc.1 but only on latest git version", so there must be more to it. Any hints as to what might have changed between those versions? I vaguely remember @nadako did some optimization to avoid creating new contexts.
