
The daemon is probably leaking memory #18

Closed · ppalaga opened this issue Feb 12, 2020 · 10 comments
Labels: wontfix (This will not be worked on)

Comments

ppalaga commented Feb 12, 2020

Steps to reproduce:

  • Check out Update mvnd.builder.rules camel-quarkus#705, which contains updated mvnd.builder.rules for Camel Quarkus
  • Build a couple of times using mvnd clean install -DskipTests (see the sketch below)
  • IIRC, after the fourth build or so, the build became slow, taking 7+ minutes to complete (the first one took ~45 sec)
  • Repeating the build even more times led to an OOME in the Quarkus plugin.

Workaround: kill the daemon when it gets slow.
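
A minimal repro loop, as a sketch (run from the camel-quarkus checkout; the iteration count is illustrative):

# Build repeatedly; on affected versions the wall-clock time climbs after a few runs.
for i in 1 2 3 4 5 6; do
  time mvnd clean install -DskipTests
done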

ppalaga commented Feb 14, 2020

Attaching jconsole to the daemon shows that the number of loaded classes goes up after each build:

[screenshot: jconsole chart of loaded classes climbing across builds]

The picture shows 3 of 4 consecutive mvnd invocations.

So the working hypothesis could be that we are leaking class loaders.
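
The same number can be watched without a GUI, as a sketch (assumes a JDK's jps/jstat on the PATH; the grep pattern is illustrative):

# Print loaded/unloaded class counts for the daemon every 5 seconds.
jstat -class $(jps -l | grep -i mvnd | awk '{print $1}') 5000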

gnodet commented Feb 14, 2020

Could you try configuring the daemon with the options needed to generate a memory dump when it hits the OOM error, and attach the generated file here? That would definitely help diagnose the issue.

Currently, there's no easy way to pass options to the daemon, so the best approach is to change the Client code locally to add the required options, then use

mvnd --stop
MAVEN_OPTS="-Ddaemon.debug=true" mvnd ...
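
For reference, a sketch following the same pattern, assuming MAVEN_OPTS is forwarded to the daemon JVM as in the example above (the HotSpot flags themselves are standard):

mvnd --stop
# Dump the heap to the given path when the daemon throws an OutOfMemoryError.
MAVEN_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/mvnd-daemon.hprof" mvnd clean install -DskipTests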

ppalaga commented Oct 22, 2020

This is not as bad as it used to be. After quarkusio/quarkus#12838, I can build Camel Quarkus several times in a row, the UI still runs smoothly, and the OOM does not occur. A small increase in memory use between runs still happens, though.

jglick commented Nov 3, 2020

Should the daemon not automatically exit after, say, an hour of inactivity?

ppalaga commented Nov 4, 2020

Should the daemon not automatically exit after, say, an hour of inactivity?

(Please create a new issue for unrelated questions next time.) We have quite a complex expiration strategy in place, ported from Gradle: https://github.com/mvndaemon/mvnd/blob/master/daemon/src/main/java/org/jboss/fuse/mvnd/daemon/DaemonExpiration.java#L51-L58. What you ask about is there as idleTimeout(Server::getIdleTimeout). The default timeout is 3 hours, and it can be customized in ~/.m2/mvnd.properties using the daemon.idleTimeoutMs property; see the example below.
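
For instance, an ~/.m2/mvnd.properties entry lowering the idle timeout to one hour would look like this sketch (the value is in milliseconds, so 1 h = 3600000 ms):

# ~/.m2/mvnd.properties — expire the daemon after 1 hour of inactivity.
daemon.idleTimeoutMs=3600000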

gnodet commented Nov 4, 2020

@jglick fwiw, I'm implementing the JVM memory checks.

gnodet commented Nov 17, 2020

I'm closing this issue, as we have expiration checks in place and I don't think there are any memory leaks at this point.

gnodet closed this as completed Nov 17, 2020

ppalaga commented Nov 17, 2020

I am observing +25 MB of heap on each Camel Quarkus build. That's not much, and it does not necessarily have to be mvnd's fault: plugins can also leak memory, as seen with the Quarkus plugin in quarkusio/quarkus#12838.

I am fine with closing this one. We can open a new one if we gather enough new relevant evidence.

gnodet commented Nov 17, 2020

I am observing +25 MB of heap on each Camel Quarkus build. That's not much, and it does not necessarily have to be mvnd's fault: plugins can also leak memory, as seen with the Quarkus plugin in quarkusio/quarkus#12838.

I am fine with closing this one. We can open a new one if we gather enough new relevant evidence.

Isn't this space GC'ed at some point?

ppalaga commented Nov 17, 2020

Isn't this space GC'ed at some point?

The +25 MB is still there after manually running a full GC.
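
One way to check this from the command line, as a sketch (assumes a JDK's jps/jcmd on the PATH; the PID lookup is illustrative):

# Find the daemon's PID, force a full GC, then print live heap usage.
jps -l | grep -i mvnd
jcmd <pid> GC.run
jcmd <pid> GC.heap_info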

ppalaga added this to the No fix/won't fix milestone Nov 17, 2020
gnodet added the wontfix label (This will not be worked on) Dec 7, 2021