
WELD-2755 Avoid "weld-preloaders" thread group from being created repeatedly #2880

Closed · wants to merge 2 commits

Conversation

@TomasHofman (Author)

Hello @manovotn, can you check this out?

@manovotn (Contributor)

Hi @TomasHofman, I will take a look.
What reproducer did you use to test this? In other words, what's the easiest way to trigger the app reload in WFLY where this will show?

And last but not least, I suppose you need this fixed for Weld 5.1.x? The master branch is currently for 6.0; there is a separate one for 5.1.

@TomasHofman (Author)

Yes, I'm probing the territory with 6.0; eventually I want to have it backported to 5.1.

The app server reload is done just by executing :reload in JBoss CLI. You can check https://issues.redhat.com/browse/JBEAP-25588 where there is some specific tooling involved, but essentially it should be visible in any profiler.

@manovotn (Contributor)

> Yes, I'm probing the territory with 6.0; eventually I want to have it backported to 5.1.

There are no changes yet which would break it, but it's a preparation ground for changes coming in the next CDI (4.1).
Either way, there is no issue backporting it; I've already added the version to the JIRA.

> The app server reload is done just by executing :reload in JBoss CLI. You can check https://issues.redhat.com/browse/JBEAP-25588 where there is some specific tooling involved, but essentially it should be visible in any profiler.

Alright, thanks, will do.

@TomasHofman (Author)

@manovotn I found a couple of other cases that may need fixing; I will update the PR in a bit.

@TomasHofman (Author)

Done, I think it's ready for review. Testsuite passed locally.

@@ -44,4 +44,9 @@ public Thread newThread(Runnable r) {
thread.setDaemon(true);
return thread;
}

// Holder class to postpone thread group creation until it's needed.
(Contributor)

What's the gain here? I understand that you want a static reference to the group, but how does postponing the creation help? That thread group is referenced from AbstractExecutorServices and as such will be created anyway during any bootstrap. What am I missing? :)

(Author)

The thought was that the thread group is not going to be created until someone instantiates FixedThreadPoolExecutorServices or TimingOutFixedThreadPoolExecutorServices (edit: or another AbstractExecutorServices subclass). If it's a simple static field, the group is going to be created as soon as somebody merely references the DaemonThreadFactory class, without even instantiating it.

There is probably not going to be a lot of difference at the end of the day; I'm OK to update this however you prefer.
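For reference, the lazy initialization being described here is the standard initialization-on-demand holder idiom. The sketch below is illustrative only (the class and method names are made up for the example); it is not the actual Weld source:

```java
// Minimal sketch of the initialization-on-demand holder idiom discussed
// above. Class and method names are illustrative, not the Weld sources.
public final class LazyGroupExample {

    // The JVM initializes a nested class only on first use, so merely
    // referencing LazyGroupExample does not create the ThreadGroup;
    // it is created on the first call to threadGroup().
    private static final class Holder {
        static final ThreadGroup GROUP = new ThreadGroup("weld-preloaders");
    }

    public static ThreadGroup threadGroup() {
        return Holder.GROUP;
    }
}
```

Class initialization is guaranteed thread-safe by the JLS, so this lazy pattern needs no explicit synchronization.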

private final ExecutorService executor;
private final ObserverNotifier notifier;

public ContainerLifecycleEventPreloader(int threadPoolSize, ObserverNotifier notifier) {
this.executor = Executors.newFixedThreadPool(threadPoolSize,
-                new DaemonThreadFactory(new ThreadGroup("weld-preloaders"), "weld-preloader-"));
+                new DaemonThreadFactory(THREAD_GROUP, "weld-preloader-"));
(Member)

I think that in this case we should just avoid using the ThreadGroup completely. This ExecutorService is a short-lived construct and we don't need to group the threads at all.
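As a rough sketch of this suggestion, a thread factory can name its daemon threads without any explicit ThreadGroup; the new thread then simply inherits its creator's group. The names below are illustrative, not the final Weld change:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch only: a group-less daemon thread factory.
public final class NoGroupFactoryExample {

    public static ExecutorService newPreloaderPool(int threadPoolSize) {
        ThreadFactory factory = new ThreadFactory() {
            private final AtomicInteger counter = new AtomicInteger();

            @Override
            public Thread newThread(Runnable r) {
                // No explicit ThreadGroup: the thread joins the caller's group.
                Thread thread = new Thread(r, "weld-preloader-" + counter.incrementAndGet());
                thread.setDaemon(true);
                return thread;
            }
        };
        return Executors.newFixedThreadPool(threadPoolSize, factory);
    }
}
```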

@manovotn (Contributor)

After discussing this with @mkouba, maybe we should drop usage of ThreadGroup altogether from our executors.
We don't really use the groups for anything and it is an overhead.

Two notable remarks to this are:

  • If we make the field static, the same ThreadGroup is going to be referenced from several deployments
    • This means you could see threads from deployments other than your own
    • I am not sure if this could be a security issue but it definitely has a bad smell to it
  • Note that WFLY has its own implementation of Weld's executor services, see this class
    • Whatever we decide to do here, we might need to take similar action in the WFLY codebase

@manovotn (Contributor)

I'll get back to this on Mon and try to remove the usage of ThreadGroup altogether to see if that's possible and if there are any visible issues with that.

@TomasHofman (Author)

Removing the thread groups works for me too.

The issue behind this PR is a "memory leak" detected by QE, where we see some class instances accumulating with every app server reload. There are not only thread groups in there; I'm just trying to look at one thing after another and see whether they can be eliminated or not. If we find a problem with any of these proposed changes, I'm happy to discuss, and we may come to the conclusion that it has to stay the way it is. After all, the memory allocation related to thread groups looks tiny, so I don't see this as a must-have.

Regarding Wildfly, I also have a set of changes prepared for the Wildfly codebase (including WeldExecutorServices), but I haven't created the PR yet. If you want, we can wait for the Wildfly PR and then (maybe) merge this when we have some agreement there. So far we've only made a change in Infinispan (infinispan/infinispan#11277).

> This means you could see threads from deployments other than your own.

Aren't threads from various pools generally shared between deployments anyway (e.g. in undertow)? I assumed the Weld-created threads would not be deployment specific, but I defer to your knowledge here.

Thanks for all the feedback!

@manovotn (Contributor)

> Aren't threads from various pools generally shared between deployments anyway (e.g. in undertow)? I assumed the Weld-created threads would not be deployment specific, but I defer to your knowledge here.

I am not sure; my WFLY-fu isn't advanced enough for that :)
But at least WFLY's WeldExecutorServices seems to be shared across all Weld deployments on a given WFLY instance, so my previous remark is actually wrong and the shared thread group shouldn't be an issue.
That being said, it still seems redundant to have them there in the first place; I am just trying locally to see whether I can get rid of them in both Weld and WFLY (Weld's executor services there) and have the tests pass.

@manovotn (Contributor)

@TomasHofman here are some links:

Weld PR - #2881
WFLY PR - wildfly/wildfly#17193

Could you please double check that these work for you? Or rather, check that the QE tests that were used to detect it in the first place are fine with these changes.
Local runs look fine for me but it's always better to have a second pair of eyes.

So far I don't see any issue with this on the Weld side, but we don't run more complex tests such as those with a security manager (SM) - something I hope the WFLY CI would show.

@TomasHofman (Author)

Thanks @manovotn, that looks like it eliminates everything. I'm closing this PR then.

@manovotn (Contributor)

Alright, if the WFLY side looks good as well, I will issue a similar change for Weld 5.x.
Thanks for bringing this to my attention! :)

@TomasHofman (Author)

Thanks for your help, @manovotn!
