DefaultMessageListenerContainer#doShutdown hangs [SPR-11841] #16460

Closed
spring-issuemaster opened this issue Jun 2, 2014 · 4 comments

@spring-issuemaster commented Jun 2, 2014

Rüdiger Gründel opened SPR-11841 and commented

We have a situation where the method DefaultMessageListenerContainer#doShutdown hangs, caused by the call to this.lifecycleMonitor.wait(). I know the problem is similar to other issues in JIRA, but my question is the following.

It seems there can be a situation where lifecycleMonitor.notifyAll() is called before this.lifecycleMonitor.wait() in DefaultMessageListenerContainer#doShutdown is invoked. In that case, the wait will never return, which is what I believe I have observed. Wouldn't it be better to guard the lifecycleMonitor.wait() call with a flag that signals whether the wait still needs to happen? The flag could be set together with the notifyAll() call, so that it is guaranteed under all circumstances that the lifecycleMonitor.notifyAll() is noticed during shutdown.
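
To illustrate, the kind of guarded wait I have in mind looks roughly like the sketch below. This is only an illustration of the pattern, not the actual Spring source; the class, the shutdownNotified flag, and the method names are invented for the example.

```java
// Illustration only: names invented, not the actual Spring source.
public class GuardedShutdownExample {

    private final Object lifecycleMonitor = new Object();
    private boolean shutdownNotified = false; // hypothetical flag, set before notifyAll()

    // Called by whatever decides that shutdown may proceed.
    public void signalShutdown() {
        synchronized (this.lifecycleMonitor) {
            this.shutdownNotified = true;      // record the notification
            this.lifecycleMonitor.notifyAll();
        }
    }

    // Called from doShutdown().
    public void awaitShutdown() throws InterruptedException {
        synchronized (this.lifecycleMonitor) {
            // Re-checking the flag in a loop means a notifyAll() that fired
            // before we started waiting (or a spurious wakeup) cannot block
            // this method forever.
            while (!this.shutdownNotified) {
                this.lifecycleMonitor.wait();
            }
        }
    }
}
```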

We are using 3.1.1.RELEASE, but I saw the same implementation of DefaultMessageListenerContainer#doShutdown in 4.0.2.RELEASE.


Affects: 3.1.1

Issue Links:

  • #16409 DefaultMessageListenerContainer hangs on shutdown
  • #18774 DefaultMessageListenerContainer doesn't shutdown gracefully if long recovery interval is set

Referenced from: commits d398bb7

2 votes, 6 watchers

@spring-issuemaster commented Jun 11, 2014

Stéphane Nicoll commented

Can you provide a bit more detail about your situation? There's no deadlock in your dump; does it hang forever? Which JBoss version are you using? Can you provide a full log of the shutdown sequence? Thanks!

@spring-issuemaster commented Dec 18, 2014

Stéphane Nicoll commented

Ping? Without more information, I am afraid we won't be able to help.

@spring-issuemaster commented Apr 7, 2015

Igor E. Poteryaev commented

We have the same situation on shutdown of a Grails webapp (spring-jms 4.0.6). It happens about 5-10 times per month on our Jenkins build server.
A thread dump is attached.
It hangs forever (at least more than 24 hours).
We also see no deadlock in the thread dump.
Please check for the possibility of a race condition where the call to lifecycleMonitor.notifyAll() on shutdown is executed earlier than lifecycleMonitor.wait().

Thanks!

@spring-issuemaster commented Apr 16, 2015

Juergen Hoeller commented

Since we're always setting the activeInvokerCount within the same lock as the wait / notifyAll call, it's hard to see how they could be out of sync. And we're only calling wait if the active invoker count is still above 0... An extra flag to track whether notify will be called doesn't help here.

In any case, it doesn't hurt to specify a timeout for the wait call, and our existing receiveTimeout setting seems to be just fine for that - since that's how long an invoker will typically block before returning, in particular after the connection has been stopped on shutdown. That's in 4.2 now.
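
As a simplified sketch (not the literal framework source), the shutdown wait described above amounts to the following; the class and field names are abbreviated for illustration:

```java
// Simplified sketch of the shutdown wait described above (illustration only).
public class TimedShutdownWaitExample {

    private final Object lifecycleMonitor = new Object();
    private int activeInvokerCount;       // only ever updated while holding lifecycleMonitor
    private long receiveTimeout = 1000;   // roughly how long an invoker blocks in receive()

    protected void doShutdown() {
        synchronized (this.lifecycleMonitor) {
            // The count check and the notifyAll() below are guarded by the same
            // lock, so they cannot race with each other.
            while (this.activeInvokerCount > 0) {
                try {
                    // Waiting with a timeout means a missed or early notification
                    // only delays the next re-check instead of blocking forever.
                    this.lifecycleMonitor.wait(this.receiveTimeout);
                }
                catch (InterruptedException ex) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }
    }

    // Each invoker decrements the count and notifies under the same lock on exit.
    void invokerFinished() {
        synchronized (this.lifecycleMonitor) {
            this.activeInvokerCount--;
            this.lifecycleMonitor.notifyAll();
        }
    }
}
```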

If there's anything more we can do, please raise a concrete suggestion...

Juergen
