Fix Timer resource teardown #16289
Conversation
libs-scala/resources/src/main/scala/com/digitalasset/resources/TimerResourceOwner.scala (two resolved review threads)
After this PR, the TimerResourceOwner will make sure that none of the scheduled tasks are still running once the Resource has finished releasing. Also adds an asynchronous test as evidence. [CHANGELOG_BEGIN] [CHANGELOG_END]
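For context, here is a minimal sketch of one way such a guarantee can be implemented (an illustration, not necessarily the PR's actual code): java.util.Timer runs all tasks sequentially on a single thread, so awaiting a zero-delay sentinel task proves that whatever task was running when release started has finished.

```scala
import java.util.{Timer, TimerTask}
import scala.concurrent.{Await, Promise}
import scala.concurrent.duration._

// Illustrative sketch, not the PR's code: a zero-delay sentinel task can only
// run once the currently executing task (if any) has finished, because the
// timer thread executes tasks one at a time. Awaiting the sentinel therefore
// guarantees that no scheduled task is still running after release.
def releaseTimer(timer: Timer): Unit = {
  val drained = Promise[Unit]()
  timer.schedule(
    new TimerTask {
      override def run(): Unit = {
        timer.cancel() // allowed from within a timer task; discards pending tasks
        drained.trySuccess(())
      }
    },
    0L,
  )
  Await.result(drained.future, 10.seconds)
}
```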
Force-pushed from 5c593f2 to a40c59c
info("And as waiting 100 millis") | ||
released.isCompleted shouldBe false | ||
info("The release should not be completed yet") | ||
val extraTimerTaskStarted = Promise[Unit]() |
I think this could just be an AtomicBoolean, since you only check whether the future has completed.
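A sketch of the suggested simplification (hypothetical; assumes the test only ever checks whether the extra task has started):

```scala
import java.util.concurrent.atomic.AtomicBoolean

// Hypothetical variant of the test state: a plain flag instead of a Promise,
// since the assertion only needs to know whether the task has started.
val extraTimerTaskStarted = new AtomicBoolean(false)

// inside the scheduled task, instead of completing the promise:
//   extraTimerTaskStarted.set(true)

// in the assertion, instead of checking promise.future.isCompleted:
//   extraTimerTaskStarted.get() shouldBe false
```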
timer.schedule(
  new TimerTask {
    override def run(): Unit = {
      Await.result(finishLongTakingTask.future, Duration(10, "seconds"))
    }
  },
  0L,
)
nitpick: the whole timer.schedule call could be extracted into a small def and reused 3 more times :) not a big concern though
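Something along the lines of the reviewer's suggestion (a sketch; scheduleNow is a made-up name):

```scala
import java.util.{Timer, TimerTask}

// Hypothetical helper extracting the repeated scheduling boilerplate;
// the by-name parameter lets each call site pass just the task body.
def scheduleNow(timer: Timer)(body: => Unit): Unit =
  timer.schedule(
    new TimerTask {
      override def run(): Unit = body
    },
    0L,
  )
```

The snippet above would then collapse to scheduleNow(timer)(Await.result(finishLongTakingTask.future, Duration(10, "seconds"))).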
As the system is going down and we reach the DbDispatcher, we first tear down the executor pool, then the timer, then the Hikari connection pool.

Before: after the timer had gone down, a scheduled database check could still be running, racing with the Hikari teardown. This could result in warnings in the logs pointing to some connection error.

After: no such warnings. Hikari only goes down after the timer has been released and all timer tasks have finished.
[CHANGELOG_BEGIN]
[CHANGELOG_END]
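The intended ordering can be pictured with a toy model of resource-owner composition (this is not the actual daml resources API, and the owner names are illustrative): acquisition runs top-down, release runs bottom-up, so acquiring the Hikari pool first means it is released last, after the timer and all of its tasks.

```scala
// Toy model, not the daml resources API: flatMap releases the inner
// resource first, then the outer one, i.e. release runs in reverse
// acquisition order.
object TeardownOrderSketch extends App {
  final case class Owner[+A](acquire: () => (A, () => Unit)) {
    def flatMap[B](f: A => Owner[B]): Owner[B] = Owner { () =>
      val (a, releaseA) = acquire()
      val (b, releaseB) = f(a).acquire()
      (b, () => { releaseB(); releaseA() }) // inner released before outer
    }
    def map[B](f: A => B): Owner[B] =
      flatMap(a => Owner(() => (f(a), () => ())))
  }

  def owner(name: String): Owner[String] =
    Owner(() => { println(s"acquire $name"); (name, () => println(s"release $name")) })

  val dispatcher = for {
    pool     <- owner("hikari-pool") // acquired first, released last
    timer    <- owner("timer")       // released after the executor
    executor <- owner("executor")    // acquired last, released first
  } yield (pool, timer, executor)

  val (_, releaseAll) = dispatcher.acquire()
  releaseAll() // prints: release executor, release timer, release hikari-pool
}
```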