[SPARK-36842][Core] TaskSchedulerImpl - stop TaskResultGetter properly #34098
Conversation
cc @mridulm, @Ngone51 and @tgravescs FYI
The change looks good to me.
Sure, let me do it as well.
ok to test
Kubernetes integration test starting
Kubernetes integration test status failure
```diff
@@ -928,13 +928,17 @@ private[spark] class TaskSchedulerImpl(
   override def stop(): Unit = {
     speculationScheduler.shutdown()
     if (backend != null) {
-      backend.stop()
+      Utils.tryLogNonFatalError {
+        backend.stop()
+      }
```
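A minimal sketch of the guard used in the diff above. Spark's real `Utils.tryLogNonFatalError` logs through its logging framework; printing here is an assumption made only to keep the example standalone.

```scala
import scala.util.control.NonFatal

// Stand-in for Spark's Utils.tryLogNonFatalError: run the block and
// log (here: print) any non-fatal exception instead of rethrowing it.
def tryLogNonFatalError(block: => Unit): Unit = {
  try {
    block
  } catch {
    case NonFatal(t) => println(s"Uncaught exception during stop: $t")
  }
}

// With the guard, a failing stop() no longer aborts the caller's
// remaining cleanup:
var reachedCleanup = false
tryLogNonFatalError { throw new java.io.IOException("RPC endpoint already gone") }
reachedCleanup = true
```

Because `NonFatal` deliberately excludes errors like `OutOfMemoryError`, truly fatal failures still propagate.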
What's the exception you encountered here?
When deployed on YARN, org.apache.spark.scheduler.cluster.YarnSchedulerBackend#stop calls requestTotalExecutors() on stop. If the YARN application has already been killed, it receives an IOException when sending the RPC.
Checking more, what is the exception thrown in barrierCoordinator.stop? That should be defensive and should not have resulted in failures.
How about wrapping the others too?
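The "wrap the others too" suggestion can be sketched as below. The component names mirror `TaskSchedulerImpl`, but these are stand-in stubs written for this example, not Spark's real classes, and the print-based error handler is an assumption standing in for `Utils.tryLogNonFatalError`.

```scala
import scala.util.control.NonFatal

// Stand-in for Utils.tryLogNonFatalError (prints instead of logging).
def tryLogNonFatalError(block: => Unit): Unit =
  try block catch { case NonFatal(t) => println(s"Error during stop: $t") }

// Hypothetical stub for a stoppable scheduler component.
class Stub(name: String, failOnStop: Boolean) {
  var stopped = false
  def stop(): Unit = {
    if (failOnStop) throw new java.io.IOException(s"$name: endpoint already shut down")
    stopped = true
  }
}

val backend = new Stub("backend", failOnStop = true) // e.g. killed YARN app
val taskResultGetter = new Stub("taskResultGetter", failOnStop = false)
val barrierCoordinator = new Stub("barrierCoordinator", failOnStop = false)

// Guarding each stop() in isolation means one failure cannot prevent
// the remaining components from being stopped:
Seq(backend, taskResultGetter, barrierCoordinator).foreach { c =>
  tryLogNonFatalError { c.stop() }
}
```

Without the per-component guard, the `IOException` from `backend.stop()` would have ended the sequence before the other two components were stopped.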
Test build #143637 has finished for PR 34098 at commit
I've checked
Change looks fine to me; it would be nice to have the stack trace of the exception thrown.
We have the following, which can throw exceptions:
Note that
Thanks for digging more, @lxian!
Both of these below are related to
I agree with @lxian that that is not caught. Given the above, can we address the potential issue with Sink.close?
We're closing this PR because it hasn't been updated in a while. This isn't a judgement on the merit of the PR in any way. It's just a way of keeping the PR queue manageable. |
### What changes were proposed in this pull request?
Catch exceptions during TaskSchedulerImpl.stop() so that all components can be stopped properly.
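The failure mode this PR addresses can be sketched as follows. The names here are stand-ins for illustration (a plain thread pool playing the role of TaskResultGetter's executor, and a hypothetical `backendStop()` that fails the way a killed YARN backend does); this is not Spark's actual code.

```scala
import java.util.concurrent.Executors

// Stands in for the thread pool owned by TaskResultGetter.
val pool = Executors.newFixedThreadPool(2)

// Hypothetical stand-in for backend.stop() on an already-killed app.
def backendStop(): Unit =
  throw new java.io.IOException("application already killed")

try {
  backendStop()   // throws, so the next line never runs...
  pool.shutdown() // ...and the result getter's threads are never shut down
} catch {
  case e: java.io.IOException => println(s"stop() aborted: $e")
}

// The pool leaked: it was never asked to shut down.
val leaked = !pool.isShutdown

pool.shutdownNow() // clean up for this standalone example
```

Wrapping `backendStop()` in an error-logging guard, as the diff does, lets `pool.shutdown()` run regardless.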
### Why are the changes needed?
Otherwise some threads won't be stopped during a Spark session restart.
### Does this PR introduce any user-facing change?
No
### How was this patch tested?
It's tested by