[SPARK-29036][SQL] SparkThriftServer cancel job after execute() thread interrupted

### What changes were proposed in this pull request?

Discussed in #25611. If `cancel()` and `close()` are called very quickly after the query is started, they may both call `cleanup()` before any Spark jobs have been started, so `sqlContext.sparkContext.cancelJobGroup(statementId)` does nothing. The execute thread can then start the jobs, and only afterwards get interrupted and exit through the catch block. At that point nobody cancels those jobs, and they keep running even though the execution has already exited.

So when `execute()` is interrupted by `cancel()` and control reaches the catch block, we should call `cancelJobGroup` again to make sure the jobs are actually cancelled.

### Why are the changes needed?

### Does this PR introduce any user-facing change?

NO

### How was this patch tested?

MT

Closes #25743 from AngersZhuuuu/SPARK-29036.

Authored-by: angerszhu <angers.zhu@gmail.com>
Signed-off-by: Yuming Wang <wgyumg@gmail.com>
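The race described above can be sketched as follows. This is a hypothetical, self-contained stand-in (not the actual Spark code): `runningGroups`, `cancelJobGroup`, and `execute` are simplified placeholders for SparkContext's job-group bookkeeping and the thrift server's execute thread. It shows why cancelling again inside the catch block is needed when the first `cancelJobGroup` ran before any job existed.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class CancelOnInterruptSketch {
    // Minimal stand-in for SparkContext's job-group bookkeeping (hypothetical).
    static final Set<String> runningGroups = ConcurrentHashMap.newKeySet();

    static void cancelJobGroup(String statementId) {
        runningGroups.remove(statementId);   // no-op if jobs were not started yet
    }

    static void execute(String statementId) {
        try {
            runningGroups.add(statementId);  // jobs may start *after* cancel() ran
            if (Thread.interrupted()) {      // the thread was interrupted by cancel()
                throw new InterruptedException();
            }
            // ... run the query to completion ...
        } catch (InterruptedException e) {
            // The fix from this PR: cancel again in the catch block, because
            // cleanup() may have called cancelJobGroup() before any job existed.
            cancelJobGroup(statementId);
        }
    }

    public static void main(String[] args) {
        Thread.currentThread().interrupt();  // simulate cancel() interrupting the thread
        execute("stmt-1");
        // Without the catch-block cancel, "stmt-1" would still be "running" here.
        System.out.println(runningGroups.contains("stmt-1") ? "leaked" : "cancelled");
    }
}
```

Without the extra `cancelJobGroup` call in the catch block, the jobs started between the first cancel and the interrupt would never be cleaned up.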