[SPARK-48292][CORE] Revert [SPARK-39195][SQL] Spark OutputCommitCoordinator should abort stage when committed file not consistent with task status #46696
Conversation
ping @cloud-fan
// Regression test for SPARK-10381
val e = intercept[SparkException] {
  failAfter(Span(60, Seconds)) {
shall we still check the error?
It won't throw an error after the revert; the test can run successfully.
Will we hit a file-already-exists exception in this case?
Can we also revert #46562 in this PR?
+1, LGTM. Also, +1 to including the additional revert here.
cc @viirya
Looks good to me.
Done
GA passed, cc @cloud-fan
thanks, merging to master!
What changes were proposed in this pull request?
Revert #36564, per the discussion in #36564 (comment).
When Spark commits a task, it commits the task output to the committedTaskPath:
${outputpath}/_temporary//${appAttempId}/${taskId}
So in the case described in #36564: before #38980, each task attempt's job ID carried a different date, so when a task wrote its data successfully but failed to send the TaskSuccess RPC back to the driver, the re-run attempt committed to a different committedTaskPath, which caused duplicated data.
After #38980, the taskId is the same across different attempts of the same task, so a re-run attempt commits to the same committedTaskPath, and the Hadoop commit protocol handles that case, so the data is not duplicated.
Note: the taskAttemptPath still differs between attempts, because it contains the taskAttemptId.
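To make the path behaviour concrete, here is a minimal, self-contained Scala sketch of the layout described above, assuming the Hadoop FileOutputCommitter v1 style directory structure. The helper functions and the example task-id strings are illustrative assumptions for this sketch only; they are not Spark's or Hadoop's actual API.

```scala
// Sketch of the commit-path layout discussed in this PR.
// Names and task-id formats here are illustrative, not real Spark/Hadoop APIs.
object CommitPathSketch {
  // Committed task path: one per task, shared by all attempts of that task.
  def committedTaskPath(output: String, appAttemptId: Int, taskId: String): String =
    s"$output/_temporary/$appAttemptId/$taskId"

  // Task attempt path: unique per attempt, since it embeds the task attempt id.
  def taskAttemptPath(output: String, appAttemptId: Int, taskAttemptId: String): String =
    s"$output/_temporary/$appAttemptId/_temporary/$taskAttemptId"

  def main(args: Array[String]): Unit = {
    val output = "/data/out"

    // Before #38980: each attempt could derive its own job-id timestamp, so the
    // taskId (which embeds the job id) differed between attempts and a re-run
    // attempt committed to a *different* path -> duplicated output data.
    val firstAttempt = committedTaskPath(output, 0, "task_202405200001_0000_m_000002")
    val rerunAttempt = committedTaskPath(output, 0, "task_202405200002_0000_m_000002")
    assert(firstAttempt != rerunAttempt)

    // After #38980: the job id (and hence the taskId) is fixed once, so every
    // attempt of the same task commits to the same path and the committer can
    // simply replace the earlier attempt's committed output.
    val retryA = committedTaskPath(output, 0, "task_202405200001_0000_m_000002")
    val retryB = committedTaskPath(output, 0, "task_202405200001_0000_m_000002")
    assert(retryA == retryB)
  }
}
```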
Why are the changes needed?
The change being reverted is no longer needed.
Does this PR introduce any user-facing change?
No
How was this patch tested?
Existing UTs.
Was this patch authored or co-authored using generative AI tooling?
No