[SPARK-13931] Stage can hang if an executor fails while speculated tasks are running #16855
Conversation
Jenkins, this is OK to test.
Can you make the PR and JIRA description something more specific? Maybe "[SPARK-13931] Stage can hang if an executor fails while speculated tasks are running"
@@ -874,7 +874,8 @@ private[spark] class TaskSetManager(
     // and we are not using an external shuffle server which could serve the shuffle outputs.
     // The reason is the next stage wouldn't be able to fetch the data from this dead executor
     // so we would need to rerun these tasks on other executors.
-    if (tasks(0).isInstanceOf[ShuffleMapTask] && !env.blockManager.externalShuffleServiceEnabled) {
+    if (tasks(0).isInstanceOf[ShuffleMapTask] && !env.blockManager.externalShuffleServiceEnabled
+      && !isZombie) {
nit: indentation (add two spaces)
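For reference, the wrapped condition would then line up roughly like this (a sketch of the continuation line only, with the indentation the nit asks for; not the exact diff):

if (tasks(0).isInstanceOf[ShuffleMapTask] && !env.blockManager.externalShuffleServiceEnabled
    && !isZombie) {
  // re-enqueue the lost executor's shuffle map output (unchanged logic)
}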
Also I'm concerned that we might need some of the functionality below even when the TSM is a zombie. While the TSM shouldn't tell the DAGScheduler that the task was resubmitted, I think it does need to notify the DAGScheduler that tasks on the executor are finished (otherwise they'll never be marked as finished in the UI, for example), and I also think it needs to properly update the running copies of the task.
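For context, a rough, simplified sketch of how TaskSetManager.executorLost is structured with the proposed guard (abridged and paraphrased from the Spark 2.x code path under discussion, so details and error handling are omitted; this is not the exact source): only the Resubmitted notification is skipped for a zombie TSM, while the loop that fails still-running copies on the lost executor, and with it the DAGScheduler/UI updates and copiesRunning bookkeeping, still runs.

// Simplified sketch (not the exact Spark source) of TaskSetManager.executorLost
// with the proposed !isZombie guard.
override def executorLost(execId: String, host: String, reason: ExecutorLossReason): Unit = {
  // Only an active (non-zombie) TSM re-enqueues finished shuffle map output from the dead
  // executor and tells the DAGScheduler the tasks were Resubmitted; a zombie TSM will never
  // launch them again, so reporting Resubmitted would leave the stage pending forever.
  if (tasks(0).isInstanceOf[ShuffleMapTask] && !env.blockManager.externalShuffleServiceEnabled
      && !isZombie) {
    for ((tid, info) <- taskInfos if info.executorId == execId) {
      val index = info.index
      if (successful(index)) {
        successful(index) = false
        copiesRunning(index) -= 1
        tasksSuccessful -= 1
        addPendingTask(index)
        sched.dagScheduler.taskEnded(tasks(index), Resubmitted, null, Seq.empty, info)
      }
    }
  }
  // This part still runs even when the TSM is a zombie: running copies on the lost executor
  // are marked as failed, which updates copiesRunning and notifies the DAGScheduler (and the
  // UI) that those task attempts ended.
  for ((tid, info) <- taskInfos if info.running && info.executorId == execId) {
    handleFailedTask(tid, TaskState.FAILED,
      ExecutorLostFailure(info.executorId, exitCausedByApp = true, Some(reason.toString)))
  }
  recomputeLocality()
}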
@kayousterhout I think that when the TSM is a zombie, resubmitted tasks won't be offered or executed, so there's no need to notify the DAGScheduler that tasks are finished.
}
sched.setDAGScheduler(dagScheduler)

val tasks = Array.tabulate[Task[_]](1) { i =>
do you need Array.tabulate here, given that you're only creating one task / task location?
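For illustration, the single task could be built directly instead of via Array.tabulate; a sketch along the lines of what the updated test later in this thread ends up doing:

val singleTask = new ShuffleMapTask(0, 0, null, new Partition {
  override def index: Int = 0
}, Seq(TaskLocation("host1", "execA")), new Properties, null)
val tasks = Array[Task[_]](singleTask)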
}
val taskSet = new TaskSet(tasks, 0, 0, 0, null)
val manager = new TaskSetManager(sched, taskSet, MAX_TASK_FAILURES)
manager.speculatableTasks += tasks.head.partitionId
can you add a comment here about what's going on? I think it would be more clear if you moved this line below task 1. Then before task 1, add a comment saying "Offer host1, which should be accepted as a PROCESS_LOCAL location by the one task in the task set". Then, before this speculatableTasks line, add something like "Mark the task as available for speculation, and then offer another resource, which should be used to launch a speculative copy of the task."
assert(manager.runningTasks == 2)
assert(manager.isZombie == false)

val directTaskResult = new DirectTaskResult[String](null, Seq()) {
here, can you add a comment with something like "Complete one copy of the task, which should result in the task set manager being marked as a zombie, because at least one copy of its only task has completed."
val task1 = manager.resourceOffer("execA", "host1", TaskLocality.PROCESS_LOCAL).get
val task2 = manager.resourceOffer("execB", "host2", TaskLocality.ANY).get

assert(manager.runningTasks == 2)
can you use triple equals here and below? That way ScalaTest will print out the expected and actual values automatically.
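For example, using the same assertions (with ===, a failing assertion reports both sides, e.g. "1 did not equal 2", instead of only a generic assertion error):

assert(manager.runningTasks === 2)
assert(manager.isZombie === false)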
// count for Resubmitted tasks
var resubmittedTasks = 0
val dagScheduler = new FakeDAGScheduler(sc, sched) {
Rather than defining your own DAGScheduler, can you use the existing FakeDAGScheduler, and then use the FakeTaskScheduler to make sure that the task was recorded as ended for the correct reason (i.e., not because it was resubmitted)?
@kayousterhout If I use the existing FakeDAGScheduler, I'll have to remove the variable 'resubmittedTasks', and then I can't make this test fail before my code change.
Does it not work to check that the TaskEndReason was Success (and not Resubmitted), as is done here: https://github.com/GavinGavinNo1/spark/blob/24d8d795d26a5b1477cac01f2748c25fb9b74dc5/core/src/test/scala/org/apache/spark/scheduler/TaskSetManagerSuite.scala#L226 ?
I still don't understand. I'm confused about how to construct a test case that fails before the code change, if I modify the test as below.
test("taskSetManager should not send Resubmitted tasks after being a zombie") {
// Regression test for SPARK-13931
val conf = new SparkConf().set("spark.speculation", "true")
sc = new SparkContext("local", "test", conf)
val sched = new FakeTaskScheduler(sc, ("execA", "host1"), ("execB", "host2"))
sched.initialize(new FakeSchedulerBackend() {
override def killTask(taskId: Long, executorId: String, interruptThread: Boolean): Unit = {}
})
val dagScheduler = new FakeDAGScheduler(sc, sched)
sched.setDAGScheduler(dagScheduler)
val singleTask = new ShuffleMapTask(0, 0, null, new Partition {
override def index: Int = 0
}, Seq(TaskLocation("host1", "execA")), new Properties, null)
val taskSet = new TaskSet(Array(singleTask), 0, 0, 0, null)
val manager = new TaskSetManager(sched, taskSet, MAX_TASK_FAILURES)
// Offer host1, which should be accepted as a PROCESS_LOCAL location
// by the one task in the task set
val task1 = manager.resourceOffer("execA", "host1", TaskLocality.PROCESS_LOCAL).get
// Mark the task as available for speculation, and then offer another resource,
// which should be used to launch a speculative copy of the task.
manager.speculatableTasks += singleTask.partitionId
val task2 = manager.resourceOffer("execB", "host2", TaskLocality.ANY).get
assert(manager.runningTasks === 2)
assert(manager.isZombie === false)
val directTaskResult = new DirectTaskResult[String](null, Seq()) {
override def value(resultSer: SerializerInstance): String = ""
}
// Complete one copy of the task, which should result in the task set manager
// being marked as a zombie, because at least one copy of its only task has completed.
manager.handleSuccessfulTask(task1.taskId, directTaskResult)
assert(manager.isZombie === true)
assert(sched.endedTasks(0) === Success)
assert(manager.runningTasks === 1)
manager.executorLost("execB", "host2", new SlaveLost())
assert(manager.runningTasks === 0)
assert(sched.endedTasks(0).isInstanceOf[ExecutorLostFailure])
}
I see. I played around with this a bit, and the problem is that the TaskSetManager also sends an ExecutorLost task failure for the task that gets resubmitted, so that failure overrides the saved Resubmitted task end reason. It's fine to leave the existing test, but can you just add a comment that says something like "Keep track of the number of tasks that are resubmitted, so that the test can check that no tasks were resubmitted."
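Concretely, keeping the counter with the suggested comment might look something like this (a sketch assembled from the fragments quoted in this thread; the super call and exact formatting are assumptions, not necessarily the author's final code):

// Keep track of the number of tasks that are resubmitted,
// so that the test can check that no tasks were resubmitted.
var resubmittedTasks = 0
val dagScheduler = new FakeDAGScheduler(sc, sched) {
  override def taskEnded(task: Task[_], reason: TaskEndReason, result: Any,
      accumUpdates: Seq[AccumulatorV2[_, _]], taskInfo: TaskInfo): Unit = {
    super.taskEnded(task, reason, result, accumUpdates, taskInfo)
    reason match {
      case Resubmitted => resubmittedTasks += 1
      case _ =>
    }
  }
}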
}
manager.handleSuccessfulTask(task1.taskId, directTaskResult)
assert(manager.isZombie == true)
assert(resubmittedTasks == 0)
can you check that manager.runningTasks is 1 here, and 0 below?
@@ -664,6 +665,63 @@ class TaskSetManagerSuite extends SparkFunSuite with LocalSparkContext with Logg
  assert(thrown2.getMessage().contains("bigger than spark.driver.maxResultSize"))
}

test("taskSetManager should not send Resubmitted tasks after being a zombie") {
can you make the description here "[SPARK-13931] taskSetManager ..." (and then eliminate the comment below)
// count for Resubmitted tasks
var resubmittedTasks = 0
val dagScheduler = new FakeDAGScheduler(sc, sched) {
@kayousterhout OK, I have updated it.
One last tiny style comment -- then this looks good
var resubmittedTasks = 0
val dagScheduler = new FakeDAGScheduler(sc, sched) {
  override def taskEnded(task: Task[_], reason: TaskEndReason, result: Any,
    accumUpdates: Seq[AccumulatorV2[_, _]], taskInfo: TaskInfo): Unit = {
Can you fix the indentation here? (the wrapped parameters should be indented 4 extra spaces from "override")
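i.e., roughly like this (wrapped parameters indented four extra spaces from "override"):

override def taskEnded(task: Task[_], reason: TaskEndReason, result: Any,
    accumUpdates: Seq[AccumulatorV2[_, _]], taskInfo: TaskInfo): Unit = {
  // ...
}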
LGTM, thanks for fixing this! I've merged this into master.
@GavinGavinNo1 are you ZhengYaofeng on JIRA? I want to correctly give you credit on JIRA for fixing this.
What changes were proposed in this pull request?
When 'executorLost' is invoked on a 'TaskSetManager', it is important to check whether the variable 'isZombie' is set to true.
This pull request fixes the following hang (a sketch of the resulting guard follows the list):
1. Speculation is enabled for the application.
2. The app runs, and the last task of shuffleMapStage 1 finishes. To be precise: from the DAGScheduler's point of view the stage has finished, and in the TaskSetManager the variable 'isZombie' is set to true, but runningTasksSet is not empty because of speculative copies.
3. Suddenly, executor 3 is lost. The TaskScheduler, receiving this signal, invokes the executorLost functions of all of rootPool's taskSetManagers. The DAGScheduler, receiving this signal, removes all of this executor's outputLocs.
4. The TaskSetManager adds all of this executor's tasks to pendingTasks and tells the DAGScheduler they will be resubmitted (note: possibly not immediately).
5. The DAGScheduler starts to submit a new waiting stage, say shuffleMapStage 2, and finds that shuffleMapStage 1 is a missing parent because some outputLocs were removed due to the lost executor. The DAGScheduler then submits shuffleMapStage 1 again.
6. The DAGScheduler still receives Task 'Resubmitted' signals from the old taskSetManager and increases the number of pendingTasks of shuffleMapStage 1 each time. However, the old taskSetManager won't launch any new task because its variable 'isZombie' is set to true.
7. As a result, shuffleMapStage 1 never finishes in the DAGScheduler, and neither do any stages that depend on it.
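In code, the fix amounts to guarding the resubmission block in TaskSetManager.executorLost so that a zombie TaskSetManager stops emitting 'Resubmitted' events (breaking the loop in steps 4 and 6 above). A condensed sketch of the diff shown at the top of this conversation, not the full method:

if (tasks(0).isInstanceOf[ShuffleMapTask] && !env.blockManager.externalShuffleServiceEnabled
    && !isZombie) {
  // re-add the dead executor's finished shuffle map tasks to pendingTasks and report them
  // to the DAGScheduler as Resubmitted; skipped entirely once the TSM is a zombie
}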
How was this patch tested?
It's quite difficult to construct test cases.