[SPARK-29965][core] Ensure that killed executors don't re-register with driver. #26630
Conversation
Test build #114255 has finished for PR 26630 at commit
Retest this please.
Test build #115345 has finished for PR 26630 at commit
@vanzin I see this is old but you just updated it; I assume no one got to it before and it's good to review?
Test build #117362 has finished for PR 26630 at commit
Retest this please.
Test build #118935 has finished for PR 26630 at commit
Retest this please.
Test build #120694 has finished for PR 26630 at commit
@dongjoon-hyun I would love to pick this back up and make sure it compiles/works with the most recent changes. Is that OK?
There are 3 different issues that cause the same underlying problem: an executor
that the driver has killed during downscaling registers back with the block
manager in the driver, and the block manager from that point on keeps trying
to contact the dead executor.
The first one is that the heartbeat receiver was asking unknown executors to
re-register when receiving a heartbeat. That code path only really happens
when the executor has been killed by the driver, so there's no reason to
ask it to re-register.
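As a rough illustration, here's a minimal sketch of that behavior; the names (`HeartbeatReceiverSketch`, `HeartbeatAck`, `UnknownExecutor`) are hypothetical stand-ins, not the actual Spark classes. The point is that a heartbeat from an executor the receiver doesn't know about is simply ignored rather than answered with a re-register request:

```scala
// Hypothetical sketch, not the actual Spark code.
import scala.collection.mutable

sealed trait HeartbeatResponse
case object HeartbeatAck extends HeartbeatResponse
case object UnknownExecutor extends HeartbeatResponse // was: a re-register request

class HeartbeatReceiverSketch {
  private val executorLastSeen = mutable.Map.empty[String, Long]

  def executorAdded(executorId: String): Unit =
    executorLastSeen(executorId) = System.currentTimeMillis()

  def executorRemoved(executorId: String): Unit =
    executorLastSeen -= executorId

  def receiveHeartbeat(executorId: String): HeartbeatResponse =
    if (executorLastSeen.contains(executorId)) {
      executorLastSeen(executorId) = System.currentTimeMillis()
      HeartbeatAck
    } else {
      // An unknown executor here is almost certainly one the driver killed
      // during downscaling; ignore it instead of asking it to re-register.
      UnknownExecutor
    }
}
```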
The second one is a race between the heartbeat receiver and the DAG scheduler.
Both received notifications of an executor's addition and removal
asynchronously (the first one via the listener bus and an async local RPC,
the second via its own separate internal message queue). This led to
situations where they disagreed about which executors were really alive; the
change makes it so the heartbeat receiver is updated first, and only then
does the DAG scheduler update itself. This ensures the heartbeat
receiver knows which executors not to ask to re-register.
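Building on the sketch above, the ordering fix can be illustrated like this (again with hypothetical names, not the actual Spark code): the heartbeat receiver is updated synchronously before the DAG scheduler is notified, so the two can never disagree about whether an executor is alive:

```scala
// Hypothetical sketch, not the actual Spark code; reuses
// HeartbeatReceiverSketch from the previous snippet.
class SchedulerBackendSketch(
    heartbeats: HeartbeatReceiverSketch,
    notifyDagScheduler: String => Unit) {

  def onExecutorAdded(executorId: String): Unit = {
    heartbeats.executorAdded(executorId)   // 1. heartbeat receiver, synchronously
    notifyDagScheduler(executorId)         // 2. only then the DAG scheduler
  }

  def onExecutorRemoved(executorId: String): Unit = {
    heartbeats.executorRemoved(executorId) // same ordering on removal
    notifyDagScheduler(executorId)
  }
}
```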
The third one is that the block manager couldn't differentiate between
an unknown executor (like one that's been removed) and an executor that needs
to re-register (like one the scheduler decided to unregister because of
too many fetch failures). The change adds code in the block manager master to
track which executors have been removed, so that instead of asking them to
re-register, it just ignores them.
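A minimal sketch of the idea, with hypothetical names rather than the actual block manager master code: remembering which executors were deliberately removed lets the master distinguish "removed, just ignore" from "unknown, ask to re-register":

```scala
// Hypothetical sketch, not the actual Spark code.
import scala.collection.mutable

class BlockManagerMasterSketch {
  private val registered = mutable.Set.empty[String]
  private val recentlyRemoved = mutable.Set.empty[String]

  def registerExecutor(executorId: String): Unit = {
    registered += executorId
    recentlyRemoved -= executorId
  }

  def removeExecutor(executorId: String): Unit = {
    registered -= executorId
    recentlyRemoved += executorId
  }

  /** Whether a heartbeating executor should be asked to re-register. */
  def shouldReregister(executorId: String): Boolean =
    if (registered.contains(executorId)) {
      false // known and alive: nothing to do
    } else if (recentlyRemoved.contains(executorId)) {
      false // killed by the driver: just ignore it
    } else {
      true  // truly unknown, e.g. unregistered after too many fetch
            // failures: this one does need to re-register
    }
}
```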
While there, I simplified the executor shutdown path a bit, since it was
doing some unnecessary work.
Tested with existing unit tests, and by repeatedly running workloads on k8s
with dynamic allocation; previously I'd hit these issues somewhat often, and
with the fixes I'm not able to reproduce them.