[SPARK-37355][CORE] Avoid Block Manager registrations when Executor is shutting down #34629
Conversation
Can one of the admins verify this patch?
Hi @Ngone51, could you help me review this PR?
master.registerBlockManager(blockManagerId, diskBlockManager.localDirsString, maxOnHeapMemory,
  maxOffHeapMemory, storageEndpoint)
reportAllBlocks()
SparkContext.getActive.map { context =>
@wankunde The problem here is it's a race condition issue - the reregistration request could be sent before the executor is stopped by the driver. So this kind of protection can't resolve the issue thoroughly.
(BTW, I think we only have SparkContext on the driver side.)
The problem with #34536 now is that it can't handle the case you mentioned there. The reason the fix can't handle it is that HeartbeatReceiver doesn't know about the existence of the BlockManager in that case. So I think we can let HeartbeatReceiver implement onBlockManagerAdded to listen for the registration of the BlockManager, so that HeartbeatReceiver knows about the BlockManager in that case too.
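To illustrate the suggestion, here is a minimal self-contained sketch (not the actual Spark change). The event name onBlockManagerAdded and the SparkListenerBlockManagerAdded event mirror Spark's real SparkListener API, but the stub types and the HeartbeatReceiverSketch bookkeeping below are purely illustrative assumptions:

```scala
// Illustrative sketch only: stub types stand in for Spark's real
// SparkListener / SparkListenerBlockManagerAdded classes.
case class BlockManagerId(executorId: String, host: String, port: Int)
case class SparkListenerBlockManagerAdded(
    time: Long, blockManagerId: BlockManagerId, maxMem: Long)

trait SparkListener {
  def onBlockManagerAdded(event: SparkListenerBlockManagerAdded): Unit = {}
}

// Hypothetical HeartbeatReceiver-like class that records which executors
// have a registered BlockManager, so the receiver also "knows" a
// BlockManager whose executor never managed to send a heartbeat.
class HeartbeatReceiverSketch extends SparkListener {
  private val executorsWithBlockManager =
    scala.collection.mutable.Set.empty[String]

  override def onBlockManagerAdded(
      event: SparkListenerBlockManagerAdded): Unit = {
    executorsWithBlockManager += event.blockManagerId.executorId
  }

  def knowsBlockManagerOf(executorId: String): Boolean =
    executorsWithBlockManager.contains(executorId)
}
```

With this kind of bookkeeping, the receiver could reject a late re-registration from an executor the driver has already removed, which is the gap the comment above describes.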
@Ngone51 Thanks for your review. It's wrong to use SparkContext to judge whether the BlockManager is stopping; maybe we can use SparkEnv instead.
master.registerBlockManager(blockManagerId, diskBlockManager.localDirsString, maxOnHeapMemory,
  maxOffHeapMemory, storageEndpoint)
reportAllBlocks()
if (!SparkEnv.get.isStopped) {
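For context, the proposed guard could look roughly like the following self-contained sketch. The stub types (SparkEnvStub, BlockManagerSketch, registerWithMaster) are hypothetical stand-ins for Spark's real classes, not the actual BlockManager code:

```scala
// Sketch of the proposed guard: skip re-registration once the local
// SparkEnv has been stopped. Stub types replace Spark's real classes.
class SparkEnvStub(@volatile var isStopped: Boolean)

class BlockManagerSketch(env: SparkEnvStub) {
  var registered = false

  // Stand-in for master.registerBlockManager(...) + reportAllBlocks().
  private def registerWithMaster(): Unit = { registered = true }

  // Mirrors the PR's change: only reregister while the executor's
  // SparkEnv is still alive.
  def reregister(): Unit = {
    if (!env.isStopped) {
      registerWithMaster()
    }
  }
}
```

Note that this check is best-effort: isStopped can still flip to true right after the check but before the registration message reaches the driver, which is exactly the race described in the next comment.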
Hmm... so, let's assume such a scenario: the driver sends a shutdown request to the executor. Before the executor receives the shutdown request, it finds that it has been removed from the driver while reporting a block to the driver. Then the executor starts to reregister itself. Since the executor hasn't received the shutdown request yet, which means SparkEnv.get.isStopped == false, it successfully reregisters with the driver. Soon after, the executor receives the shutdown request and exits.
So, this change doesn't fix the issue thoroughly, right? @wankunde
@Ngone51 Thanks for your review.
Yes, this PR cannot fix the issue above, but I still think adding the !SparkEnv.get.isStopped constraint is helpful, as I have found several executors re-registering while they are being shut down by the driver.
I agree with fixing this issue in HeartbeatReceiver, and this PR can be closed.
You can either update in this PR or in a separate PR as you like.
I have updated the PR description.
Sorry, I meant you can update the fix with HeartbeatReceiver in this PR.
Hi @wankunde, do you want to proceed with the
We're closing this PR because it hasn't been updated in a while. This isn't a judgement on the merit of the PR in any way. It's just a way of keeping the PR queue manageable.
What changes were proposed in this pull request?
Avoid BlockManager registrations when the executor is shutting down.
Why are the changes needed?
The BlockManager should not re-register if the executor is being shut down by the driver.
Does this PR introduce any user-facing change?
No
How was this patch tested?
Existing tests.