
[SPARK-37355][CORE]Avoid Block Manager registrations when Executor is shutting down #34629

Closed
wants to merge 4 commits

Conversation

Contributor

@wankunde wankunde commented Nov 17, 2021

What changes were proposed in this pull request?

Avoid BlockManager registrations when the executor is shutting down.

Why are the changes needed?

The block manager should not re-register if the executor is being shut down by the driver.

Does this PR introduce any user-facing change?

No

How was this patch tested?

Existing tests.
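For reference, here is a minimal sketch of the kind of guard discussed in this PR, based on the snippets quoted in the review thread below; the exact placement inside BlockManager.reregister may differ in the actual patch.

```scala
// Sketch only: skip re-registration once this executor's SparkEnv has been
// stopped, i.e. the driver has already told the executor to shut down.
def reregister(): Unit = {
  if (!SparkEnv.get.isStopped) {
    master.registerBlockManager(blockManagerId, diskBlockManager.localDirsString,
      maxOnHeapMemory, maxOffHeapMemory, storageEndpoint)
    reportAllBlocks()
  }
}
```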

@github-actions github-actions bot added the CORE label Nov 17, 2021
@wankunde wankunde changed the title [SPARK-37355]Avoid Block Manager registrations when Executor is shutt… [SPARK-37355][CORE]Avoid Block Manager registrations when Executor is shutt… Nov 17, 2021
@wankunde wankunde changed the title [SPARK-37355][CORE]Avoid Block Manager registrations when Executor is shutt… [SPARK-37355][CORE]Avoid Block Manager registrations when Executor is shutting down Nov 17, 2021
@AmplabJenkins

Can one of the admins verify this patch?

@wankunde
Contributor Author

Hi @Ngone51, could you help review this PR?

master.registerBlockManager(blockManagerId, diskBlockManager.localDirsString, maxOnHeapMemory,
  maxOffHeapMemory, storageEndpoint)
reportAllBlocks()
SparkContext.getActive.map { context =>
Member

@wankunde The problem here is that it's a race condition: the re-registration request could be sent before the executor is stopped by the driver, so this kind of protection can't resolve the issue thoroughly.

(BTW, I think we only have SparkContext on the driver side.)

The problem with #34536 now is that we can't handle the case you mentioned there. The reason that fix can't handle it is that HeartbeatReceiver doesn't know about the existence of the BlockManager in that case. So I think we can let HeartbeatReceiver implement onBlockManagerAdded to listen for the registration of the BlockManager, so that HeartbeatReceiver knows about the BlockManager in that case too.
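A rough sketch of that suggestion (not part of this PR), assuming HeartbeatReceiver still extends SparkListener and can reuse the same addExecutor path that onExecutorAdded already goes through; treat the helper names as assumptions about the current code.

```scala
// Sketch only: let HeartbeatReceiver also learn about executors through their
// BlockManager registrations, mirroring what onExecutorAdded does today.
override def onBlockManagerAdded(event: SparkListenerBlockManagerAdded): Unit = {
  val execId = event.blockManagerId.executorId
  // The driver registers its own BlockManager too; only track executor BlockManagers.
  if (execId != SparkContext.DRIVER_IDENTIFIER) {
    addExecutor(execId)
  }
}
```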

Contributor Author

@Ngone51 Thanks for your review. It was wrong to use SparkContext to judge whether the BlockManager is stopping; maybe we can use SparkEnv instead.

master.registerBlockManager(blockManagerId, diskBlockManager.localDirsString, maxOnHeapMemory,
  maxOffHeapMemory, storageEndpoint)
reportAllBlocks()
if (!SparkEnv.get.isStopped) {
Member

Hmm... so, let's assume such a scenario: the driver sends a shutdown request to the executor. Before the executor receives the shutdown request, it finds that it has been removed from the driver while reporting a block to the driver. Then the executor starts to re-register itself. Since the executor hasn't received the shutdown request yet, which means SparkEnv.get.isStopped = false, it successfully re-registers with the driver. Soon after, the executor receives the shutdown request and exits.

So, this change doesn't fix the issue thoroughly, right? @wankunde

Contributor Author

@Ngone51 Thanks for your review.

Yes, this PR cannot fix the issue above, but I still think that adding the !SparkEnv.get.isStopped constraint is helpful, as I have found several executors re-registering while they were being shut down by the driver.

I fully agree with fixing this issue in HeartbeatReceiver, and this PR can be closed.

Member

You can either update it in this PR or in a separate PR, as you like.

Contributor Author

I have updated the PR description.

Member

Sorry, I meant you can update this PR with the HeartbeatReceiver fix.

@Ngone51
Member

Ngone51 commented Dec 23, 2021

Hi @wankunde, do you want to proceed with the HeartbeatReceiver approach?

@github-actions

github-actions bot commented Apr 3, 2022

We're closing this PR because it hasn't been updated in a while. This isn't a judgement on the merit of the PR in any way. It's just a way of keeping the PR queue manageable.
If you'd like to revive this PR, please reopen it and ask a committer to remove the Stale tag!

@github-actions github-actions bot added the Stale label Apr 3, 2022
@github-actions github-actions bot closed this Apr 4, 2022