
Always use preferred partitioning for local exchanges #15459

Merged: 2 commits into trinodb:master on Dec 23, 2022

Conversation

@gaurav8297 (Member) commented Dec 19, 2022

Description

After this change (#14718), we can always use preferred partitioning for local exchanges, which is more reliable in terms of memory usage.

Additional context and related issues

Release notes

( ) This is not user-visible or docs only and no release notes are required.
( ) Release notes are required, please propose a release note for me.
( ) Release notes are required, with the following suggested text:

# Section
* Fix some things. ({issue}`issuenumber`)

@sopel39 (Member) commented Dec 22, 2022

there are test failures

Commit message: With local scaled exchanges in the partitioned case, there won't be any performance degradation due to partition skewness. Therefore, we can always use preferred insert partitioning when local scaling is enabled.
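
A minimal sketch of that reasoning (hypothetical names only, not the actual Trino decision code): once local writer scaling absorbs skew, the gate on preferred insert partitioning reduces to whether local scaling is enabled.

    // Hypothetical sketch; not taken from the Trino codebase.
    final class PreferredPartitioningDecision
    {
        private PreferredPartitioningDecision() {}

        // Previously, extra conditions guarded against partition skew bottlenecking
        // a single local writer. With local scaled writer exchanges, a skewed partition
        // simply gets more writers assigned, so that guard is no longer needed.
        static boolean usePreferredInsertPartitioning(boolean localWriterScalingEnabled)
        {
            return localWriterScalingEnabled;
        }
    }
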
    private boolean isSingleGatheringExchange(PlanNode node)
    {
        Optional<PlanNode> result = searchFrom(node)
                .where(planNode -> planNode instanceof ExchangeNode)

Review comment (Member): nit: In isLocalScaledWriterExchange you search for LOCAL exchange specifically. It seems you can remove the check there too.

                .orElseGet(() -> getTaskWriterCount(session));
    }

    private boolean isSingleGatheringExchange(PlanNode node)

Review comment (Member): nit: move above isLocalScaledWriterExchange.

    {
        // This check is required because we don't know which writer count to use when exchange is
    }

Review comment (Member): nit: we could extract the decision logic (partitioned scaled OR unpartitioned scaled OR partitioned OR unpartitioned) from AddLocalExchanges and not rely on plan node search.
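
A purely hypothetical sketch of that suggestion (none of these names exist in Trino; it only illustrates making the four-way decision explicit so later code does not need to search the plan):

    // Hypothetical sketch; not actual Trino code.
    final class WriterDistributionDecision
    {
        private WriterDistributionDecision() {}

        enum WriterDistribution
        {
            PARTITIONED_SCALED,    // partitioned writes with local writer scaling
            UNPARTITIONED_SCALED,  // unpartitioned writes with local writer scaling
            PARTITIONED,           // partitioned writes without scaling
            UNPARTITIONED          // unpartitioned writes without scaling
        }

        // AddLocalExchanges could compute this once and attach it to the exchange it adds,
        // so later rules read the decision directly instead of searching for plan nodes.
        static WriterDistribution decide(boolean partitioned, boolean localScalingEnabled)
        {
            if (partitioned) {
                return localScalingEnabled ? WriterDistribution.PARTITIONED_SCALED : WriterDistribution.PARTITIONED;
            }
            return localScalingEnabled ? WriterDistribution.UNPARTITIONED_SCALED : WriterDistribution.UNPARTITIONED;
        }
    }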

        return partitioningScheme
                .map(scheme -> visitPartitionedWriter(node, scheme, source, parentPreferences))
                .orElseGet(() -> visitUnpartitionedWriter(node, source, writerTarget));

Review comment (Member): nit: would be more coherent to extract visitScaleUnpartitionedWriter.

@sopel39 (Member) commented Dec 23, 2022

hit: #15507

sopel39 merged commit da0248d into trinodb:master on Dec 23, 2022
@sopel39 (Member) commented Dec 23, 2022

NO RELEASE NOTES

github-actions bot added this to the 404 milestone on Dec 23, 2022
@sopel39 (Member) commented Dec 23, 2022

@gaurav8297 please increase task.scale-writers.max-writer-count to core count, as we are guaranteed that a single "unpartitioned" writer won't write to multiple partitions.

Also, the description for:

    @Config("task.scale-writers.max-writer-count")
    @ConfigDescription("Maximum number of writers per task up to which scaling will happen if task.scale-writers.enabled is set")

doesn't seem correct, as task.scale-writers.enabled also specifies the behavior of partitioned scaled writers.
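
Purely as an illustration of that point, the description could be reworded along these lines (hypothetical wording, not necessarily what was adopted):

    @Config("task.scale-writers.max-writer-count")
    @ConfigDescription("Maximum number of writers per task up to which scaling happens for unpartitioned writes; "
            + "note that task.scale-writers.enabled also controls scaling of partitioned writers")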

@colebow (Member) commented Dec 28, 2022

Does this need a release note? cc @sopel39

@sopel39 (Member) commented Dec 28, 2022

@colebow I would put it under #15058 (comment)
