
Fix scheduling for splits with locality requirement in Tardigrade #11581

Merged
merged 5 commits into trinodb:master from scheduling-fixes on Mar 22, 2022

Conversation

arhimondr
Contributor

@arhimondr commented on Mar 19, 2022

Description

Fixes several problems related to scheduling of non remotely accessible splits

Is this change a fix, improvement, new feature, refactoring, or other?

Fix

Is this a change to the core query engine, a connector, client library, or the SPI interfaces? (be specific)

Core engine (Tardigrade)

How would you describe this change to a non-technical end user or system administrator?

Fixes scheduling for non remotely accessible splits in certain corner cases. Prior to this fix, some queries scanning over non remotely accessible splits might have failed.

Related issues, pull requests, and links

Documentation

(x) No documentation is needed.
( ) Sufficient documentation is included in this PR.
( ) Documentation PR is available with #prnumber.
( ) Documentation issue #issuenumber is filed, and can be handled later.

Release notes

( ) No release notes entries required.
(x) Release notes entries required with the following suggested text:

# Tardigrade
* Fix scheduling for non remotely accessible splits 

@cla-bot (bot) added the cla-signed label on Mar 19, 2022
@arhimondr force-pushed the scheduling-fixes branch 2 times, most recently from 3b17f93 to 473aea1 on March 21, 2022, 17:56
@arhimondr changed the title from [WIP] Various scheduling related fixes for Tardigrade to Fix scheduling for splits with locality requirement in Tardigrade on Mar 21, 2022
InternalNode node = bucketNodeMap.getAssignedNode(bucket)
        .orElseThrow(() -> new IllegalStateException("Nodes are expected to be assigned for non dynamic BucketNodeMap"));
Integer partitionId = nodeToPartition.get(node);
if (partitionId == null) {
Member

Would it make sense to have more than one partition on a single node?

Contributor Author

Hmm, interesting question. For example, to make tasks smaller for more granular retries?

Member

Yeah for example. For partitions, we should probably opt for those to be similarly sized. Building partitions on top of the enforced bucket<->node mapping does not necessarily imply that. Not something that we need to address here. Just a random thought.
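Not something this PR implements, but to illustrate the idea being discussed: one way to get more than one similarly sized partition per node would be to chunk each node's buckets into groups of bounded size. A minimal sketch, with String standing in for InternalNode and a hypothetical maxBucketsPerPartition parameter:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PartitionsPerNodeSketch
{
    // Group each node's buckets into chunks of at most maxBucketsPerPartition buckets,
    // producing several similarly sized partitions per node for more granular retries.
    public static Map<Integer, Integer> assignBucketsToPartitions(Map<Integer, String> bucketToNode, int maxBucketsPerPartition)
    {
        Map<String, List<Integer>> nodeToBuckets = new HashMap<>();
        bucketToNode.forEach((bucket, node) -> nodeToBuckets.computeIfAbsent(node, key -> new ArrayList<>()).add(bucket));

        Map<Integer, Integer> bucketToPartition = new HashMap<>();
        int nextPartitionId = 0;
        for (List<Integer> buckets : nodeToBuckets.values()) {
            for (int i = 0; i < buckets.size(); i++) {
                if (i % maxBucketsPerPartition == 0) {
                    nextPartitionId++;
                }
                bucketToPartition.put(buckets.get(i), nextPartitionId - 1);
            }
        }
        return bucketToPartition;
    }
}

All buckets within a chunk still share a node, so locality requirements would remain satisfiable; the open question raised above is whether partitions built this way end up similarly sized across nodes.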

HostAddress existingValue = partitionToNodeMap.put(partition, bucketNodeMap.getAssignedNode(split).get().getHostAndPort());
checkState(existingValue == null, "host already assigned for partition %s: %s", partition, existingValue);
HostAddress requiredAddress = bucketNodeMap.getAssignedNode(split).get().getHostAndPort();
Set<HostAddress> existingRequirement = partitionToNodeMap.get(partition);
Member

Seems existingRequirement will have at most one element. Then we don't really need a set?

Contributor Author

A split has a list of hosts in its requirement. The set is needed to support split-specific requirements.
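To make that concrete: because a split's locality requirement is a list of acceptable hosts, the per-partition requirement is naturally a set of addresses rather than a single address. A minimal sketch of the data shape only, with String standing in for io.trino.spi.HostAddress and hypothetical method names, not the exact PR code:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class PartitionHostRequirementSketch
{
    // The value is a set because a requirement can mean "any of these hosts", not one fixed host.
    private final Map<Integer, Set<String>> partitionToNodeMap = new HashMap<>();

    // Requirement derived from the bucket-to-node mapping: exactly one acceptable host
    public void requireBucketNode(int partition, String hostAndPort)
    {
        partitionToNodeMap.put(partition, Set.of(hostAndPort));
    }

    // Requirement carried by a non remotely accessible split: any host from its address list
    public void requireSplitHosts(int partition, List<String> splitAddresses)
    {
        partitionToNodeMap.put(partition, Set.copyOf(splitAddresses));
    }

    // A scheduler may then place the partition's task on any node whose address is in the set
    public Set<String> acceptableHosts(int partition)
    {
        return partitionToNodeMap.getOrDefault(partition, Set.of());
    }
}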

}
}
else {
BiMap<InternalNode, Integer> nodeToPartition = HashBiMap.create();
Member

I think it's beneficial to add a comment explaining the logic, e.g. "make sure all buckets mapped to the same node map to the same partition, such that locality requirements are respected in scheduling".
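A minimal sketch of how that comment and the surrounding mapping logic might read, with String standing in for InternalNode and buckets represented by list indices (a simplification, not the exact PR code):

import com.google.common.collect.BiMap;
import com.google.common.collect.HashBiMap;
import java.util.List;

public class BucketToPartitionSketch
{
    public static int[] computeBucketToPartition(List<String> bucketToNode)
    {
        // Make sure all buckets mapped to the same node map to the same partition,
        // such that locality requirements are respected in scheduling
        BiMap<String, Integer> nodeToPartition = HashBiMap.create();
        int[] bucketToPartition = new int[bucketToNode.size()];
        int nextPartitionId = 0;
        for (int bucket = 0; bucket < bucketToNode.size(); bucket++) {
            String node = bucketToNode.get(bucket);
            Integer partitionId = nodeToPartition.get(node);
            if (partitionId == null) {
                partitionId = nextPartitionId++;
                nodeToPartition.put(node, partitionId);
            }
            bucketToPartition[bucket] = partitionId;
        }
        return bucketToPartition;
    }
}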

Otherwise queries like "SHOW TABLES" won't work
Make it consistent with locality requirements defined in splits
To allow scheduling of coordinator only tasks and splits
@arhimondr merged commit d877d73 into trinodb:master on Mar 22, 2022