Update XLA torch.gather heuristics for multi-platforms #3587

Closed
yeounoh opened this issue May 19, 2022 · 3 comments

yeounoh commented May 19, 2022

🚀 Feature

There are two code paths for composing and executing torch.gather in PyTorch/XLA, and a custom heuristic decides between them. The heuristic needs a major update to better serve different platforms and use cases. cc @ymwangg @ronghanghu
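
For context, here is a minimal torch.gather example on an XLA device (standard PyTorch/XLA usage; the tensor shapes are arbitrary and not tied to either code path):

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
src = torch.arange(12, device=device).reshape(3, 4)
idx = torch.tensor([[0, 2], [1, 3], [0, 1]], device=device)

# torch.gather selects elements of `src` along dim=1 at the positions
# given by `idx`; under PyTorch/XLA this is the op whose lowering the
# heuristic discussed in this issue chooses.
out = torch.gather(src, 1, idx)
```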

Motivation

Based on a recent study on GPU/CPU, and having also experienced a related performance regression on TPU, we strongly feel that we need a better heuristic to serve different platforms.

Pitch

There were a couple of different strategies employed to address the issue:

  • always forcing one type of torch.gather implementation;
  • making the decision based on the number of input tensor elements (relative to the indices) and allowing the user to change/try a different threshold by setting an env var.

Both approaches had clear shortcomings and diverged in behavior across platforms. We propose improving the heuristic in the following ways (see the sketch after this list):

  • Pre-condition on the platform type
  • Update the heuristic with a more conservative default threshold (advanced users can still change/try their own by setting an env var)
  • Use the threshold-based heuristic only for the TPU platform, for others
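
A rough Python sketch of how the proposed decision could look (the function name, env var name, default value, and exact comparison below are illustrative assumptions, not the actual PyTorch/XLA implementation):

```python
import os

def pick_gather_path(input_numel: int, index_numel: int, platform: str) -> str:
    """Illustrative sketch of the proposed heuristic; names and numbers
    are placeholders, not the real PyTorch/XLA knobs."""
    # Pre-condition on the platform type: only TPU uses the
    # threshold-based heuristic; other platforms always take one
    # fixed lowering.
    if platform != "TPU":
        return "default"

    # Conservative default threshold; advanced users can override it
    # by setting an env var.
    threshold = int(os.environ.get("EXAMPLE_GATHER_THRESHOLD", "8192"))

    # Threshold-based decision on TPU: compare the number of input
    # tensor elements against the threshold, relative to the number
    # of indices being gathered.
    if input_numel <= threshold * max(index_numel, 1):
        return "alternate"
    return "default"
```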

Alternatives

One of the two approaches that we've tried already.

yeounoh self-assigned this May 19, 2022

yeounoh commented May 19, 2022

Hi @ymwangg I will follow up with you once I create a PR to address this -- stay tuned!

JackCaoG commented

@yeounoh Can this be closed?


yeounoh commented Aug 31, 2022

This was fixed with #3629, closing.

yeounoh closed this as completed Aug 31, 2022