[inductor] Add input generation fn option for autotuning #108242
Conversation
Summary: In certain cases, the content of some inputs matters for the consistent behavior (and performance signals) of an operator. One example is the fbgemm jagged tensor operators, where the offsets Tensor's content must be consistent with the shape of the values Tensor (i.e., `values.size(0) == offsets[-1]`, plus monotonicity). This is particularly important in the context of autotuning, where the inputs are currently generated as random (for float types) or all-zero (for int types) `torch.Tensor`s. Even if the extern kernel and the Triton template are robust enough to tolerate improper input content, the resulting performance signals would likely be useless.

In this PR, we add an option to pass input-generating functions for a subset of the inputs of the autotuned op (via `AlgorithmSelectorCache.__call__`).

Test Plan:

```
$ python test/inductor/test_max_autotune.py
...
----------------------------------------------------------------------
Ran 17 tests in 80.146s

OK
```
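To make the jagged-tensor example concrete, here is a minimal sketch of an input-generating function that produces an offsets tensor satisfying the invariant above (monotonically non-decreasing, with `offsets[-1] == values.size(0)`). The helper name `gen_offsets` and the index-keyed `input_gen_fns` dict are illustrative assumptions based on this summary, not the exact API surface of the PR:

```python
import torch

def gen_offsets(batch_size: int, total_rows: int) -> torch.Tensor:
    # Draw random interior split points and sort them, so the offsets are
    # monotonically non-decreasing; anchor the ends so that offsets[-1]
    # equals total_rows (i.e. values.size(0)), as the invariant requires.
    inner = torch.randint(0, total_rows + 1, (batch_size - 1,)).sort().values
    return torch.cat(
        [torch.zeros(1, dtype=torch.int64), inner, torch.tensor([total_rows])]
    )

values = torch.randn(128, 16)              # jagged values: 128 rows in total
offsets = gen_offsets(4, values.size(0))   # e.g. tensor([0, 17, 54, 101, 128])
assert offsets[-1].item() == values.size(0)

# Hypothetical wiring: a dict from input index to generator, in the spirit
# of the option added to AlgorithmSelectorCache.__call__ (the exact
# parameter name and callback signature are assumptions here).
input_gen_fns = {1: lambda: gen_offsets(4, values.size(0))}
```

With such a generator registered for the offsets input, every benchmarked candidate (extern kernel or Triton template) sees content-valid inputs, so the autotuner compares them on meaningful performance signals.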
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/108242

Note: Links to docs will display an error until the docs builds have been completed.

⏳ No Failures, 1 Pending as of commit 73de100 with merge base 3a79621.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
@aakhundov has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Stack from ghstack (oldest at bottom):
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8
Differential Revision: D48831225