
[inductor] Add input generation fn option for autotuning #108242

Closed
aakhundov wants to merge 1 commit

Conversation

aakhundov
Contributor

@aakhundov aakhundov commented Aug 30, 2023

Stack from ghstack (oldest at bottom):

Summary: In certain cases, the content of some inputs is important for consistent behavior (and performance signals) of an operator. One example is the fbgemm jagged tensor operators, where the content of the `offsets` tensor must be consistent with the shape of the `values` tensor (i.e., `values.size(0) == offsets[-1]`, plus monotonicity of the offsets).
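
For concreteness, a minimal sketch of that invariant (the shapes and values below are illustrative only):

```python
import torch

# A jagged tensor with 3 "rows" of lengths 2, 0, and 4, embedding dim 8.
offsets = torch.tensor([0, 2, 2, 6], dtype=torch.int64)
values = torch.randn(6, 8)

# The two properties the benchmark inputs must satisfy:
assert torch.all(offsets[1:] >= offsets[:-1])  # offsets are non-decreasing
assert values.size(0) == offsets[-1].item()    # last offset matches values' leading dim
```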

This is particularly important in the context of autotuning, where the inputs are currently generated as random (for float types) or all-zero (for int types) `torch.Tensor`s. Even if the extern kernel and the Triton template are robust enough to tolerate improper input content, the performance signals would likely be useless.

In this PR, we add an option to pass input-generating functions for a subset of the inputs of the autotuned op (via `AlgorithmSelectorCache.__call__`).
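
As a rough illustration of how a caller might supply such a function for the jagged `offsets` input (the `input_gen_fns` keyword, the index-keyed mapping, and the callback signature are assumptions made for this sketch, not necessarily the exact interface added here):

```python
import torch

def gen_jagged_offsets(num_batches: int, total_rows: int) -> torch.Tensor:
    # A random but *valid* offsets tensor: non-decreasing, starting at 0 and
    # ending at total_rows, so it stays consistent with values.size(0).
    cuts = torch.randint(0, total_rows + 1, (num_batches - 1,)).sort().values
    return torch.cat([
        torch.zeros(1, dtype=torch.int64),
        cuts.to(torch.int64),
        torch.full((1,), total_rows, dtype=torch.int64),
    ])

# Hypothetical wiring into the autotuner: map the index of the offsets input
# to a generator, so the benchmark sees realistic content instead of zeros.
# (Names below are illustrative, not necessarily this PR's API.)
# autotune_select_algorithm(
#     "jagged_op", choices, input_nodes, layout,
#     input_gen_fns={1: lambda node: gen_jagged_offsets(num_batches, total_rows)},
# )
```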

Test Plan:

```
$ python test/inductor/test_max_autotune.py

...

----------------------------------------------------------------------
Ran 17 tests in 80.146s

OK
```

Reviewers:

Subscribers:

Tasks:

Tags:

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8

Differential Revision: D48831225

@pytorch-bot

pytorch-bot bot commented Aug 30, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/108242

Note: Links to docs will display an error until the docs builds have been completed.

⏳ No Failures, 1 Pending

As of commit 73de100 with merge base 3a79621:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

aakhundov added a commit that referenced this pull request Aug 30, 2023
ghstack-source-id: 865a8a5149bb5b61681b8f8acf90e78cb06257c9
Pull Request resolved: #108242
@aakhundov aakhundov changed the title [inductor] Add input generation fn option to autotuning [inductor] Add input generation fn option for autotuning Aug 30, 2023
@aakhundov aakhundov added the ciflow/trunk (Trigger trunk jobs on your pull request) and topic: not user facing (topic category) labels Aug 30, 2023
@aakhundov
Contributor Author

@aakhundov has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@aakhundov
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

@facebook-github-bot facebook-github-bot deleted the gh/aakhundov/2/head branch September 3, 2023 14:23