[PT2][Quant] Move add/add relu pattern via module partitioner #102397


Closed
wants to merge 4 commits

Conversation

This diff uses module partitioners to find add and add + relu patterns.

Differential Revision: [D46095330](https://our.internmc.facebook.com/intern/diff/D46095330/)

[ghstack-poisoned]
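The idea described above can be illustrated with a minimal, torch-free sketch. This is not PyTorch's actual partitioner implementation; the node representation and the `find_add_relu_patterns` helper are hypothetical stand-ins for an FX graph and the module-partitioner API, used only to show how "add" and "add + relu" groupings fall out once all add variants are treated as the same source op.

```python
import operator

def relu(x):  # stand-in for torch.nn.functional.relu
    return max(x, 0)

# Hypothetical flat graph: (node_name, call_target, input_names).
nodes = [
    ("a", "placeholder", []),
    ("b", "placeholder", []),
    ("add1", operator.add, ["a", "b"]),
    ("relu1", relu, ["add1"]),
    ("add2", operator.iadd, ["relu1", "b"]),
]

# Treat every add variant as one "add" source, as the PR's equivalence
# sets do for torch.add / operator.add (and, per review, operator.iadd).
ADD_SOURCES = {operator.add, operator.iadd}

def find_add_relu_patterns(nodes):
    """Return (add_node, fused_relu_node_or_None) pairs."""
    by_name = {n[0]: n for n in nodes}
    users = {}
    for name, _, inputs in nodes:
        for i in inputs:
            users.setdefault(i, []).append(name)
    patterns = []
    for name, target, _ in nodes:
        if target in ADD_SOURCES:
            followers = [by_name[u] for u in users.get(name, [])]
            fused = [f[0] for f in followers if f[1] is relu]
            patterns.append((name, fused[0] if fused else None))
    return patterns

# add1 is followed by relu1 (an add+relu pattern); add2 is a bare add.
print(find_add_relu_patterns(nodes))
```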
@pytorch-bot

pytorch-bot bot commented May 26, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/102397

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 3 Unrelated Failures

As of commit dfb2f2c:

NEW FAILURE - The following job has failed:

BROKEN TRUNK - The following jobs failed but were present on the merge base ae5606b:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@@ -13,6 +14,7 @@
{torch.nn.Conv2d, torch.nn.functional.conv2d},
{torch.nn.ReLU, torch.nn.functional.relu, torch.nn.functional.relu_},
{torch.nn.BatchNorm2d, torch.nn.functional.batch_norm},
{torch.add, operator.add},
Collaborator

Thanks for providing this helper function. What about operator.iadd? Should it also be added here?

Contributor

@jerryzh168 jerryzh168 May 30, 2023

yeah I think this is needed, thanks!

Collaborator

Thanks for the reply @jerryzh168 @kimishpatel. BTW, I think the current _EQUIVALENT_TYPES list is hardcoded, so PyTorch extension libraries such as intel-extension-for-pytorch can't use the Module Partition API with a customized _EQUIVALENT_TYPES. Do you think it is reasonable to make it customizable? I have drafted a PR here: #102516.

Contributor

Yeah, I think so; we haven't finalized our APIs yet, though.

Contributor Author

> Thanks for providing this helper function. What about operator.iadd? Should it also be added here?

Yes, adding iadd makes sense.

> BTW, I think the current _EQUIVALENT_TYPES list is hardcoded, so PyTorch extension libraries such as intel-extension-for-pytorch can't use the Module Partition API with a customized _EQUIVALENT_TYPES.

On this, can you say more? I briefly looked at the extension but couldn't quite figure out what it is doing. Does it introduce custom nn modules for fusion, etc.? If so, I would ask whether fusion is something that can be done via graph rewrite. Any other examples would be helpful as well.

Regarding _EQUIVALENT_TYPES: yeah, it can be something that is extensible, but I am curious to know your use case.
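The extensibility request discussed above could look roughly like the following. This is a hedged sketch, not the API from #102516 or PyTorch's actual signature: the function name `get_source_partitions_sketch`, its parameters, and the `MyFusedAdd` extension type are all hypothetical, chosen only to show how a caller-supplied equivalence table could augment a hardcoded default.

```python
import operator

# Default table, normally hardcoded at module level (torch.add omitted
# so this sketch runs without torch).
_DEFAULT_EQUIVALENT_TYPES = [
    {operator.add, operator.iadd},
]

def get_source_partitions_sketch(wanted_types, equivalent_types=None):
    """For each wanted type, return the full set of call targets to match.

    `equivalent_types` lets an extension library (e.g.
    intel-extension-for-pytorch) add its own equivalence sets instead of
    being stuck with the hardcoded defaults.
    """
    table = _DEFAULT_EQUIVALENT_TYPES + (equivalent_types or [])
    lookup = {t: s for s in table for t in s}
    # Fall back to a singleton set for types not in any equivalence set.
    return {w: lookup.get(w, {w}) for w in wanted_types}

class MyFusedAdd:  # hypothetical custom op type from an extension
    pass

parts = get_source_partitions_sketch(
    [operator.add, MyFusedAdd],
    equivalent_types=[{MyFusedAdd}],
)
```

The design point is simply that the equivalence table becomes a parameter with a default, so existing callers are unaffected while extensions can register their own types.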

@facebook-github-bot
Contributor

@pytorchbot merge -f 'Landed internally'

(Initiating merge automatically since Phabricator Diff has merged, using force because this PR might not pass merge_rules.json but landed internally)

@pytorchmergebot
Collaborator

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

@facebook-github-bot facebook-github-bot deleted the gh/kimishpatel/153/head branch June 8, 2023 17:46
5 participants