[Feature] Add configurations to support torch.compile in Runner #976
Thanks for your contribution, we appreciate it a lot. The following instructions will make your pull request healthier and help it get feedback more easily. If you do not understand some items, don't worry: just open the pull request and ask the maintainers for help.
Motivation

Support `torch.compile`, introduced in PyTorch 2.0.

Modification

Check for the `compile` keyword in the config dict. If it is not `None`, `torch.compile` will be called according to its options.

BC-breaking (Optional)
No
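As a rough illustration of the modification described above, a minimal sketch of the check the Runner could perform. The function name `maybe_compile` and the `cfg`/`model` names are hypothetical, not the PR's actual code; only the dispatch on the `compile` key follows the description.

```python
def maybe_compile(model, cfg):
    """Wrap ``model`` with torch.compile if the config asks for it (sketch)."""
    compile_cfg = cfg.get('compile')
    if not compile_cfg:
        # Key absent, None, or False: leave the model untouched.
        return model
    import torch  # torch >= 2.0 provides torch.compile
    if compile_cfg is True:
        # Bare ``compile = True`` means "use torch.compile defaults".
        compile_cfg = {}
    return torch.compile(model, **compile_cfg)
```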
Use cases (Optional)
Add a single line (or a few lines) in your config file.
Basic Use Cases
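A sketch of what the single added line could look like in a config file. The `compile` key follows the PR description; treating `True` as "compile with defaults" is an assumption.

```python
# Somewhere in your MMEngine config file (hypothetical sketch):
# enable torch.compile for the model with default options.
compile = True
```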
Advanced Use Cases
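A hedged sketch of passing options through the config. Assuming the dict's keys mirror `torch.compile`'s keyword arguments (`backend` and `mode` are real `torch.compile` parameters; the exact config schema should be checked against the PR).

```python
# Hypothetical advanced config sketch: forward options to torch.compile.
compile = dict(
    backend='inductor',   # the default compiler backend in PyTorch 2.0
    mode='max-autotune',  # trade longer compile time for faster kernels
)
```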
You can pass arguments to `torch.compile`, as described in the PyTorch documentation.

Due to a PyTorch mkldnn issue, `val_step` and `test_step` will not be compiled until that issue has been fixed.

Checklist