[JIT] Update freezing api #52337
Conversation
💊 CI failures summary and remediations

As of commit 0a3beca (more details on the Dr. CI page):

🚧 5 fixed upstream failures: These were probably caused by upstream breakages that were already fixed.

Please rebase on the
Looks reasonable, stamping to unblock you. @suo can chime in too
    torch._C._jit_pass_remove_dropout(mod._c)
    if optimize_numerics:
        # run a couple times to capture Conv -> Mul -> Add etc
        for _ in range(2):
it's not very scientific, you probably want to iterate till the fixed point :) but it's unrelated to this diff
yea, that's kind of a todo, i'm not convinced it really matters but it would be a good follow up
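As a rough illustration of the fixed-point idea raised above, here is a minimal sketch, not the PR's implementation: `run_numeric_passes` is a hypothetical callable standing in for the folding passes that `optimize_frozen_module` runs, and comparing the printed graph is just one cheap way to detect that no pass made further progress.

    # Sketch only: iterate the numeric folding passes until the graph stops
    # changing, instead of a fixed two iterations.
    # `run_numeric_passes` is a hypothetical stand-in, not a PyTorch API.
    def run_until_fixed_point(mod, run_numeric_passes, max_iters=10):
        for _ in range(max_iters):
            before = str(mod.graph)       # snapshot the current IR as text
            run_numeric_passes(mod)       # passes mutate the graph in place
            if str(mod.graph) == before:  # nothing changed: fixed point reached
                break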
- def optimize_frozen_module(mod):
+ def optimize_frozen_module(mod, optimize_numerics: bool = True):
if you get non-portable optimizations in the future, would it be a separate bool flag?
yea, that was what i was envisioning.
def optimize_frozen_module(mod, optimize_numerics: bool = True, non_portable = False):
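For illustration only, a hedged sketch of how such a flag might slot in. The `non_portable` flag is the hypothetical floated in this thread, and `apply_numeric_folding` / `apply_hardware_specific_passes` are made-up names for the two groups of passes, not functions in the PR:

    def optimize_frozen_module(mod, optimize_numerics: bool = True, non_portable: bool = False):
        torch._C._jit_pass_remove_dropout(mod._c)  # always-safe cleanup (shown in the diff above)
        if optimize_numerics:
            apply_numeric_folding(mod)             # hypothetical: Conv+BN / Conv+Mul / Conv+Add folds
        if non_portable:
            apply_hardware_specific_passes(mod)    # hypothetical: optimizations tied to one backend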
@eellison has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: Update the freezing API for 1.8 and add a corresponding C++ API. The `optimize` flag hasn't been publicly released yet, so we are able to change it without breaking BC. I will submit a PR to the release branch as well; there are a few more days to do that.

Pull Request resolved: pytorch#52337

Reviewed By: ejguan

Differential Revision: D26491833

Pulled By: eellison

fbshipit-source-id: 6dcd74eb8f76db64ac53183d03dabdd0f101f4b5
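For readers landing on this thread, a rough usage sketch of the updated API. It assumes `optimize_frozen_module` is exposed under `torch.jit` with the signature shown in the diff above, and that `freeze` keeps its default optimization behavior; treat the exact argument names as coming from this PR's discussion rather than separate verification.

    import torch

    class M(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.linear = torch.nn.Linear(4, 4)
            self.dropout = torch.nn.Dropout(0.5)

        def forward(self, x):
            return self.dropout(self.linear(x))

    # Freezing requires a scripted module in eval mode.
    scripted = torch.jit.script(M().eval())
    frozen = torch.jit.freeze(scripted)

    # The standalone entry point this PR adds the optimize_numerics flag to;
    # pass optimize_numerics=False to skip folds that may change numerics slightly.
    torch.jit.optimize_frozen_module(frozen, optimize_numerics=False)

    out = frozen(torch.randn(2, 4))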