Move Concat Linear out of Optimize Numerics #67196
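For context, a minimal sketch of what the concat-linear optimization refers to: when several frozen linear layers consume the same input, their weights can be concatenated so a single matmul serves all of them. The helper below is illustrative only, not the actual JIT pass; the function name and shapes are assumptions.

```python
import torch

def concat_linears(x, linears):
    # Concatenate the (out_features, in_features) weights so one matmul
    # replaces several, then split the fused output back per layer.
    weight = torch.cat([l.weight for l in linears], dim=0)
    bias = torch.cat([l.bias for l in linears], dim=0)
    fused = torch.nn.functional.linear(x, weight, bias)
    return fused.split([l.out_features for l in linears], dim=-1)

lins = [torch.nn.Linear(16, n) for n in (8, 4)]
x = torch.randn(2, 16)
for layer, out in zip(lins, concat_linears(x, lins)):
    assert torch.allclose(layer(x), out, atol=1e-6)
```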
Conversation
CI Flow Status ⚛️
You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun; "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun
# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and triggering the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow
For more information, please take a look at the CI Flow Wiki.
💊 CI failures summary and remediations
As of commit ad0827e (more details on the Dr. CI page):
2 failures not recognized by patterns.
This comment was automatically generated by Dr. CI. Please report bugs/suggestions to the (internal) Dr. CI Users group.
Can you just remove https://github.com/pytorch/pytorch/blob/master/torch/jit/_freeze.py#L162 and have it call into OptimizeFrozenGraph? Originally there was some thought of Python hackability, but we should just remove the duplication.
Can you also hook up the pretranspose pass?
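A minimal sketch of the suggested change, assuming a `_jit_pass_optimize_frozen_graph` binding for the C++ `OptimizeFrozenGraph` pass (the exact binding name and signature are assumptions based on PyTorch's `_jit_pass_*` convention):

```python
import torch

def run_frozen_optimizations(mod, optimize_numerics: bool = True):
    # Instead of re-listing each pass in Python (the duplication at
    # torch/jit/_freeze.py#L162), delegate to the C++ OptimizeFrozenGraph
    # pass so the Python and C++ code paths cannot drift apart.
    torch._C._jit_pass_optimize_frozen_graph(mod.graph, optimize_numerics)
```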
We still need to replace the Python API so it calls into this version; that's why no tests are breaking.
We're probably going to need to remove the MKLDNN Linear tests; that's fine. Can we remove the linear handling at https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/passes/frozen_ops_to_mkldnn.cpp#L939 and say that we get more speedup by pretransposing linear layers than by running them in MKLDNN?
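For reference, a hedged illustration of why pretransposing helps (not the actual pass): `nn.Linear` stores its weight as `(out_features, in_features)` and computes `x @ weight.t() + bias`, so freezing can materialize the transpose once instead of paying for it on every call.

```python
import torch

def pretranspose_linear_weight(weight: torch.Tensor) -> torch.Tensor:
    # Materialize the transpose once at freeze time so the runtime matmul
    # reads a contiguous (in_features, out_features) matrix on each call.
    return weight.t().contiguous()

x = torch.randn(8, 16)
w = torch.randn(32, 16)  # nn.Linear weight layout: (out_features, in_features)
b = torch.randn(32)
w_t = pretranspose_linear_weight(w)
assert torch.allclose(torch.nn.functional.linear(x, w, b), x @ w_t + b, atol=1e-6)
```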
Hey, some of the tests need to be rewritten with conv; a possible rewrite is sketched after the snippet below.
```python
test_unsupported(nn.Sequential(lin, Add(torch.tensor([20]))), ['1'])

@unittest.skipIf(not torch._C.has_mkldnn, "MKL-DNN build is disabled")
def test_mkldnn_fuser_broadcasting(self):
```
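One conv-based rewrite might look like the following, assuming the existing `test_unsupported` and `Add` helpers from this test file; the `Conv2d` shapes are illustrative assumptions.

```python
# Hypothetical rewrite: swap the linear layer for a conv so the test no longer
# depends on the removed MKLDNN linear path. Conv2d shapes are assumed.
conv = nn.Conv2d(3, 32, kernel_size=3)
test_unsupported(nn.Sequential(conv, Add(torch.tensor([20]))), ['1'])
```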
Same with this
```python
self.assertEqual(mod_to_device(*test_vals_to_device), script_mod(*test_vals_to_device))

@unittest.skipIf(not torch._C.has_mkldnn, "MKL-DNN build is disabled")
def test_collapse_adjacent_conversions(self):
```
Also this
```python
self.assertEqual(aten_op(x, inplace=False), m(x).to_dense())

@unittest.skipIf(not torch._C.has_mkldnn, "MKL-DNN build is disabled")
def test_scalar_mul(self):
```
This also
nice!!
Hey, we have an internal user whose optimization is blocked by this. Could we try to land it?
```diff
-The current set of optimizations is:
+The current set of optimizations includes:
   - Dropout Removal
   - Pretranspose Linear Layers
```
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Remove this one
@Gamrix has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
This pull request has been reverted by 3d4a6ff. To re-land this change, follow these steps.
Stack from ghstack:
Differential Revision: D32154788