Make layer_norm dispatch from yaml file to fix XLA test #28051
Conversation
This pull request was exported from Phabricator. Differential Revision: D17939919
ac0d944 to fdf41b3 (Compare)
link to #27633
Thanks @BIT-silence!! I only have a minor comment inline in the PR.
aten/src/ATen/native/layer_norm.cpp (Outdated)
I'm not sure if this is applicable everywhere, but in general we put _cpu functions in the native/cpu folder and _cuda functions in the native/cuda folder to make the structure clearer. Would you mind moving them a bit?
One minor implication: functions defined outside the cpu/ and cuda/ folders are mostly called through TypeDefault and should call into at:: functions, whereas inside cpu/ and cuda/ we are free to call "local" at::native:: functions to speed things up.
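For context, a minimal sketch of the two calling styles described above (relu is used purely as an example op; this is not code from the PR):

```cpp
#include <ATen/ATen.h>
#include <ATen/NativeFunctions.h>

// Outside native/cpu/ and native/cuda/ the backend is not known yet, so the
// call should go through the dispatcher and let it pick CPU, CUDA, XLA, etc.
at::Tensor dispatched(const at::Tensor& x) {
  return at::relu(x);          // dispatched call
}

// Inside a native/cpu/ (or native/cuda/) kernel the backend is already fixed,
// so the "local" implementation can be called directly to skip dispatch overhead.
at::Tensor direct(const at::Tensor& x) {
  return at::native::relu(x);  // direct call into the CPU implementation
}
```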
I am a little confused here. In my understanding, as https://github.com/pytorch/pytorch/blob/dfa48f9942e95b13875c4b8841e1a3c01f3638a6/aten/src/ATen/native/DispatchStub.h mentions, the native/cpu and native/cuda folders are mainly for kernels, while in the native folder we define the dispatch stubs and call them, which is what we did here. Another example is Activation.cpp:
https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/Activation.cpp#L343
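For context, a rough sketch of the DispatchStub pattern referenced here, modeled on Activation.cpp; the names (LayerNormKernel, LayerNormKernelImpl) and the kernel signature are illustrative, not necessarily the exact ones used in this PR:

```cpp
// aten/src/ATen/native/layer_norm.h (sketch): declare the stub.
#include <ATen/ATen.h>
#include <ATen/native/DispatchStub.h>

namespace at { namespace native {

// Function-pointer type for the kernel; the argument list is illustrative.
using forward_fn = void (*)(
    const Tensor& X, const Tensor& gamma, const Tensor& beta,
    int64_t M, int64_t N, double eps,
    Tensor* Y, Tensor* mean, Tensor* rstd);

DECLARE_DISPATCH(forward_fn, LayerNormKernel);

}} // namespace at::native

// aten/src/ATen/native/layer_norm.cpp (sketch): define the stub and call it
// from the operator implementation.
//   DEFINE_DISPATCH(LayerNormKernel);
//   LayerNormKernel(kCPU, X, gamma, beta, M, N, eps, &Y, &mean, &rstd);

// aten/src/ATen/native/cpu/layer_norm_kernel.cpp (sketch): register the
// actual CPU kernel against the stub.
//   static void LayerNormKernelImpl(const Tensor& X, ...) { /* CPU kernel */ }
//   REGISTER_DISPATCH(LayerNormKernel, &LayerNormKernelImpl);
```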
Yea nvm, it turns out we don't have this separation at all. :P
But it's actually worth noting that in _cpu and _cuda functions we can freely call at::native:: functions without breaking XLA, whereas in other, non-dispatch functions we can only call at:: functions.
I've seen people confused about which to use in the past. Maybe we can think about how to organize these better? cc: @gchanan @dzhulgakov
But that's orthogonal to this PR for sure. Thanks for the reminder!
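To make the XLA point concrete, here is a hedged sketch, not code from this PR (my_norm and its body are hypothetical): a composite function defined outside the per-backend folders is not itself re-dispatched per backend, so every tensor op inside it has to go through at:: for XLA (or any other dispatcher-registered backend) to intercept it; hard-coding at::native:: calls would pin it to the CPU/CUDA implementations.

```cpp
#include <ATen/ATen.h>

// Hypothetical composite helper living outside native/cpu/ and native/cuda/.
// All ops go through at:: so the dispatcher can route each one to whichever
// backend (CPU, CUDA, XLA, ...) the input tensor actually lives on.
at::Tensor my_norm(const at::Tensor& x, double eps) {
  auto mean = at::mean(x, /*dim=*/{-1}, /*keepdim=*/true);
  auto var = at::var(x, /*dim=*/{-1}, /*unbiased=*/false, /*keepdim=*/true);
  // Using at::native:: calls here instead would bypass the dispatcher and
  // silently run the CPU/CUDA code even when x is an XLA tensor.
  return (x - mean) / at::sqrt(var + eps);
}
```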
fdf41b3 to 455f779 (Compare)
This pull request was exported from Phabricator. Differential Revision: D17939919
Summary: Pull Request resolved: pytorch#28051 Make layer_norm dispatch from yaml file to fix XLA test
Test Plan: buck test mode/dev-nosan caffe2/test:nn -- "LayerNorm"
Reviewed By: houseroad
Differential Revision: D17939919
fbshipit-source-id: 38c50c96b4494b727d8ed90d983e3a150d0e5b8d
455f779 to b95b36b (Compare)
This pull request was exported from Phabricator. Differential Revision: D17939919
Summary: Pull Request resolved: pytorch/pytorch#28051 Make layer_norm dispatch from yaml file to fix XLA test
Test Plan: buck test mode/dev-nosan caffe2/test:nn -- "LayerNorm"
Reviewed By: houseroad
Differential Revision: D17939919
fbshipit-source-id: 384b6a8008dabfc1aaeb0357c1bd195be68f1edb
This pull request has been merged in 4f1f084.
Summary: Make layer_norm dispatch from yaml file to fix XLA test
Test Plan: buck test mode/dev-nosan caffe2/test:nn -- "LayerNorm"
Differential Revision: D17939919