Conversation

@xiaomengy (Contributor):

Summary: Make layer_norm dispatch from yaml file to fix XLA test

Test Plan: buck test mode/dev-nosan caffe2/test:nn -- "LayerNorm"

Differential Revision: D17939919
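
For readers unfamiliar with what "dispatch from the yaml file" means here, below is a rough sketch of the mechanism with hypothetical names (`my_layer_norm*`; these are not the signatures added by this PR): listing per-backend implementations under a `dispatch:` key in `native_functions.yaml` makes callers go through the generated `at::` entry point, so a backend such as XLA can register its own implementation instead of being routed into a CPU/CUDA-only code path.

```cpp
// Sketch only: hypothetical op and function names, not the PR's actual signatures.
//
// A `dispatch:` section in aten/src/ATen/native/native_functions.yaml such as
//
//   - func: my_layer_norm(Tensor input, Tensor weight, Tensor bias, float eps) -> Tensor
//     dispatch:
//       CPU: my_layer_norm_cpu
//       CUDA: my_layer_norm_cuda
//
// tells the codegen to route calls through the dispatcher, so backends like XLA
// can override the op rather than falling into a CPU/CUDA-only path.
#include <ATen/ATen.h>

namespace at { namespace native {

// Backend-specific entry points named by the yaml dispatch keys (illustrative only).
Tensor my_layer_norm_cpu(const Tensor& input, const Tensor& weight,
                         const Tensor& bias, double eps);
Tensor my_layer_norm_cuda(const Tensor& input, const Tensor& weight,
                          const Tensor& bias, double eps);

}} // namespace at::native
```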

@facebook-github-bot (Contributor):

This pull request was exported from Phabricator. Differential Revision: D17939919

@xiaomengy (Contributor, Author):

link to #27633

@ailzhang (Contributor) left a comment:

Thanks @BIT-silence! I only have one minor comment, inlined in the PR.

@ailzhang (Contributor), inline review comment:

I'm not sure if this is applicable everywhere, but in general we put _cpu functions in the native/cpu folder and _cuda functions in the native/cuda folder to keep the structure clear. Would you mind moving them?
One minor implication: functions defined outside the cpu/ and cuda/ folders are mostly called through TypeDefault and should call into at:: functions, whereas inside cpu/ and cuda/ we are free to call "local" at::native:: functions to speed things up.

@xiaomengy (Contributor, Author):

I am a little confused here. My understanding, per https://github.com/pytorch/pytorch/blob/dfa48f9942e95b13875c4b8841e1a3c01f3638a6/aten/src/ATen/native/DispatchStub.h, is that the native/cpu and native/cuda folders are mainly for kernels, while the dispatch stubs are defined and called from the native folder, which is what we do here. Another example is Activation.cpp:
https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/Activation.cpp#L343
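
For context, here is a minimal sketch of the DispatchStub pattern this comment refers to, using hypothetical names (`my_norm_stub`, `my_layer_norm_cpu`, etc.) rather than the code actually touched by this PR: the stub is declared and called from the operator under native/, and the kernel that registers against it lives under native/cpu/ (or native/cuda/).

```cpp
// Illustrative DispatchStub usage (hypothetical names, not this PR's code).
#include <ATen/ATen.h>
#include <ATen/native/DispatchStub.h>

namespace at { namespace native {

// --- in a shared header (e.g. native/my_layer_norm.h) ---
using my_norm_fn = void (*)(const Tensor& input, Tensor& out, double eps);
DECLARE_DISPATCH(my_norm_fn, my_norm_stub);

// --- in native/MyLayerNorm.cpp: the operator owns the stub and calls it ---
DEFINE_DISPATCH(my_norm_stub);

Tensor my_layer_norm_cpu(const Tensor& input, double eps) {
  Tensor out = at::empty_like(input);
  my_norm_stub(kCPU, input, out, eps);  // routes to whichever kernel was registered
  return out;
}

// --- in native/cpu/MyLayerNormKernel.cpp: the kernel registers itself ---
static void my_norm_kernel_impl(const Tensor& input, Tensor& out, double eps) {
  // a real kernel would use AT_DISPATCH_FLOATING_TYPES and vectorized loops here
  out.copy_(input);  // placeholder body
}
REGISTER_DISPATCH(my_norm_stub, &my_norm_kernel_impl);

}} // namespace at::native
```

Under this pattern the per-backend selection happens at the stub, which is why the operator itself can live directly under native/, as Activation.cpp does.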

@ailzhang (Contributor):

Yeah, never mind, it turns out we don't have this separation at all. :P
It is still worth noting that in _cpu and _cuda functions we can freely call at::native:: functions without breaking XLA, but in other, non-dispatched functions we can only call at:: functions.
I've seen people confused about which to use in the past. Maybe we can think about how to organize these better? cc: @gchanan @dzhulgakov
But that's orthogonal to this PR, for sure. Thanks for the reminder!
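
To make the distinction concrete, here is a toy example (function names invented for illustration, not taken from this PR): code outside cpu/ and cuda/ should call through at:: so that the dispatcher, and therefore backends like XLA, sees every inner op, while a _cpu/_cuda function registered for a single backend can call at::native:: directly and skip the dispatch overhead.

```cpp
// Toy illustration of the at:: vs at::native:: distinction (not code from this PR).
#include <ATen/ATen.h>
#include <ATen/NativeFunctions.h>

namespace at { namespace native {

// Non-dispatched ("composite") function living directly under native/:
// every inner op goes through at::, i.e. the dispatcher, so backends such as
// XLA can intercept and override each call.
Tensor fused_bias_relu(const Tensor& x, const Tensor& bias) {
  return at::relu(x + bias);
}

// Backend-specific function (bound to CPU via native_functions.yaml): it only
// ever runs on CPU tensors, so calling at::native:: directly is safe and
// avoids an extra trip through the dispatcher.
Tensor fused_bias_relu_cpu(const Tensor& x, const Tensor& bias) {
  return at::native::relu(x + bias);
}

}} // namespace at::native
```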

@facebook-github-bot (Contributor):

This pull request was exported from Phabricator. Differential Revision: D17939919

Summary:
Pull Request resolved: pytorch#28051

Make layer_norm dispatch from yaml file to fix XLA test

Test Plan: buck test mode/dev-nosan caffe2/test:nn -- "LayerNorm"

Reviewed By: houseroad

Differential Revision: D17939919

fbshipit-source-id: 38c50c96b4494b727d8ed90d983e3a150d0e5b8d
@facebook-github-bot (Contributor):

This pull request was exported from Phabricator. Differential Revision: D17939919

zdevito pushed a commit to zdevito/ATen that referenced this pull request Oct 16, 2019
Summary:
Pull Request resolved: pytorch/pytorch#28051

Make layer_norm dispatch from yaml file to fix XLA test

Test Plan: buck test mode/dev-nosan caffe2/test:nn -- "LayerNorm"

Reviewed By: houseroad

Differential Revision: D17939919

fbshipit-source-id: 384b6a8008dabfc1aaeb0357c1bd195be68f1edb
@facebook-github-bot (Contributor):

This pull request has been merged in 4f1f084.

@xiaomengy deleted the export-D17939919 branch on October 16, 2019 at 17:52.
thiagocrepaldi pushed a commit to thiagocrepaldi/pytorch that referenced this pull request Feb 4, 2020