Conversation

@bdhirsh (Collaborator) commented Apr 19, 2021

This PR updates the build process to use in-tree pytorch codegen, which is getting merged here: pytorch/pytorch#56601 (more details on the codegen are in that PR). Doing so requires a new yaml file, xla_native_functions.yaml.

I also updated the docs to reflect the codegen change. I'm going to save deleting the existing codegen logic for a future PR
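For readers new to the file, here is a rough, hypothetical sketch of what an xla_native_functions.yaml might contain. The field names (backend, cpp_namespace, supported) and the listed ops are illustrative assumptions; the actual schema consumed by the in-tree codegen is defined in pytorch/pytorch#56601.

```yaml
# Hypothetical sketch of xla_native_functions.yaml; field names and ops are
# illustrative, not taken from this PR. The authoritative schema is defined
# by the in-tree codegen (pytorch/pytorch#56601).
backend: XLA                # dispatch key the generated registrations target
cpp_namespace: torch_xla    # C++ namespace the backend kernels live in
supported:                  # operators the backend provides lowerings for
- abs
- add.Tensor
- expand
```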

bdhirsh force-pushed the public_codegen_api2 branch from 133a488 to 0dd4eee on April 19, 2021 20:41
@ailzhang (Contributor) left a comment

LGTM! Thanks @bdhirsh !

@JackCaoG (Collaborator) commented:

@bdhirsh I have a question regarding ops that have different signatures. For example, in #2900, clamp has two different signatures that take min and max as either scalars or tensors. If I only want to lower one of the variants, how should I do that?

@JackCaoG (Collaborator) commented:

Could you also update https://github.com/pytorch/xla/blob/master/OP_LOWERING_GUIDE.md to reflect this new yaml file?

@bdhirsh (Collaborator, Author) commented Apr 21, 2021

> @JackCaoG: I have a question regarding ops that have different signatures. For example, in #2900, clamp has two different signatures that take min and max as either scalars or tensors. If I only want to lower one of the variants, how should I do that?

The operator names come from the in-tree file native_functions.yaml, and follow the convention of being named <name>.<overload_name> when overloads are required, to make them unique without needing the full schema. In the example you linked with clamp, there's a corresponding change in native_functions.yaml that adds the new clamp overload here (the new operator there is named clamp.Tensor).

In general you can look at native_functions.yaml in-tree to see the full names of the operators when you need to figure out which name to add to the yaml. There are already a bunch in the xla_native_functions.yaml file included in this PR (just search for .out or .Tensor in that file).
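Purely as an illustrative sketch (the `supported` field and these particular entries are assumptions for illustration, not the actual contents of this PR's file), overload names in the yaml might look like:

```yaml
# Hypothetical excerpt; entries are examples only. Overload names follow the
# <name>.<overload_name> convention from native_functions.yaml, while base
# names carry no suffix.
supported:
- clamp          # clamp(Tensor self, Scalar? min=None, Scalar? max=None)
- clamp.Tensor   # the Tensor min/max overload discussed in #2900
- add.out        # the out= variant of add
```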

I'll also make sure that's all in the documentation!

> Could you also update https://github.com/pytorch/xla/blob/master/OP_LOWERING_GUIDE.md to reflect this new yaml file?

Yep! I'll let you know when I've updated it so you can take a look at the changes

@bdhirsh (Collaborator, Author) commented Apr 21, 2021

Updated the docs. I'll need to update them further in a later PR, e.g. when I change the file names.

I also unpinned this PR from my feature branch, so CI will keep failing until my other PR merges to master

@JackCaoG (Collaborator) left a comment

Thanks @bdhirsh! This looks great. Could you wait for #2891 to land first (I chatted with @mruberry and the upstream pytorch PR should merge pretty soon) and then update your PR?

I can take care of the rest of the pending op lowering PRs and rebase them.

bdhirsh force-pushed the public_codegen_api2 branch from afa237f to 3addce5 on April 23, 2021 22:50
bdhirsh added a commit to pytorch/pytorch that referenced this pull request Apr 29, 2021:

I fixed `RegistrationDeclarations.yaml` in a previous PR to account for codegen'd composite kernels, but I forgot to make the corresponding change in the new external codegen. This change needs to land before pytorch/xla#2898.

I also moved the function out of `gen.py`, since `RegistrationDeclarations.yaml` isn't really the main use case for the function anymore; we want to kill it eventually.

I also tried leaving the function in `gen.py` initially and calling `from tools.codegen.gen import has_autogenerated_composite_kernel`, but Python didn't like that, and I couldn't figure out why. As a quick attempt to debug, I printed `import tools.codegen; dir(tools.codegen)`, and for some reason `gen` wasn't showing up in the list, even though other subfolders were. This doesn't matter too much, though, since I think moving the function out of `gen.py` is the right move anyway.

Differential Revision: [D28012667](https://our.internmc.facebook.com/intern/diff/D28012667)

[ghstack-poisoned]
bdhirsh force-pushed the public_codegen_api2 branch from 3addce5 to 664bee1 on May 3, 2021 17:48
bdhirsh force-pushed the public_codegen_api2 branch from 664bee1 to 2639e8d on May 10, 2021 12:08
bdhirsh merged commit 32fe908 into master on May 10, 2021
bdhirsh deleted the public_codegen_api2 branch on May 10, 2021 14:42
bdhirsh added a commit that referenced this pull request May 10, 2021
bdhirsh added a commit that referenced this pull request May 10, 2021
bdhirsh restored the public_codegen_api2 branch on May 10, 2021 20:18
bdhirsh added a commit that referenced this pull request May 11, 2021