Tag functions to core IR in native_functions.yaml (#105849)
Summary:
Pull Request resolved: #105849

Based on operator review meetings, tag appropriate functions as part of the Core IR.

[Operator Review Tracking Sheet](https://docs.google.com/spreadsheets/d/1u9jQ-uGlKu-fe9nLy-jS2AIPtpE8sGTmELOFYgKOhXU/edit#gid=0)
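As an illustration of what the tag means downstream (not part of the original summary): once the YAML is regenerated into the dispatcher, the `core` tag should be queryable from Python. A minimal sketch, assuming a build where `torch.Tag.core` exists and `OpOverload` exposes a `.tags` attribute:

```python
import torch

# Sketch only: check whether a few operators touched by this change now carry
# the `core` tag. Assumes torch.Tag.core exists and OpOverload exposes `.tags`.
for op in (torch.ops.aten.cumsum.default,
           torch.ops.aten.embedding.default,
           torch.ops.aten.index_put.default):
    print(f"{op}: core={torch.Tag.core in op.tags}")
```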

Test Plan: Use N3940835 to load the YAML and check updated core op list.
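The notebook referenced above is internal; a rough stand-in for the same check, assuming a local PyTorch checkout and PyYAML, might look like this:

```python
import yaml  # assumes PyYAML is available

# Hypothetical path into a local PyTorch checkout; adjust as needed.
YAML_PATH = "aten/src/ATen/native/native_functions.yaml"

with open(YAML_PATH) as f:
    entries = yaml.safe_load(f)

core_ops = []
for entry in entries:
    tags = entry.get("tags", [])
    # `tags` is either a scalar like `core` or a list like `[core, pointwise]`.
    if isinstance(tags, str):
        tags = [t.strip() for t in tags.split(",")]
    if "core" in tags:
        core_ops.append(entry["func"].split("(")[0])

print(f"{len(core_ops)} ops tagged core")
print("\n".join(sorted(core_ops)))
```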

Reviewed By: mergennachin, kimishpatel, SherlockNoMad

Differential Revision: D47673670

fbshipit-source-id: 307c157fc79fe812b524dc108a4923b2deb33f8a
SS-JIA authored and facebook-github-bot committed Jul 25, 2023
1 parent 3eef86d commit 69b08ba
Showing 1 changed file with 11 additions and 7 deletions.
aten/src/ATen/native/native_functions.yaml (18 changes: 11 additions & 7 deletions)
@@ -1208,7 +1208,7 @@
   variants: function, method
   dispatch:
     CompositeExplicitAutograd: logical_xor
-  tags: pointwise
+  tags: [core, pointwise]

 - func: logical_xor_(Tensor(a!) self, Tensor other) -> Tensor(a!)
   device_check: NoCheck # TensorIterator
@@ -1919,6 +1919,7 @@
   structured_delegate: cumsum.out
   device_check: NoCheck # TensorIterator
   variants: function, method
+  tags: core

 - func: cumsum_(Tensor(a!) self, int dim, *, ScalarType? dtype=None) -> Tensor(a!)
   structured_delegate: cumsum.out
@@ -2194,6 +2195,7 @@
     CompositeExplicitAutograd: embedding_symint
     NestedTensorCPU, NestedTensorCUDA: NestedTensor_embedding
   autogen: embedding.out
+  tags: core

 - func: embedding_backward(Tensor grad, Tensor indices, SymInt num_weights, SymInt padding_idx, bool scale_grad_by_freq, bool sparse) -> Tensor
   dispatch:
@@ -2944,7 +2946,7 @@
   variants: function, method
   dispatch:
     QuantizedCPU: quantized_index
-  tags: dynamic_output_shape
+  tags: [core, dynamic_output_shape]
   # NB: This function is special-cased in tools/autograd/gen_variable_type.py
   # NB: The following functions are declared in aten/src/ATen/templates/TensorBody.h and defined in aten/src/ATen/TensorIndexing.cpp:
   # - Tensor Tensor::index(ArrayRef<TensorIndex> indices)
@@ -3005,6 +3007,7 @@
   variants: function, method
   dispatch:
     CompositeExplicitAutograd: index_put
+  tags: core

 - func: _unsafe_index_put(Tensor self, Tensor?[] indices, Tensor values, bool accumulate=False) -> Tensor
   device_check: NoCheck # delegate to _index_put_impl_ after clone, which leverages TensorIterator
@@ -7866,7 +7869,7 @@
   variants: method, function
   dispatch:
     CompositeExplicitAutograd: bitwise_and
-  tags: pointwise
+  tags: [core, pointwise]

 - func: bitwise_and.Scalar_Tensor(Scalar self, Tensor other) -> Tensor
   device_check: NoCheck # TensorIterator
@@ -7929,7 +7932,7 @@
 - func: bitwise_or.Scalar(Tensor self, Scalar other) -> Tensor
   device_check: NoCheck # TensorIterator
   variants: method, function
-  tags: pointwise
+  tags: [core, pointwise]

 - func: bitwise_or.Scalar_Tensor(Scalar self, Tensor other) -> Tensor
   device_check: NoCheck # TensorIterator
@@ -7992,7 +7995,7 @@
 - func: bitwise_xor.Scalar(Tensor self, Scalar other) -> Tensor
   device_check: NoCheck # TensorIterator
   variants: method, function
-  tags: pointwise
+  tags: [core, pointwise]

 - func: bitwise_xor.Scalar_Tensor(Scalar self, Tensor other) -> Tensor
   device_check: NoCheck # TensorIterator
@@ -9325,7 +9328,7 @@
   variants: method, function
   dispatch:
     CompositeExplicitAutograd: fmod
-  tags: pointwise
+  tags: [core, pointwise]

 - func: fmod_.Scalar(Tensor(a!) self, Scalar other) -> Tensor(a!)
   device_check: NoCheck # TensorIterator
@@ -9433,7 +9436,7 @@
   variants: method, function
   dispatch:
     CompositeExplicitAutograd: remainder
-  tags: pointwise
+  tags: [core, pointwise]

 - func: remainder_.Scalar(Tensor(a!) self, Scalar other) -> Tensor(a!)
   variants: method
@@ -9682,6 +9685,7 @@
   variants: method, function
   dispatch:
     SparseCPU, SparseCUDA: any_sparse
+  tags: core

 - func: any.all_out(Tensor self, *, Tensor(a!) out) -> Tensor(a!)
   device_check: NoCheck
