1 change: 1 addition & 0 deletions .gitignore
@@ -11,6 +11,7 @@ torch_xla/csrc/version.cpp
*.so

# Files autogenerated by scripts/generate_code.sh
+torch_xla/csrc/aten_xla_type.h
torch_xla/csrc/aten_xla_type_default.h
torch_xla/csrc/aten_xla_type_default.cpp

13 changes: 7 additions & 6 deletions OP_LOWERING_GUIDE.md
@@ -10,13 +10,14 @@ You should follow the instructions in [here](https://github.com/pytorch/xla/blob
You can find the definition of the C++ ATen operations in [native_functions.yaml](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/native_functions.yaml). After you build PyTorch/XLA from source, you will also find our default implementation (forward to PyTorch native CPU) in `xla/torch_xla/csrc/aten_xla_type_default.h/cpp`. PyTorch operations can usually be mapped to the [PyTorch tensor API](https://pytorch.org/docs/stable/index.html) easily. If that is not the case, searching for the PyTorch native implementation under the [PyTorch repo](https://github.com/pytorch/pytorch) is recommended. The goal is to lower the PyTorch operations into a sequence of XLA operations defined [here](https://www.tensorflow.org/xla/operation_semantics).

## File structure
-All files mentioned below live under the `xla/torch_xla/csrc` folder
+All files mentioned below live under the `xla/torch_xla/csrc` folder, with the exception of `xla_native_functions.yaml`.

-1. `aten_xla_type_default.h/.cpp` are auto-generated by [this script](https://github.com/pytorch/xla/blob/master/scripts/gen.py) and contain our default implementation of the PyTorch operations. Functions in here will be used if a lowering is not explicitly defined in `aten_xla_type.cpp`.
-2. `aten_xla_type.h/.cpp` are the entry points from PyTorch into the pytorch_xla world. We need to copy operation declarations from `aten_xla_type_default.h` to here and construct an `XLATensor` using the input `at::Tensor` and other parameters. The resulting `XLATensor` needs to be converted back to an `at::Tensor` before returning to the PyTorch world.
-3. `tensor.h` contains the `XLATensor` declarations. These declarations are a one-to-one mapping of the `at::Tensor` methods we declared in `aten_xla_type.h`.
-4. `tensor_methods.cpp` contains the implementations of the `XLATensor` methods declared in `tensor.h`. We construct the corresponding `ir::op` from the parameters' `ir::Value` and wrap it inside an `XLATensor`. IR stands for intermediate representation.
-5. The `ops/` directory contains all `ir::ops` declarations and definitions. Smaller nodes can be put in `ops/ops.h/.cpp`. More complicated nodes can be put into a separate file. All ops inherit from `ir::ops::Node` and provide a way to lower input `ir::Value`s to a sequence of `XlaOp`s.
+1. `xla_native_functions.yaml` contains the list of all operators that are lowered. Each operator name must directly match a PyTorch operator listed in [native_functions.yaml](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/native_functions.yaml). This file serves as the interface for adding new XLA operators, and is an input to PyTorch's [codegen machinery](https://github.com/pytorch/pytorch/blob/master/tools/codegen/gen_backend_stubs.py). It generates the three files below: `aten_xla_type.h`, `aten_xla_type_default.h`, and `aten_xla_type_default.cpp`.
+2. `aten_xla_type.h/.cpp` are the entry points from PyTorch into the pytorch_xla world. `aten_xla_type.h` is auto-generated from a combination of `xla_native_functions.yaml` and the PyTorch core `native_functions.yaml` file, and contains declarations for the kernels that need to be defined in `aten_xla_type.cpp`. The kernels written here need to construct an `XLATensor` from the input `at::Tensor` and other parameters, and the resulting `XLATensor` needs to be converted back to an `at::Tensor` before returning to the PyTorch world (a hedged sketch of this pattern follows the list).
+3. `aten_xla_type_default.h/.cpp` are also auto-generated, and contain our default implementations of the PyTorch operations, which simply fall back to the underlying CPU implementation. Functions in here will be used if a lowering is not explicitly defined in `xla_native_functions.yaml` and `aten_xla_type.cpp`.
+4. `tensor.h` contains the `XLATensor` declarations. These declarations are a one-to-one mapping of the `at::Tensor` methods we declared in `aten_xla_type.h`.
+5. `tensor_methods.cpp` contains the implementations of the `XLATensor` methods declared in `tensor.h`. We construct the corresponding `ir::op` from the parameters' `ir::Value` and wrap it inside an `XLATensor`. IR stands for intermediate representation.
+6. The `ops/` directory contains all `ir::ops` declarations and definitions. Smaller nodes can be put in `ops/ops.h/.cpp`. More complicated nodes can be put into a separate file. All ops inherit from `ir::ops::Node` and provide a way to lower input `ir::Value`s to a sequence of `XlaOp`s.
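
To make the flow through items 2, 5, and 6 concrete, here is a minimal sketch of a hand-written lowering for a unary op. The helper names (`bridge::GetXlaTensor`, `bridge::AtenFromXlaTensor`, `XLATensor::CreateFrom`, `ir::ops::Abs`) are assumptions modeled on the patterns this list describes; copy the exact signatures from the generated `aten_xla_type.h` when writing a real kernel.

```cpp
// Sketch only -- an illustrative lowering of `abs`, following the flow above.
// Helper and class names are assumptions based on the descriptions in this
// guide and may not match the repo exactly.

// aten_xla_type.cpp (item 2): the PyTorch-facing kernel. It unwraps the
// incoming at::Tensor into an XLATensor, dispatches to the tensor method,
// and wraps the result back into an at::Tensor.
at::Tensor AtenXlaType::abs(const at::Tensor& self) {
  return bridge::AtenFromXlaTensor(
      XLATensor::abs(bridge::GetXlaTensor(self)));
}

// tensor_methods.cpp (item 5): builds the IR node for the op. The
// ir::ops::Abs node (declared under ops/, item 6) consumes the input's
// ir::Value and knows how to lower itself to a sequence of XlaOps.
XLATensor XLATensor::abs(const XLATensor& input) {
  return input.CreateFrom(ir::ops::Abs(input.GetIrValue()));
}
```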

## Unit Test
Our CircleCI runs the PyTorch native Python tests for every change and every day. Those tests will use the XLA implementation if we provide a lowering. We usually don't need to add additional Python tests for PyTorch/XLA unless we want to verify some XLA behaviors (like dynamic shape) or we skipped the PyTorch native test for some reason. If one is required, the Python test should be added to `xla/test/test_operations.py`. We also need to add C++ tests in `xla/test/cpp/test_aten_xla_tensor.cpp`. These tests should call the PyTorch C++ API and verify that our implementation yields the same result as the PyTorch native implementation. We also need to verify that the XLA implementation is called when the tensor is an XLA tensor, by checking the `aten::op` and `xla::op` counters (a rough sketch follows).
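
As a rough template for the counter checks just described, a C++ test in `xla/test/cpp/test_aten_xla_tensor.cpp` tends to look like the sketch below; the harness helpers (`ForEachDevice`, `CopyToDevice`, `AllClose`, `ExpectCounterNotChanged`, `ExpectCounterChanged`) are assumed from the existing test file rather than guaranteed APIs.

```cpp
// Sketch only -- a typical lowering test, modeled on the existing tests in
// test_aten_xla_tensor.cpp. Harness helper names are assumptions.
TEST_F(AtenXlaTensorTest, TestAbs) {
  torch::Tensor a = torch::rand({2, 2});
  torch::Tensor b = torch::abs(a);  // PyTorch native reference result
  ForEachDevice([&](const torch::Device& device) {
    torch::Tensor xla_a = CopyToDevice(a, device);
    torch::Tensor xla_b = torch::abs(xla_a);  // routed through our lowering
    AllClose(b, xla_b);                       // must match the native result
  });
  // No aten::* counter should move, i.e. no CPU fallback was taken ...
  ExpectCounterNotChanged("aten::.*", cpp_test::GetIgnoredCounters());
  // ... and the counter for our lowering should have fired.
  ExpectCounterChanged("xla::abs", cpp_test::GetIgnoredCounters());
}
```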
9 changes: 3 additions & 6 deletions scripts/generate_code.sh
@@ -7,9 +7,6 @@ if [ -z "$PT_INC_DIR" ]; then
PT_INC_DIR="$PTDIR/build/aten/src/ATen"
fi

python "$CDIR/gen.py" \
--gen_class_mode \
--output_folder="$XDIR/torch_xla/csrc" \
"$XDIR/torch_xla/csrc/aten_xla_type.h" \
"$PTDIR/build/aten/src/ATen/RegistrationDeclarations.h" \
"$PT_INC_DIR/Functions.h" \
python "$PTDIR/tools/codegen/gen_backend_stubs.py" \
--output_dir="$XDIR/torch_xla/csrc" \
--source_yaml="xla_native_functions.yaml"\