* Migrate torch_xla to bazel
* Remove sandboxing for coverage execution
* Add missing files
* Remove build_torch_xla_libs mentions.
* Improve cache hits
* Don't separately require configuring test building; building tests is free now.
* format python
* fix run_tests.sh
* Pass test arguments via bazelrc
* Merge tests into a single target due to grpc address in use issues.
* Make testenv consistent for cache hits
* Remove abi logic, it's all in setup.py now
* Write both to log file and to output
* Update deprecated property
* add libpython to libs
* Change test filter flag
* Comment out log file for debugging
* Minimize downloads from cache
* Migrate to new bazel flag for exec properties
* Cache silo for CI (see the flag sketch after this list)
* set python version so that python3-config is found and used on circleci
* use ci cache silos when building
* simplify the silo flag
* improve silos
* Add conda init for tests
* format py
* hide the creds
* remove conda activation
* Setup conda library path
* Try improving conda setup
* Move the setup into bashrc
* common
* revert to old cache silo flag that allows overrides
* format py
* Revert to old style of specifying remote exec params
* Add bes timeout
* remove default silos key
* Rebase on updates
* pass in ld_lib_path to tests
* Propagate XLA_EXPERIMENTAL to bazel
* Support for cuda in tests
* Pass the cuda flag to cpp tests.
* remove cuda from deps of ptxla test since it's already in xla_client linked via xla_client:computation_client
* Fix multiconfiguration issues for tests
* Don't trim the test config; test_filter remains
* Copy the codegen directory to get the source in docker
* Add libtpu to the wheel, and link accordingly
* Include build extensions; that redefines some distutils classes. Python sucks.
* Update to cloud builder docker image and pass in the remote bazel flags
* Setup silo and remote cache for cloudbuild
* Set cache silo even with default creds
* fix debug flag
* Allow CXX_ABI flag to be set externally.
* Set instrumentation filter to avoid tests
* Document bazel
* Users may often be root, so make sure the docs are clear
* format py
* Remove gen_lazy_tensor; now under codegen/
* Update documentation
* add coverage script
* Update docs with remote bazel role in gcp
* Update bazel docs
* Enable remote cache for bazel in ansible.
* Propagate default credentials to docker
* Remove unused rpath settings
* Upstream xla native functions
* Don't make the build DEBUG just for coverage.
* Avoid waiting for bes, which can be flaky
* Remove build-only testing
* Update xla native functions yaml
* Adjust cpp coverage stuff
* Use remote build for building tests.
* Debug mode
* Allow building tests
* Pass the TPU config to bazel tests.
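Several commits above revolve around Bazel remote caching, cache silos, test environment propagation, and coverage instrumentation. As a rough, non-authoritative sketch of how such flags fit together (the cache endpoint, silo key, test target, and filter pattern below are placeholders, not the values from this PR):

```sh
# Hypothetical illustration of the Bazel flags these commits reference.
# Endpoint, silo key, target, and filter pattern are made-up placeholders.
bazel test //test/cpp:main \
  --remote_cache=grpcs://remotebuildexecution.googleapis.com \
  --google_default_credentials \
  --remote_default_exec_properties=cache-silo-key=ci \
  --bes_timeout=600s \
  --test_env=LD_LIBRARY_PATH \
  --test_env=XLA_EXPERIMENTAL \
  --instrumentation_filter="//torch_xla,-//test"
```

In practice flags like these would live in a `.bazelrc` (e.g. `test --test_env=...`) so CI and local runs stay consistent, which is what the bazelrc commits above describe.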
CODEGEN_MIGRATION_GUIDE.md (6 additions, 6 deletions)
```diff
@@ -30,14 +30,14 @@ All file mentioned below lives under the `xla/torch_xla/csrc` folder, with the e
   - Contains all the op XLA supported today. Most of the ops are under the supported category, the goal of this document is to move most of the ops to the full_codegen category.
 - xla/scripts/gen_lazy_tensor.py
   - Provides necessary XLA versions of the codegen Codegen class and calls the upstream codegen API.
-- xla/torch_xla/csrc/generated/XLANativeFunctions.cpp
-  - Result of the full_codegen column of the xla/xla_native_functions.yaml. The op function defined here will implement the op declared in the XLANativeFunctions.h. Each op will take at::tensor and return another at::tensor wrapped around a XLATensor.
-- xla/torch_xla/csrc/generated/LazyIr.h
-  - Result of the full_codegen column of the xla/xla_native_functions.yaml. Defines the IR that is used to construct the full_codegen ops.
+- xla/torch_xla/csrc/XLANativeFunctions.cpp
+  - Result of the full_codegen column of the xla/codegen/xla_native_functions.yaml. The op function defined here will implement the op declared in the XLANativeFunctions.h. Each op will take at::tensor and return another at::tensor wrapped around a XLATensor.
+- xla/torch_xla/csrc/LazyIr.h
+  - Result of the full_codegen column of the xla/codegen/xla_native_functions.yaml. Defines the IR that is used to construct the full_codegen ops.
 
 ### PyTorch/XLA Old Op Lowering files
 - xla/torch_xla/csrc/generated/aten_xla_type.cpp
-  - Manually implements ops defined in xla/xla_native_functions.yaml. Will be replaced by XLANativeFunctions.cpp
+  - Manually implements ops defined in xla/codegen/xla_native_functions.yaml. Will be replaced by XLANativeFunctions.cpp
 - xla/torch_xla/csrc/generated/tensor.h
   - Defines XLATensor class and XLATensor method declarations. These declarations are usually a one to one mapping of the at::Tensor nodes we declared in XLANativeFunctions.h. XLATensor method will be removed for full_codegen ops
@@ -79 +79 @@ ### 2. Codegen the op and inspect the generated file
-Find the op in `xla/xla_native_functions.yaml` and move it to the full_codegen column and run `python setup.py install` under xla directory again. The build will fail (reason explained later in this guide) but you can still see the generated file. The code snippets below uses `abs` as an example.
+Find the op in `xla/codegen/xla_native_functions.yaml` and move it to the full_codegen column and run `python setup.py install` under xla directory again. The build will fail (reason explained later in this guide) but you can still see the generated file. The code snippets below uses `abs` as an example.
```
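The changed paragraph above describes the codegen workflow. A minimal sketch of those steps follows; the `abs` op and the `python setup.py install` command come from the guide itself, while the `grep` inspection step is an illustrative assumption:

```sh
# Workflow sketch from the guide: move an op (e.g. `abs`) into the
# full_codegen section of xla/codegen/xla_native_functions.yaml, then:
cd xla
python setup.py install   # expected to fail at this stage, per the guide
# The generated sources can still be inspected (grep step is illustrative):
grep -n "abs" torch_xla/csrc/XLANativeFunctions.cpp torch_xla/csrc/LazyIr.h
```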