[TCP] Merge main into mlir_tcp #2310
Merged
Conversation
Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
The GitHub action for creating the PR expects that either the changes are not committed (in which case it commits them with the specified commit message) or that the commit exists but that it is also pushed to remote. Prior to this patch, we created the commit but did not push it to remote, causing failures. This patch leaves the changes uncommitted so that they're committed and pushed to remote as part of the PR creation.
Co-authored-by: Sean Silva <silvasean@google.com>
- torch version: 2.1.0.dev20230512
- torch commit hash: 1a3d3669efa55e3360060c9b81f87900ae0c906c
- torchvision version: 0.16.0.dev20230512

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
This patch, by itself, doesn't fix caching on Windows, but once a new release of ccache is available, caching for Windows builds should start working again (validated by building ccache from source and using it with LLVM builds).

Ccache rejects caching when either the `/Zi` or `/ZI` flag is used during compilation on Windows, since these flags tell the compiler to embed debug information in a PDB file (separate from the object file produced by the compiler). In particular, our CI builds add the `/Zi` flag, making ccache mark these compiler invocations as uncacheable. But what caused our CI to add debug flags, especially when we specified `-DCMAKE_BUILD_TYPE=Release`? On Windows, unless we specify the `--config Release` flag during the CMake build step, CMake assumes a debug build. So all this while, we had been producing debug builds of torch-mlir for every PR! No wonder it took so long to build the Windows binaries.

The reason for having to specify the configuration during the _build_ step (as opposed to the _configure_ step) of CMake on Windows is that CMake's Visual Studio generators produce _both_ Release and Debug profiles during the CMake configure step (thus requiring a build-time value that tells CMake whether to build in Release or Debug mode). Luckily, on Linux and macOS, the `--config` flag seems to be simply ignored rather than causing build errors. Strangely, based on cursory tests, it seems that on Windows we need to specify the Release configuration both as `-DCMAKE_BUILD_TYPE=Release` and as `--config Release`; dropping either switched my build to a Debug configuration.

Additionally, there is a bug in ccache v4.8 (although it is addressed in trunk) that causes ccache to reject caching if the compiler invocation includes any flag that starts with `/Z`, including `/Zc`, which is added by LLVM's HandleLLVMOptions.cmake and which isn't related to debug info or PDB files.
The next release of ccache should include the fix, which is to reject caching only for `/Zi` and `/ZI` flags and not all flags that start with `/Z`. As a side note, debugging this problem was possible because of ccache's log file, which is enabled by: `ccache --set-config="log_file=log.txt"`.
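The fix described above can be exercised with build commands along these lines — a minimal sketch assuming a source checkout in the current directory and a `build/` output directory (paths are illustrative, not taken from the actual CI configuration):

```shell
# Enable ccache's log file so cache rejections can be diagnosed.
ccache --set-config="log_file=log.txt"

# Configure step: request a Release build. This alone is honored by
# single-config generators (Ninja, Makefiles), but with CMake's
# Visual Studio multi-config generators it is not sufficient.
cmake -S . -B build -DCMAKE_BUILD_TYPE=Release

# Build step: multi-config generators pick the configuration here,
# so --config Release is required on Windows to avoid a Debug build.
# On Linux and macOS the flag is ignored, so the command is portable.
cmake --build build --config Release
```

After a build, searching `log.txt` for "unsupported compiler option" shows which flags (such as `/Zi`) caused ccache to refuse caching.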
This commit adds dtype functions for all the torch ops that did not previously have one and removes the pass `RefineTypes`, since the abstract interpretation library now takes care of all the dtype propagation. All dtype functions added are tested except for:

- `aten.embedding`
- `aten._embedding_bag`
- `aten.embedding_bag`

These functions need a change to the testing framework to allow specifying the actual data inside the tensor used for testing. I will fix this in a follow-up patch.

Co-authored-by: Jiahao Li <liplus17@163.com>
We previously used a fork of the action/cache repository for the PyTorch cache since the actions/cache repo did not support read-only caches. Now that actions/cache supports separate read and write steps, this patch switches back to the actions/cache repo.
llvm#2094)
* [Torch Dialect] require dtype exists when decompose to aten.where.self
* update
Set PyTorch and TorchVision version to nightly release 2023-05-16. Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
…m#2122) Lowering torch operations that allow different compatible data types in their operands to TOSA ends up generating invalid TOSA IR with mixed data types. In the TOSA spec, certain operations (generally element-wise operations) require all operands to have the same data type. This patch adds wrapper functions for those element-wise TOSA ops to perform op creation with type conversion if necessary.
* add empty conversion
* clean up
* add tests

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
* add unbind int
* reformat
* use unpack canonicalize
* address comments
* Empty commit, trigger test
* add ltc blacklist
* clean up
* address comments
* check permute list
* erase in recompose

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
- torch version: 2.1.0.dev20230519
- torch commit hash: 61239df555df02e8c60a2ad63363878a2a57c161
- torchvision version: 0.16.0.dev20230519

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
Since PRs created by the GitHub action bot cannot trigger workflows (and thus build tests), this patch uses the token for a GitHub app that was specifically created for the RollPyTorch action.
- torch version: 2.1.0.dev20230522
- torch commit hash: 871fc7bb76f05c3c487214404f687cf7a6a8e453
- torchvision version: 0.16.0.dev20230522

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
* fix torch_c.to_i64
* restore dialect.cpp
* Empty commit, trigger test
* Empty commit, trigger test
* fix uint case
* address comments
* update error msg
* clean up
* use i64 for ConstantIntOp
* use I64Attr

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
This commit adds ODS for the aten.sign op. Signed-Off-By: Prateek Gupta <prateek.gupta2@cerebras.net>
- torch version: 2.1.0.dev20230523
- torch commit hash: 981d4c2578d10d8a96d173471802fc2812541fb1
- torchvision version: 0.16.0.dev20230523

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
* add split.tensor support + recompose rules
* add e2e test
* address comments
* address comments
* erase op in recomposeOp

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
* Add AtenIndexTensor StableHlo support
* clean up
* Empty commit, trigger test
* try to debug hanging test
* fix segfault
* fix bad include

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
Tested on Ubuntu 23.04 on Ampere Altra instance.
* add uniform stablehlo lowering
* add unit test
* new line
* rm redundant file
* Empty commit, trigger test
* fix include
* address comments

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
- torch version: 2.1.0.dev20230525
- torch commit hash: eb2ef134b4e834a9b8a8b6de86ddd7d2780ce0ac
- torchvision version: 0.16.0.dev20230525

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
- torch version: 2.1.0.dev20230628
- torch commit hash: 94ca800459ebe8cd2bc3a9927a8412d958661634
- torchvision version: 0.16.0.dev20230628

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
Canonicalize aten.to.other to prim.device + prim.dtype + aten.to.device

Co-authored-by: wujiawei.aml <wujiawei.aml@bytedance.com>
- torch version: 2.1.0.dev20230630
- torch commit hash: dc72046b235ac803e3875c23a1784e93b3d4812c
- torchvision version: 0.16.0.dev20230630

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
- torch version: 2.1.0.dev20230701
- torch commit hash: bb3df0bb7c6bce70941199401f6b3550e10cba50
- torchvision version: 0.16.0.dev20230701

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
- torch version: 2.1.0.dev20230701
- torch commit hash: bb3df0bb7c6bce70941199401f6b3550e10cba50
- torchvision version: 0.16.0.dev20230702

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
- torch version: 2.1.0.dev20230704
- torch commit hash: e5472fd3c324c5ecb343884e5399e0227cc30a6c
- torchvision version: 0.16.0.dev20230704

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
- torch version: 2.1.0.dev20230705
- torch commit hash: 758c84d41f55f90f210e6d7d02e05cda4a13c728
- torchvision version: 0.16.0.dev20230705

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
…lvm#2174) This commit adds support for dynamic dimensions in the BroadcastTo op.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
- torch version: 2.1.0.dev20230707
- torch commit hash: 760dafbb05853f5f57f1a6869179df2efbc2cf6b
- torchvision version: 0.16.0.dev20230707

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
- torch version: 2.1.0.dev20230708
- torch commit hash: 3a919e00b8237a76ad6faa6040c00b425a96f1f3
- torchvision version: 0.16.0.dev20230708

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
- torch version: 2.1.0.dev20230709
- torch commit hash: 9b5a84f5443c8e3b9db5511a4f58d727b4fade40
- torchvision version: 0.16.0.dev20230709

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
* remove cpu check
* update dtype

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
- torch version: 2.1.0.dev20230710
- torch commit hash: 69565763c841e4e8d07fd338c9bf6515005b3880
- torchvision version: 0.16.0.dev20230710

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
- torch version: 2.1.0.dev20230711
- torch commit hash: 927dc662386af052018212c7d01309a506fc94cd
- torchvision version: 0.16.0.dev20230711

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
* Add make_fx_tosa variant to end2end tests
* e2e_testing/xfail_sets.py: Add make_fx_tosa xfail for stable
- torch version: 2.1.0.dev20230713
- torch commit hash: fccac344dff905c235681c7eb1b567d45f45edb6
- torchvision version: 0.16.0.dev20230713

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
- torch version: 2.1.0.dev20230714
- torch commit hash: d257917ad4e5bb1b848f7857026191b61efb2294
- torchvision version: 0.16.0.dev20230714

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
As titled.