Merge branch 'main' into ze.zhang/merge_main #2420

Merged
merged 86 commits into llvm:mlir-tcp from ze.zhang/merge_main on Aug 26, 2023

Conversation

zezhang
Collaborator

@zezhang zezhang commented Aug 26, 2023

Update mlir-tcp with the torch-mlir main branch.

silvasean and others added 30 commits July 15, 2023 09:26
- torch version: 2.1.0.dev20230715
 - torch commit hash: 6db8e8b9b7ae2232c3ab0eb7fe19830357695c7d
 - torchvision version: 0.16.0.dev20230715

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
- torch version: 2.1.0.dev20230716
 - torch commit hash: c69b6e5da6f5892c2b2bd5fbf28dd5b568de362f
 - torchvision version: 0.16.0.dev20230716

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
- torch version: 2.1.0.dev20230717
 - torch commit hash: c437a4b1e0da5c00c15c983fecfeedb81b2355f5
 - torchvision version: 0.16.0.dev20230717

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
Add e2e support by adding "tosa-to-scf"
…lvm#2309)

In PyTorch, the `NumberType` is equal to `Union[int, float,
complex]`. However, the abstract interpretation library was treating
the `NumberType` as `Union[int, float]`, resulting in type mismatches
when reifying certain dtype functions. This commit fixes the type
inconsistency by having the abstract interpretation functions take as
an input a `Union[int, float, complex]` for the ops that take
`!torch.number` inputs.
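
A minimal Python sketch of what the widened signature means, using a hypothetical dtype function for an op with a `!torch.number` operand (the name and promotion rule are illustrative, not the actual library code):

```python
from typing import Tuple, Union

import torch

# Hypothetical dtype function for an op with a !torch.number operand.
# The point of the change: the scalar is typed Union[int, float, complex]
# instead of Union[int, float], matching PyTorch's NumberType.
def my_op_dtype(self_rank_dtype: Tuple[int, torch.dtype],
                value: Union[int, float, complex]) -> torch.dtype:
    _rank, self_dtype = self_rank_dtype
    if isinstance(value, complex) and not self_dtype.is_complex:
        return torch.complex64  # complex scalars promote the result dtype
    return self_dtype

print(my_op_dtype((2, torch.float32), 1 + 2j))  # torch.complex64
```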
This can happen when the input comes from an unsupported operator
- torch version: 2.1.0.dev20230718
 - torch commit hash: 5e128c4fa1f1217e30c7179aeb5eb5eb95d4dd70
 - torchvision version: 0.16.0.dev20230718

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
* explicit inliner extension

* fixed import formatting
- torch version: 2.1.0.dev20230719
 - torch commit hash: 82e03ad95768645f27100929366530f5d62deffe
 - torchvision version: 0.16.0.dev20230719

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
[torch-dialect] fix torch.type_as op's folder by decomposing it to prim.dtype + aten.to_dtype
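
At the PyTorch level, the decomposition corresponds to the equivalence sketched below (illustrative Python only; the actual change rewrites the torch-dialect ops `prim.dtype` and `aten.to.dtype`):

```python
import torch

x = torch.randn(3, dtype=torch.float64)
y = torch.randn(3, dtype=torch.float32)

a = x.type_as(y)          # aten.type_as
b = x.to(dtype=y.dtype)   # query the target dtype (prim.dtype) + convert (aten.to.dtype)

assert a.dtype == b.dtype == torch.float32
assert torch.equal(a, b)
```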
* RecomposeComplexOps: Remove dead slice op

* lib/Dialect/Torch/IR/TorchOps.cpp: Fold slice ops even when they are on non-value tensors

* lib/Conversion/TorchToTosa/TorchToTosa.cpp: Fix slice start/end out of range/none

* lib/Dialect/Torch/IR/TorchOps.cpp: AtenSliceTensorOp::fold: Fold slices that go from 0:int_max

* More tests for aten.split.Tensor
- torch version: 2.1.0.dev20230720
 - torch commit hash: a16c87a767b22dbfa9e9435b1efe699db377ebf5
 - torchvision version: 0.16.0.dev20230720

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
The implementation at this place was a remnant of the time when the pipeline was
run only once.
Rely instead on the backend verification, after optimizations have had an
opportunity to resolve some uncertainties (e.g. `!torch.optional`).
It is fine not to check the rank of the indices, because the conversion flattens the index tensor to (1, numElements) before applying tosa::gather anyway, and then reshapes the output tensor to the output shape of the aten.embedding.
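
The reshaping the lowering relies on can be mimicked in plain PyTorch as a sanity check (an illustrative sketch, not the actual TorchToTosa C++ code):

```python
import torch
import torch.nn.functional as F

def embedding_via_flat_gather(weight: torch.Tensor, indices: torch.Tensor) -> torch.Tensor:
    # Mirror the lowering's shape handling: flatten indices to (1, numElements),
    # gather rows, then reshape to the aten.embedding output shape
    # indices.shape + [embedding_dim].
    flat = indices.reshape(1, -1)
    gathered = weight[flat.reshape(-1)]
    return gathered.reshape(*indices.shape, weight.shape[1])

weight = torch.randn(10, 4)
idx = torch.randint(0, 10, (2, 3))
assert torch.equal(embedding_via_flat_gather(weight, idx), F.embedding(idx, weight))
```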
- torch version: 2.1.0.dev20230721
 - torch commit hash: f228c8b8cac3db634516c7101dee077cbaa026ab
 - torchvision version: 0.16.0.dev20230721

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
- torch version: 2.1.0.dev20230722
 - torch commit hash: b5222f140da05e40ac90ff42bd1db6564343daff
 - torchvision version: 0.16.0.dev20230722

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
- torch version: 2.1.0.dev20230723
 - torch commit hash: a060bf3cf05c09906e78d7299efc8184568ea2e1
 - torchvision version: 0.16.0.dev20230723

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
- torch version: 2.1.0.dev20230724
 - torch commit hash: ba1da8199b3077b77a78a78e7f0dad166435182f
 - torchvision version: 0.16.0.dev20230724

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
…lvm#2332)

Doing `module.to('lazy')` only moves the module member tensors to the
device if they are created with `self.register_buffer` or
`self.register_parameter`. Since the `self.tensor` tensor in
`Add_Module` test is currently not created using the `self.register_*`
methods, it is not being moved from CPU to lazy device, which is
causing the test to fail on LTC backend. This commit uses
`self.register_buffer` to fix the test on LTC backend.

This commit also seems to fix the test for torchdynamo.
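
The difference the fix relies on can be seen in a minimal sketch (the class name mirrors the Add_Module test but is otherwise illustrative):

```python
import torch
import torch.nn as nn

class AddModule(nn.Module):
    def __init__(self):
        super().__init__()
        # A plain attribute (self.tensor = torch.ones(2, 2)) is NOT moved by
        # Module.to(...); registering it as a buffer makes .to('lazy'),
        # .to('cuda'), etc. relocate it together with the parameters.
        self.register_buffer("tensor", torch.ones(2, 2))

    def forward(self, x):
        return x + self.tensor

m = AddModule()
print("tensor" in dict(m.named_buffers()))  # True: tracked by Module.to / state_dict
```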
…tatic op (llvm#2338)

By the way, this PR also adds the missing shape function for aten.masked_select.
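
For reference, aten.masked_select always produces a 1-D result whose length depends on the mask contents, which is the behavior its shape function has to model (illustrative PyTorch snippet, not the shape function itself):

```python
import torch

x = torch.tensor([[1, 2], [3, 4]])
mask = torch.tensor([[True, False], [True, True]])
out = torch.masked_select(x, mask)
print(out)        # tensor([1, 3, 4])
print(out.shape)  # torch.Size([3]) -- 1-D, length = number of True entries
```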
* Add support for AvgPool1d

* Update AbstractInterpLibrary

* support avgpool1d in linalg

* refactored code

* fix nit problem
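
The AvgPool1d support added above can be exercised with a small PyTorch example (values are illustrative):

```python
import torch
import torch.nn as nn

pool = nn.AvgPool1d(kernel_size=3, stride=2)
x = torch.randn(1, 4, 10)   # (N, C, L)
y = pool(x)
print(y.shape)              # torch.Size([1, 4, 4]): L_out = (10 - 3) // 2 + 1
```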
I saw tests failing when FileCheck wasn't already built.
vivekkhandelwal1 and others added 28 commits August 8, 2023 21:54
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
- torch version: 2.1.0.dev20230808
 - torch commit hash: c01a41cdec4414d8853c8474ddcaf2bd6990e5c8
 - torchvision version: 0.16.0.dev20230808

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
This commit updates the `llvm-project` and `mlir-hlo` submodules to
commits:

llvm-project: f580901d5d30e37755212f1c09e5b587587fbfeb
mlir-hlo: 503736d156c25022813c51cbdbe3b862d67a6916
Set PyTorch and TorchVision version to nightly release 2023-08-10.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
- torch version: 2.1.0.dev20230811
 - torch commit hash: 422297f87fc25191bb392486c4bb8d25c4785d15
 - torchvision version: 0.16.0.dev20230811

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
When using custom ops, sometimes PyTorch will insert namespaces to the
abstract interpretation function name in the format:
`__torch__.{namespace_1}.{namespace_2}...{op_name}`.  The extra
namespaces are not part of the abstract interpretation function name,
so it needs to be removed before generating the library of MLIR
snippets of abstract interpretation functions. This commit adds
support for removing the namespace information.
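
A minimal sketch of the stripping described above (a hypothetical helper, not the actual torch-mlir implementation):

```python
def strip_namespaces(qualified_name: str) -> str:
    """Drop any leading '__torch__.{namespace}...' prefixes, keeping the final name.

    Hypothetical helper for illustration only.
    """
    return qualified_name.rsplit(".", 1)[-1]

print(strip_namespaces("__torch__.my_pkg.my_mod.custom_op"))  # "custom_op"
```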
- torch version: 2.1.0.dev20230812
 - torch commit hash: c9397a7bc833cdfdf64aa023631ae5e1c7e9cee4
 - torchvision version: 0.16.0.dev20230812

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
- torch version: 2.1.0.dev20230813
 - torch commit hash: 3748ee4a8c4032dac08bd2de0ebf039ad22e0d1e
 - torchvision version: 0.16.0.dev20230813

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
- torch version: 2.1.0.dev20230814
 - torch commit hash: 53551b5c87ca582d71d4bbaf82050d05c3c2f534
 - torchvision version: 0.16.0.dev20230814

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
…nually generating it (llvm#2344)

* [Torch Dialect] replace none-index in aten.Index.Tensor's param by manually generating it
Co-authored-by: Jiawei Wu <wujiawei.aml@bytedance.com>
Co-authored-by: Jianzhe Xiao <jianzhe.xiao@bytedance.com>

* minor typo fix

* add new failed e2e tests for ltc

* fix typo

* Address comments

* Add more e2e tests

* add failed e2e tests for LTC

* address comments

* remove decomposition for AtenIndexTensorHackedTwinOp
- torch version: 2.1.0.dev20230815
 - torch commit hash: e4d5143f8c73014521f44c3e9b46c642a300dd2f
 - torchvision version: 0.16.0.dev20230815

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
This commit updates the `llvm-project` and `mlir-hlo` submodules to
commits:

llvm-project: a3f2751f782f3cdc6ba4790488ec20163a40ac37
mlir-hlo: 97c7e4b4506c3a2441c923e592833f45da439009

Changes made:

- Rename `getSuccessorEntryOperands` with `getEntrySuccessorOperands`
and remove `operands` from
`getSuccessorRegions` (https://reviews.llvm.org/D157506)
- Make `TypeConverter` a `const` (https://reviews.llvm.org/D157601)
- torch version: 2.1.0.dev20230816
 - torch commit hash: 3af011b858f5e5c40fd8e9d41fa7f31a928b3b47
 - torchvision version: 0.16.0.dev20230816

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
- torch version: 2.1.0.dev20230817
 - torch commit hash: 3522f2a7b7f73e928a8366cb7bd62ab3883dbe75
 - torchvision version: 0.16.0.dev20230817

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
* [TOSA] Fix conversion for depthwise convolutions

* Add e2e tests for depthwise and grouped convolutions

Co-authored-by: Lucas Camphausen <lucas.camphausen@iml.fraunhofer.de>
…m#2403)

Sean has decided to move on to other ventures and has requested that I help him disengage by resuming top-level accountability for the project.
- torch version: 2.1.0.dev20230819
 - torch commit hash: 668af075012c0857053a7cdf7ca764bb3569c6f1
 - torchvision version: 0.16.0.dev20230819

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
- torch version: 2.1.0.dev20230820
 - torch commit hash: 4ce227bfb953d1f64c4d86cc913144ee2a210e57
 - torchvision version: 0.16.0.dev20230820

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
* LTC/TorchMLIR multi-output operations support

* Update torch-mlir jit lowering to support ops with dynamic number of outputs

* Added support for aten::split_copy, aten::split_with_sizes_copy

* Fix native function for aten::split; cleanup code

* Fix TorchMlirTensorList lowering

* Remove xfails
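
For context, split-style ops return a variable number of outputs, which is what the dynamic-output-count support above has to handle; the plain (non-copy) variants behave as follows (illustrative PyTorch; the commit targets the `*_copy` counterparts):

```python
import torch

x = torch.arange(10)

parts = torch.split(x, 3)          # 4 chunks of sizes 3, 3, 3, 1
sized = torch.split(x, [2, 3, 5])  # explicit sizes, like split_with_sizes
print([p.shape for p in parts])
print([p.shape for p in sized])
```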
* [Stablehlo Dialect] fix lowering bn inference with mixed types

* update
* [LTC] Add shape_inference_(add|uniform)

* Add torch.multinomial op.

* Update ods gen; add normal_functional and erfinv ops support

* New TorchMLIR ops: clamp_min.Tensor, clamp_max.Tensor, xlogy, binary_cross_entropy, log_sigmoid_forward, sigmoid_backward, cosine_embedding_loss, scatter.reduce

* Improve the shape inference logic of whereOp

- Infer the result tensor according to the broadcasting semantics

Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>

* Added aten::sgn

* Add shape inference logic for hardtanh_backward op

* Added new Torch-MLIR ops

Co-authored-by: GlebKazantaev <gleb.nnstu@gmail.com>

* Add support for elu lowering

* Add support for elu_backward lowering

* Support fmod, remainder, and floor_divide

Emit generated op defs for the remainder.Tensor and fmod.Tensor

Add shape inference implementations for remainder.Scalar, fmod.Scalar
and floor_divide.Tensor

* Add shape inference logic for im2col

- torch.nn.Unfold gets decomposed into im2col

Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>

* Add aten::eye and aten::eye.m support

* Add tracing for linalg_qr

* Update GeneratedTorchOps.td

* Update xfails

* Fix unbound variable issue in torch_ods_gen

---------

Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>
Co-authored-by: Mark Browning <mark@cerebras.net>
Co-authored-by: zihaoc-cerebras <zihao.chen@cerebras.net>
Co-authored-by: rahul shrivastava <rahul.shrivastava@cerebras.net>
Co-authored-by: Gokul Ramakrishnan <gokul.ramakrishnan@cerebras.net>
Co-authored-by: glebk-cerebras <111300564+glebk-cerebras@users.noreply.github.com>
Co-authored-by: Behzad Abghari <behzad.abghari@gmail.com>
Co-authored-by: Ahmed Elkoushy <ahmed.elkoushy@cerebras.net>
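
Regarding the whereOp shape-inference item in the list above, the result shape follows PyTorch's broadcasting semantics across all three operands, e.g. (illustrative check):

```python
import torch

cond = torch.rand(4, 1) > 0.5
x = torch.randn(1, 5)
y = torch.randn(4, 5)

out = torch.where(cond, x, y)
# The result shape is the broadcast of all three operand shapes:
assert out.shape == torch.broadcast_shapes(cond.shape, x.shape, y.shape) == (4, 5)
```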
This way, we can keep CI green without being forced to ignore _all_
errors that arise in stable PyTorch builds
@zezhang zezhang merged commit 275697d into llvm:mlir-tcp Aug 26, 2023
3 checks passed
@zezhang zezhang deleted the ze.zhang/merge_main branch August 26, 2023 00:57