Merge main into mlir-tcp #2465

Merged
52 commits merged from sambhav/upgrade_sep_14 into llvm:mlir-tcp on Sep 15, 2023

Conversation

sjain-stanford (Member) commented on Sep 14, 2023

Merges 3d974ed into mlir-tcp, resolves conflicts and fixes remaining mlir-hlo references in StablehloToTcp.

The corresponding hashes are:

stablehlo: https://github.com/openxla/stablehlo/tree/77a59815a82b34f7b08ed2d42a711d9920682d0e
llvm: https://github.com/llvm/llvm-project/tree/4acc3ffbb0af5631bc7916aeff3570f448899647

Note:

CI was stuck indefinitely on the TCP e2e integration testing step, specifically on ElementwiseMulTensorComplexModule_basic, due to the torch.complex64 type, which we don't support yet (and which the Tcp dialect's verifier also doesn't catch). This test is not in TCP_PASS_SET, so it is expected to fail. In this case, however, it did not fail: it hit the llvm_unreachable below but just stayed stuck indefinitely and eventually timed out at 6 hours.

Compiling ElementwiseMulTensorComplexModule_basic...
unsupported element type in createLinalgPayloadForElementwiseOp for tcp.mul
UNREACHABLE executed at /main_checkout/torch-mlir/externals/llvm-external-projects/torch-mlir-dialects/lib/Conversion/TcpToLinalg/Elementwise.cpp:178!

So for now I changed the type constraints for Tcp_MulOp in TcpOps.td from Tcp_Tensor to Tcp_FloatOrIntTensor, which causes this test to fail as expected.
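For context, here is a minimal, hedged sketch of the kind of element-type dispatch a Linalg payload builder performs. It is not the actual torch-mlir code; the helper name createMulPayload is illustrative only.

// Hedged sketch, not the actual torch-mlir implementation: a complex element
// type (e.g. from torch.complex64) falls through every handled case and
// reaches llvm_unreachable while lowering tcp.mul to a linalg payload.
#include "llvm/Support/ErrorHandling.h"
#include "mlir/Dialect/Arith/IR/Arith.h"
#include "mlir/IR/Builders.h"
#include "mlir/IR/BuiltinTypes.h"

static mlir::Value createMulPayload(mlir::OpBuilder &b, mlir::Location loc,
                                    mlir::Type elemTy, mlir::Value lhs,
                                    mlir::Value rhs) {
  if (llvm::isa<mlir::FloatType>(elemTy))
    return b.create<mlir::arith::MulFOp>(loc, lhs, rhs);
  if (llvm::isa<mlir::IntegerType>(elemTy))
    return b.create<mlir::arith::MulIOp>(loc, lhs, rhs);
  // Complex (and any other unhandled) element types end up here; tightening
  // the Tcp_MulOp operand types in TcpOps.td to Tcp_FloatOrIntTensor rejects
  // such programs at verification time instead of during lowering.
  llvm_unreachable(
      "unsupported element type in createLinalgPayloadForElementwiseOp");
}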

Vremold and others added 30 commits August 27, 2023 21:56
…blehlo (llvm#2413)

* [Torch Dialect] emit aten.rand op and add converter to stablehlo

* add failed tests for torchdynamo backend

* add failed test for linalg backend
* fix value semantic return

* address comments

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
* impl aten.min op

* remove extraneous test
* Support brevitas custom op (llvm#2320)

* f16 change for brevitas

* Adapt the change of brevitas quant custom op name

* Add unit tests

* Make brevitas conversions isolated

* Address the comments

---------

Co-authored-by: dan <danimal197@gmail.com>
* update

* update

* update

* update

* update

* update

* update
…lvm#2410)

* Tensor[]? support operands type support using partial codegen

* aten.index.Tensor support via partial codegen

* Add torch.index_put tracing support

* Added optional tensor list type support for LTC/TorchMLIR lowering

* Added comments

Co-authored-by: Gleb Kazantaev <gleb.kazantaev@cerebras.net>
Set PyTorch and TorchVision version to nightly release 2023-08-30.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
 - torch version: 2.1.0.dev20230831
 - torch commit hash: b5b99fe13b890232bb61155a46239922661f4695
 - torchvision version: 0.16.0.dev20230831
Set PyTorch and TorchVision version to nightly release 2023-09-01.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
* impl decomposition for aten.rand

* remove stablehlo conversion for aten.rand
* view_as_real test case, allow dtype in testutils.randn

* abstract python upstream func implemented

* fixed upstream dtype func, implemented view_as_real backend op

* formatted AtenViewAsRealOp, removed change in e2etest/framework

* removed test suit from reshape_like.py, because it's moved to basic.py

* implemented C-API wrapper for mlirComplexF128 type

* fixed torch.complex dtype width in MLIR and Torch MLIR, deleted float16 dtype dict

* Changed IR input of aten fft_fft unit test

* code refactored

* code refactored and fixed ci test

* refactored: removed white spaces, and rolled back to having both input/output affine expr

* refactored: deleted output affine expr to reduce redundancy

* xfail ltc backend

* removed ComplexImag and ComplexReal from torchdynamo xfail set

* copied and pasted from main branch as there's no change to be made in this file

* refactored abstract_interp_lib_gen.py

* refactored: torchtypes.td, formatted, removed commented out code
* [Torch Dialect] support aten.split_with_sizes

* update
…lvm#2434)

* add ScalarImplicitOp's reverter to stablehlo backend

* add new passed test case for stablehlo backend
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
Uses the existing reduction codepath, adding the modifications and branches required for prod alongside.
Found with a more strict set of warning flags on GCC 9.
Avoids unsupported pragma warning on GCC.
* emit aten.__or__TensorOp

* bug fix

* remove convert to stablehlo

* code style refinement
vivekkhandelwal1 and others added 20 commits September 6, 2023 22:36
Set PyTorch and TorchVision version to nightly release 2023-09-06.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
- torch version: 2.2.0.dev20230908
 - torch commit hash: 806d1a871ddfd2d38e1791489892009feaec8425
 - torchvision version: 0.17.0.dev20230908

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
- torch version: 2.2.0.dev20230909
 - torch commit hash: 11d2c766f14dae98296056ee16827ffbd6a4d509
 - torchvision version: 0.17.0.dev20230909

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
Set PyTorch and TorchVision version to nightly release 2023-09-11.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
* implement aten.new_full

* remove extraneous tests
…llvm#2457)

* implemented e2e test case, shape, dtype func

* AtenEmptyStrided decompose op implemented

* xfailed test module in ltc
Corresponding commits:

* mlir-hlo: 16886a108eff5197f816ca0f1950cc5ff1b078d9
* stablehlo: 77a59815a82b34f7b08ed2d42a711d9920682d0e
* llvm-project: 4acc3ff

* Adapt to ByteCodeOpInterface changes.
* Adapt to RegionBranchPoint changes: https://reviews.llvm.org/D159116
* Adapt inferReturnTypes to get the value from properties.
* Adapt invalid.mlir to properties syntax
* [TOSA] Align with custom assembly format change.
* [TOSA] handle change of axis to int32 type
* [TOSA] Restore improper convert to i32

Landing with Windows broken (it cannot be fixed because of the way the mlir-hlo dep is inserted). Will follow up with an untangling.
---------

Co-authored-by: TatWai Chong <tatwai.chong@arm.com>
Co-authored-by: Eric Kunze <eric.kunze@arm.com>
We just have to do this: I ran into an issue today where I needed to make a one-line patch to stablehlo to work around a compiler issue, and it is completely unapparent how to do so, given that the mlir-hlo repo is a read-only export and sits at the tail end of a multi-week integration chain from the open-source stablehlo repo.

We've discussed this often enough and gotten a +1 from everyone that they are OK with taking the e2e testing hit if it becomes necessary: it is necessary, as the current situation is unmanageable.

Looking at it, I expect it wouldn't actually be very difficult to build a little runner binary out of the stablehlo interpreter and call it as a subprocess in order to get the testing coverage back. I leave that as an exercise to the users of this part of the stack and recommend following the breadcrumbs from the deleted python/torch_mlir_e2e_test/stablehlo_backends/linalg_on_tensors.py file and the main.py changes.

Note that I am pointing us at a stablehlo fork for the moment until it is apparent that we don't need to carry any local patches to it. We can update this in a few days if everything is clear.
…lvm#2461)

At some point in the past month, stablehlo gained a number of patches that implement a non-trivial bit of threaded reference code. It fails to compile on Windows in pretty catastrophic ways.

But this isn't the main problem: because of the way the MLIR CMake macros are used, if we include stablehlo before our code, we end up building the whole project, whether it is needed or not.
Set PyTorch and TorchVision version to nightly release 2023-09-13.
Ref: pytorch/pytorch@464f9c3

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
… (redo PR) (llvm#2459)

This is the same PR as llvm#2457; I accidentally thought the review was already done and merged that one (since reverted).

Add decompose empty_strided op.
Referring to llvm#1776, this decomposition only supports default stride values, because when accessing or indexing into the tensor, the indices are determined by the strides.
MLIR does not support this implicitly; it assumes the strides are the defaults while iterating over the tensor.
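For illustration, here is a minimal hedged C++ sketch of what "default" stride values mean for a given shape; the helper name defaultStrides is ours and not an existing torch-mlir API:

// Hedged illustration, not torch-mlir code: the default strides of a shape
// are the contiguous, row-major strides, i.e. strides[i] is the product of
// shape[i+1..n-1]. The empty_strided decomposition assumes exactly these.
#include <cstdint>
#include <vector>

static std::vector<int64_t> defaultStrides(const std::vector<int64_t> &shape) {
  std::vector<int64_t> strides(shape.size(), 1);
  for (int64_t i = static_cast<int64_t>(shape.size()) - 2; i >= 0; --i)
    strides[i] = strides[i + 1] * shape[i + 1];
  return strides;
}

// Example: defaultStrides({2, 3, 4}) == {12, 4, 1}; any other strides would
// describe a non-contiguous layout, which the decomposition does not honor.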
zezhang (Collaborator) left a comment


Thanks for working on this!

sjain-stanford (Member, Author) commented on Sep 14, 2023

Note to myself:

Once CI is clean, AVOID "Squash and merge" in the GitHub UI; instead, do a git merge locally and push to upstream mlir-tcp. This is needed since this repo doesn't allow a regular merge and we'd like to preserve the upstream commit history.

…ls the TCP e2e integration test for ElementwiseMulTensorComplexModule_basic...
sjain-stanford merged commit 4d90ab4 into llvm:mlir-tcp on Sep 15, 2023
3 checks passed
sjain-stanford deleted the sambhav/upgrade_sep_14 branch on November 28, 2023 at 20:11