
Conversation

@sjarus (Collaborator) commented Jan 24, 2022

Refactor matmul into separate class and derive variants:

  • matmul
  • mm, bmm
  • linear

Signed-off-by: Suraj Sudhir suraj.sudhir@arm.com
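Roughly, the shape of the refactor (a simplified sketch; class and helper names here are illustrative rather than the exact code):

```cpp
#include "mlir/Transforms/DialectConversion.h"

using namespace mlir;

// One base conversion pattern owns the shared matmul lowering; thin
// subclasses (matmul, mm/bmm, linear) differ only in how they produce
// the two matrices. Names are illustrative.
template <typename AtenOpT>
class ConvertAtenMatMulBaseOp : public OpConversionPattern<AtenOpT> {
public:
  using OpConversionPattern<AtenOpT>::OpConversionPattern;
  using OpAdaptor = typename AtenOpT::Adaptor;

  // Variants override this to extract lhs/rhs; e.g. the linear variant
  // transposes the weight before handing it to the shared lowering.
  virtual LogicalResult readMatMulInputs(AtenOpT op, OpAdaptor adaptor,
                                         ConversionPatternRewriter &rewriter,
                                         Value &lhs, Value &rhs) const = 0;

  LogicalResult
  matchAndRewrite(AtenOpT op, OpAdaptor adaptor,
                  ConversionPatternRewriter &rewriter) const override {
    Value lhs, rhs;
    if (failed(readMatMulInputs(op, adaptor, rewriter, lhs, rhs)))
      return rewriter.notifyMatchFailure(op, "unsupported matmul operands");
    // ... shared rank normalization and tosa.matmul emission go here ...
    return success();
  }
};
```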

@sjarus (Collaborator, Author) commented Jan 24, 2022

All bit-accuracy tests for the matmul variants are passing:

Results PASS test_matmul_16_x16__f32
Results PASS test_matmul_16_x16x32__f32
Results PASS test_matmul_16x32_x32__f32
Results PASS test_matmul_16x32_x32x17__f32
Results PASS test_matmul_8x16x32_x32__f32
Results PASS test_matmul_8x16x32_x32x17__f32
Results PASS test_matmul_16x32_x8x32x17__f32
Results PASS test_matmul_16_x8x16x32__f32
Results PASS test_matmul_8x16x32_x8x32x17__f32
Results PASS test_matmul_4x8x16x32_x32__f32
Results PASS test_matmul_4x8x16x32_x32x17__f32
Results PASS test_matmul_4x8x16x32_x8x32x17__f32
Results PASS test_matmul_4x8x16x32_x1x32x17__f32
Results PASS test_matmul_16_x4x8x16x32__f32
Results PASS test_matmul_4x8x16x32_x4x8x32x17__f32
Results PASS test_matmul_4x8x16x32_x1x8x32x17__f32
Results PASS test_matmul_4x8x16x32_x1x1x32x17__f32
Results PASS test_mm_16x32_x32x17__f32
Results PASS test_bmm_8x16x32_x8x32x17__f32
Results PASS test_linear_5x3_f32
Results PASS test_linear_1x3_f32
PASS: 213, FAIL: 0

@sjarus requested a review from @silvasean on January 24, 2022 23:48
@sjarus (Collaborator, Author) commented Jan 24, 2022

@silvasean The build failure seems completely unrelated to anything in this push, and I don't see it when I run 'cmake --build build --target check-torch-mlir' locally. Any idea what's happening?

@silvasean (Contributor) commented

> @silvasean The build failure seems completely unrelated to anything in this push, and I don't see it when I run 'cmake --build build --target check-torch-mlir' locally. Any idea what's happening?

It looks like an issue related to a newer version of PyTorch (and unrelated to your PR, though if you could fix it that would be much appreciated) -- the CI always uses the latest snapshot. You can try python -m pip install --upgrade -r requirements.txt to get the latest version of PT and reproduce the issue.

@cathyzhyi (Contributor) commented

FYI, the build failure should be fixed by this PR #536

@sjarus (Collaborator, Author) commented Jan 25, 2022

I just tried it; it does update the deps, but there's still no build failure locally:

Successfully installed certifi-2021.10.8 charset-normalizer-2.0.10 idna-3.3 pybind11-2.9.0 requests-2.27.1 torch-1.11.0.dev20211231+cpu torchvision-0.12.0.dev20211231+cpu urllib3-1.26.8 wheel-0.37.1

@sjarus (Collaborator, Author) commented Jan 25, 2022

> FYI, the build failure should be fixed by this PR #536

Thanks! Please feel free to look at this review too if you'd like - it just leverages the earlier matmul work to also implement torch.linear (adding an rhs transpose and a bias add to the regular matmul process).
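For context, torch.linear computes y = x · Wᵀ + b; here is a tiny standalone reference in plain C++, with hypothetical names, just to pin down the semantics being lowered:

```cpp
#include <cstddef>
#include <vector>

// Reference semantics of torch.linear: y = x * transpose(W) + b,
// with x: [m, k], W: [n, k], b: [n], y: [m, n]. Illustrative only.
std::vector<float> linearRef(const std::vector<float> &x,
                             const std::vector<float> &w,
                             const std::vector<float> &b,
                             size_t m, size_t k, size_t n) {
  std::vector<float> y(m * n);
  for (size_t i = 0; i < m; ++i) {
    for (size_t j = 0; j < n; ++j) {
      float acc = b[j];                      // the bias add
      for (size_t p = 0; p < k; ++p)
        acc += x[i * k + p] * w[j * k + p];  // W indexed as if transposed
      y[i * n + j] = acc;
    }
  }
  return y;
}
```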

@sjarus (Collaborator, Author) commented Jan 25, 2022

I rebased after #536 was merged and it does fix this, thank you @cathyzhyi .

@sjarus (Collaborator, Author) commented Jan 25, 2022

Nit feedback incorporated. Merging.

@sjarus merged commit cadea67 into llvm:main on January 25, 2022
@sjarus deleted the tosa_updates branch on January 25, 2022 16:49
@cathyzhyi (Contributor) commented

@sjarus Hey Suraj, sorry I missed this in my review, but some of the matchAndRewrite functions are not marked as override, which results in a compiler warning:

warning: 'matchAndRewrite' overrides a member function but is not marked 'override' [-Wsuggest-override]
  matchAndRewrite(AtenOpT op, OpAdaptor adaptor,

Could you fix those?
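For reference, the fix is simply adding the override keyword to those declarations; a generic, self-contained illustration (not the torch-mlir code itself):

```cpp
struct Base {
  virtual ~Base() = default;
  virtual bool matchAndRewrite(int op) const { return false; }
};

struct Derived : Base {
  // Without 'override', -Wsuggest-override warns; with it, the compiler
  // also checks that the signature really matches the base declaration.
  bool matchAndRewrite(int op) const override { return true; }
};
```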

@sjarus (Collaborator, Author) commented Jan 25, 2022

Sure, I'm working on Conv2D now, but will roll this in along with it. Would that work?

@cathyzhyi (Contributor) commented

> Sure, I'm working on Conv2D now, but will roll this in along with it. Would that work?

yea, that would work! thanks!
