
Use upstream matmul pack #911

Merged: 1 commit merged into plaidml:main on May 23, 2024
Conversation

@adam-smnk (Collaborator) commented on May 14, 2024

Retires downstream matmul packing logic and uses upstream block pack matmul instead.
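For context, the upstream replacement is MLIR's block pack matmul rewrite, exposed (as of recent upstream LLVM; pass and option names as I recall them, check `mlir-opt --help`) as the `linalg-block-pack-matmul` pass. A minimal sketch of the input it operates on, with illustrative 32x32x32 block factors rather than the values this repo necessarily uses:

```mlir
// A plain row-major matmul. Running, for example:
//   mlir-opt --linalg-block-pack-matmul="block-factors=32,32,32" input.mlir
// packs the operands into 32x32 blocks via tensor.pack and rewrites the op
// into an equivalent matmul over the blocked layout.
func.func @matmul(%A: tensor<128x128xf32>, %B: tensor<128x128xf32>,
                  %C: tensor<128x128xf32>) -> tensor<128x128xf32> {
  %0 = linalg.matmul ins(%A, %B : tensor<128x128xf32>, tensor<128x128xf32>)
                     outs(%C : tensor<128x128xf32>) -> tensor<128x128xf32>
  return %0 : tensor<128x128xf32>
}
```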

@adam-smnk marked this pull request as draft on May 14, 2024 16:02
@adam-smnk (Collaborator, Author) commented

The identity permutation outer_dims_perm = [0, 1] that the upstream matmul packing introduces on tensor.pack prevents folding pairs of tensor.unpack and tensor.pack, which in turn prevents complete layout propagation throughout the IR graph (sketched below).
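For illustration, the pattern looks roughly like this (shapes and the 32x32 tiles are made up for the example). The tensor.pack(tensor.unpack) round trip goes through the same blocked layout and should fold away entirely, but the explicit identity outer_dims_perm kept the folder from treating the pair as matching:

```mlir
// Round trip through the same 32x32 blocked layout.
%dst = tensor.empty() : tensor<128x128xf32>
%u = tensor.unpack %src outer_dims_perm = [0, 1]
    inner_dims_pos = [0, 1] inner_tiles = [32, 32]
    into %dst : tensor<4x4x32x32xf32> -> tensor<128x128xf32>
%init = tensor.empty() : tensor<4x4x32x32xf32>
%p = tensor.pack %u outer_dims_perm = [0, 1]
    inner_dims_pos = [0, 1] inner_tiles = [32, 32]
    into %init : tensor<128x128xf32> -> tensor<4x4x32x32xf32>
// Expected after folding: uses of %p are replaced by %src directly.
```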

Investigation in progress.

@adam-smnk (Collaborator, Author) commented

Fixed upstream. This PR should be able to continue after the next LLVM bump.

@adam-smnk changed the title from "WIP upstream pack matmul" to "Use upstream matmul pack" on May 23, 2024
@adam-smnk marked this pull request as ready for review on May 23, 2024 09:30
@adam-smnk added the benchmark label (Triggers benchmark jobs) on May 23, 2024
@adam-smnk (Collaborator, Author) commented

Everything now works as before, with no regression in performance.

@adam-smnk merged commit ec38d1d into plaidml:main on May 23, 2024
18 checks passed