Plumb through tensor.pack e2e execution for llvm-cpu backend. #11875
Conversation
Can we make this test separate, so that it is tested on the VMVX path, where it is used and vectorization isn't important?
Mostly looks good. Nice! Just a comment on the tests.
Supporting the op on VMVX is my next step. Although vectorization is not used on the VMVX side, it requires a memref version of pack/unpack. In this PR, I'd like to make sure that the tensor version covers all the cases on the llvm-cpu side. I'm preparing a separate PR that will bufferize tensor.pack/unpack into iree_linalg_ext.pack/unpack; that will enable the test for VMVX. After that, I'll switch the data-tiling approach to the tensor.pack/unpack version and deprecate the iree_linalg_ext ops and transforms as much as possible. WDYT?
Sounds good!
…rg#11875) All the tensor.pack ops with static inner_tile_sizes are vectorized, which are all covered by e2e tests.
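For illustration, a tensor.pack op with static inner_tile_sizes (the kind this PR vectorizes) might look like the following sketch. The shapes, tile sizes, and SSA names here are hypothetical examples, not taken from the PR's tests:

```mlir
// Illustrative only: pack a 128x256 tensor using static inner tiles [8, 32].
// Because the inner tile sizes are static constants (not SSA values), this
// op falls into the category that gets vectorized on the llvm-cpu backend.
%packed = tensor.pack %src
    inner_dims_pos = [0, 1]
    inner_tiles = [8, 32]
    into %dest
    : tensor<128x256xf32> -> tensor<16x8x8x32xf32>
```

A pack with dynamic inner tiles (e.g. `inner_tiles = [%c, 32]`) would not be covered by the static-size vectorization path described in the commit message.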