
[MetaSchedule] Fuse loops around shared to global store block in MultiLevelTilingTensorCore#13357

Merged
masahi merged 2 commits into apache:main from masahi:ms-tc-fuse-write-reuse on Nov 11, 2022

Conversation

@masahi (Member) commented Nov 11, 2022

Currently, vectorization of the shared to global store in tensor core auto tensorization is not done properly, since most blocks have a T.where predicate, which disables vectorization.
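
To make the failure mode concrete, here is a minimal, self-contained sketch (a toy 1-D copy, not the PR's tensor core workload) of how `tir.Schedule.split` guards the block body with a `T.where` predicate whenever the loop extent is not divisible by the split factors:

```python
import tvm
from tvm.script import tir as T

@T.prim_func
def copy(a: T.handle, b: T.handle) -> None:
    A = T.match_buffer(a, (100,), "float32")
    B = T.match_buffer(b, (100,), "float32")
    for i in T.serial(100):
        with T.block("copy"):
            vi = T.axis.remap("S", [i])
            B[vi] = A[vi]

sch = tvm.tir.Schedule(copy)
(i,) = sch.get_loops(sch.get_block("copy"))
# 100 is not divisible by 8, so the split pads the iteration space and
# guards the block body with T.where(i_0 * 8 + i_1 < 100).
_, inner = sch.split(i, factors=[None, 8])
print(sch.mod.script())  # the printed block carries the predicate
```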

The predicate is introduced by the split in cooperative fetch: https://github.com/apache/tvm/blob/main/src/meta_schedule/postproc/rewrite_cooperative_fetch.cc#L159-L162
As the code says, this split is supposed to be applied to a fused loop. That is the case for cache read blocks, where AddReadReuse explicitly fuses the loops around them. But AddWriteReuseTensorCore doesn't fuse loops after cache write: https://github.com/apache/tvm/blob/main/src/meta_schedule/schedule_rule/multi_level_tiling_tensor_core.cc#L260-L262.

So for cache write blocks, we always try to split a single axis by large factors like [None, 4, 32, 2]. Unless the sampled factor for that axis is large, we end up with T.where in the shared to global copy block.
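
A sketch contrasting the two paths on a toy 2-D copy (block names and factors are illustrative, not the schedule produced by MultiLevelTilingTensorCore): fusing first, as AddReadReuse does, gives the split an extent that the fixed factors 4 * 32 * 2 = 256 divide evenly, while splitting a single 64-extent axis by the same factors necessarily produces a predicate:

```python
import tvm
from tvm.script import tir as T

@T.prim_func
def store(a: T.handle, b: T.handle) -> None:
    A = T.match_buffer(a, (64, 64), "float32")
    B = T.match_buffer(b, (64, 64), "float32")
    for i, j in T.grid(64, 64):
        with T.block("store"):
            vi, vj = T.axis.remap("SS", [i, j])
            B[vi, vj] = A[vi, vj]

# Fused first: 64 * 64 = 4096 is divisible by 4 * 32 * 2 = 256, so no
# predicate is generated and the innermost loop can be vectorized.
sch = tvm.tir.Schedule(store)
i, j = sch.get_loops(sch.get_block("store"))
_, ty, tx, vec = sch.split(sch.fuse(i, j), factors=[None, 4, 32, 2])
sch.vectorize(vec)

# Single axis (the pre-fix write-reuse path): 256 > 64, so the block body
# is guarded by T.where and vectorization would be rejected.
sch2 = tvm.tir.Schedule(store)
i2, j2 = sch2.get_loops(sch2.get_block("store"))
_, ty2, tx2, vec2 = sch2.split(j2, factors=[None, 4, 32, 2])
print(sch2.mod.script())  # note the T.where(...) in the store block
```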

This PR adds the missing fusion. Now, all candidate samples have the shared to global copy block properly vectorized. But unfortunately, there was no perf improvement from this change after e2e tuning.
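
A hedged sketch, in Python tir.Schedule terms, of the kind of fusion this PR adds; the actual change lives in the C++ schedule rule (multi_level_tiling_tensor_core.cc), and the workload and block names below are illustrative:

```python
import tvm
from tvm.script import tir as T

@T.prim_func
def add_one(a: T.handle, b: T.handle) -> None:
    A = T.match_buffer(a, (64, 64), "float32")
    B = T.match_buffer(b, (64, 64), "float32")
    for i, j in T.grid(64, 64):
        with T.block("compute"):
            vi, vj = T.axis.remap("SS", [i, j])
            B[vi, vj] = A[vi, vj] + T.float32(1)

sch = tvm.tir.Schedule(add_one)
block = sch.get_block("compute")
# Stage the output through shared memory, as the write-reuse rule does;
# cache_write returns the new shared -> global copy block.
cw = sch.cache_write(block, 0, "shared")
# The missing step this PR adds: fuse all loops around the store block so
# that RewriteCooperativeFetch later splits one fused loop, not one axis.
sch.fuse(*sch.get_loops(cw))
print(sch.mod.script())
```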

For quantized workloads, vectorization of the shared to global copy is disabled, since we would also end up vectorizing requantization-related math involving 64-bit arithmetic, and the generated code currently fails to compile.

@vinx13 @junrushao

@tvm-bot (Collaborator) commented Nov 11, 2022

Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.

Generated by tvm-bot

masahi merged commit 5364e5a into apache:main on Nov 11, 2022
xinetzone pushed a commit to daobook/tvm that referenced this pull request Nov 25, 2022
[MetaSchedule] Fuse loops around shared to global store block in `MultiLevelTilingTensorCore` (apache#13357)

* Fuse shared to global store loops in MultiLevelTilingTensorCore

* update test