
[mlir][bufferization][NFC] Rename copy_tensor op to materialize_in_destination #65467

Merged (1 commit) on Sep 12, 2023

Conversation

matthias-springer (Member) commented on Sep 6, 2023:

The previous name was badly chosen. The op is used to ensure that a computation materializes in the future buffer of a certain tensor.

Depends On #65766. Only review the top commit.

maerhart (Member) left a comment:

Do I understand correctly that this is not just a renaming but also a relaxation of the guarantees the op provides (guaranteed to lower to a memcpy vs. only lowering to a memcpy if needed)? I assume no users of this op relied on the guarantee that it lowers to a memcpy?

Comment on lines +234 to +235:

> it could fold away, causing the computation to materialize in a different
> buffer.
Member:

If this lowers to a memcpy, doesn't it also materialize in a different buffer? Do you maybe have a concrete example in mind that you could add here? Or could you explain in a bit more detail why materializing in a different buffer is a problem?

matthias-springer (Member, Author) replied:

It could lower to something like memref.memcpy %x, %x. That's the case when the source and the destination tensors are equivalent. E.g.:

%0 = arith.select %c, %t, %t
%1 = bufferization.materialize_in_destination %0 into %t

In the above example that's easy to see and it could fold away, but it may not be obvious in more complex cases (e.g., with tiling, nested loops, etc.).

Member:

Thanks! That helped my understanding a lot. Should we add it to the docs as well?

matthias-springer (Member, Author) replied:

> Do I understand correctly that this is not just a renaming but also a relaxation of the guarantees the op provides (guaranteed to lower to a memcpy vs. only lowering to a memcpy if needed)? I assume no users of this op relied on the guarantee that it lowers to a memcpy?

The op still bufferizes to a memref.memcpy, but transformations such as "empty tensor elimination" may remove the op before the bufferization pass is run. That's why I removed the guaranteed part from the description; that's not the point of the op. We don't really care whether there is a memcpy or not. We just want to make sure that the result is in a certain buffer.
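To make the semantics above concrete, here is a hedged before/after sketch. The value names, the `linalg.generic` placeholder, and the tensor types are made up for illustration; the op syntax follows the example earlier in this thread.

```mlir
// Before bufferization: pin the result of some tensor computation to the
// future buffer of %dest (illustrative IR, not from the PR).
%0 = linalg.generic ... -> tensor<8xf32>   // some computation producing a tensor
%1 = bufferization.materialize_in_destination %0 into %dest

// During bufferization, this op typically becomes a copy into %dest's buffer:
//   memref.memcpy %buffer_of_0, %buffer_of_dest
// However, if a pre-bufferization rewrite (e.g., empty tensor elimination)
// already arranges for the computation to happen directly in %dest's buffer,
// the op folds away and no copy is emitted -- hence there is no memcpy
// guarantee, only a guarantee about which buffer holds the result.
```

This is why the new name describes the op's contract (the result materializes in the destination's buffer) rather than its usual lowering (a copy).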

@matthias-springer matthias-springer merged commit 91464e1 into llvm:main Sep 12, 2023
1 of 2 checks passed
ZijunZhaoCCK pushed a commit to ZijunZhaoCCK/llvm-project that referenced this pull request Sep 19, 2023
Labels: mlir:bufferization (Bufferization infrastructure), mlir:core (MLIR Core Infrastructure), mlir:linalg, mlir:tensor, mlir
2 participants