[mlir][bufferization][NFC] Rename copy_tensor op to materialize_in_destination #65467
Conversation
f951aa0 to 55eab9f
Do I understand correctly that this is not just a renaming but also a relaxation of the guarantees the op provides (guaranteed to lower to a memcpy vs. only lowers to a memcpy if needed)?
I assume there are no users of this op that relied on the guarantee that it lowers to a memcpy?
> it could fold away, causing the computation to materialize in a different buffer.
If this lowers to a memcpy, doesn't it also materialize in a different buffer? Do you maybe have a concrete example in mind that you could add here? Or could you explain in a bit more detail why materializing in a different buffer is a problem?
It could lower to something like `memref.copy %x, %x`. That's the case when the source and the destination tensor are equivalent. E.g.:

```mlir
%0 = arith.select %c, %t, %t
%1 = bufferization.materialize_in_destination %0 into %t
```

In the above example that's easy to see and the copy could fold away, but it may not be obvious in more complex cases (e.g., with tiling, nested loops, etc.).
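To spell out why the copy is foldable, here is a hedged sketch of the bufferized form, assuming `%0` and `%t` bufferize to the same buffer (the buffer name `%x` and the memref type are hypothetical):

```mlir
// Hypothetical: %0 and %t alias the same buffer %x after bufferization,
// so the materialization lowers to a self-copy:
memref.copy %x, %x : memref<?xf32> to memref<?xf32>
// A copy whose source and destination are the same buffer is a no-op
// and can be folded away entirely.
```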
Thanks! That helped my understanding a lot. Should we add this to the docs as well?
…stination

The previous name was badly chosen. The op is used to ensure that a computation materializes in the future buffer of a certain tensor.
55eab9f to 20b1432
…stination (llvm#65467)

The previous name was badly chosen. The op is used to ensure that a computation materializes in the future buffer of a certain tensor.
Depends On #65766. Only review the top commit.