[mlir] Make tensor_to_memref op docs match reality
The previous docs defined it as allocating a new memref for its result.
However, this is not how it is treated by the dialect conversion framework,
which does the equivalent of inserting it and folding it away internally
(even independent of any canonicalization patterns that we have
defined).

The semantics as previously written were also very constraining:
nontrivial analysis is needed to prove that the new allocation isn't
needed for correctness (e.g. to avoid aliasing). By removing those
semantics, we avoid losing that information.

Differential Revision: https://reviews.llvm.org/D91382
silvasean committed Nov 12, 2020
1 parent faa66b1 commit 7968802
Showing 1 changed file with 10 additions and 6 deletions.
16 changes: 10 additions & 6 deletions mlir/include/mlir/Dialect/StandardOps/IR/Ops.td
@@ -3737,28 +3737,32 @@ def TensorToMemrefOp : Std_Op<"tensor_to_memref",
       "getTensorTypeFromMemRefType($_self)">]> {
   let summary = "tensor to memref operation";
   let description = [{
-    Create a memref from a tensor. This is equivalent to allocating a new
-    memref of the appropriate (possibly dynamic) shape, and then copying the
-    elements (as if by a tensor_store op) into the newly allocated memref.
+    Create a memref from a tensor. This is a transient op created as a
+    materialization during type conversions between tensors and memrefs.
 
     The opposite of this op is tensor_load. Together, these two ops are useful
     for source/target materializations when doing type conversions involving
     tensors and memrefs.
 
+    This op is defined by the fold
+    `tensor_to_memref(tensor_load(%memref)) -> %memref`, which is the property
+    that makes it a valid materialization in the type conversion framework.
+    This implies that one cannot assume that this op allocates a new memref for
+    its result.
+
     Note: This op takes the memref type in its pretty form because the tensor
     type can always be inferred from the memref type, but the reverse is not
     true. For example, the memref might have a layout map or memory space which
     cannot be inferred from the tensor type.
 
     ```mlir
     // Result type is tensor<4x?xf32>
-    %12 = tensor_to_memref %10 : memref<4x?xf32, #map0, 42>
+    %12 = tensor_to_memref %10 : memref<4x?xf32, #map0, 42>
     ```
   }];
 
   let arguments = (ins AnyTensor:$tensor);
-  let results = (outs Res<AnyRankedOrUnrankedMemRef,
-                          "the memref to create", [MemAlloc]>:$memref);
+  let results = (outs AnyRankedOrUnrankedMemRef:$memref);
   // This op is fully verified by traits.
   let verifier = ?;
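As a sketch of the fold the new docs describe (hypothetical IR; the value names and memref type are assumptions, not from the commit), a tensor_load/tensor_to_memref round-trip cancels out:

```mlir
// Before folding: a round-trip through the tensor world.
%t  = tensor_load %m : memref<4x?xf32>
%m2 = tensor_to_memref %t : memref<4x?xf32>

// The fold tensor_to_memref(tensor_load(%memref)) -> %memref replaces
// all uses of %m2 with %m directly; no new memref is ever allocated,
// which is why the op cannot carry allocation semantics (or [MemAlloc]).
```

This is the property that makes the op a valid materialization during dialect conversion: inserting and then folding it is a no-op.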
