
[mlir][bufferization] OneShotBufferize broken when using defaultMemorySpaceFn #91518

Open
christopherbate opened this issue May 8, 2024 · 1 comment
Labels
mlir:bufferization Bufferization infrastructure



christopherbate commented May 8, 2024

A change from February allowed callers of OneShotBufferize to derive the default memory space from the TensorType of a value. See PR here: #78484. According to the PR, this is intended to allow implementing a mapping from a tensor's encoding attribute to a memref memory space.
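For context, the hook from #78484 is a callback on the bufferization options. A caller can map a tensor's encoding attribute to the memref memory space roughly like this (a sketch only; the member and callback signature follow the PR, but exact details depend on your MLIR revision):

```cpp
// Sketch: configure OneShotBufferize to take the memory space from the
// tensor's encoding attribute (per PR #78484).
bufferization::OneShotBufferizationOptions options;
options.defaultMemorySpaceFn =
    [](TensorType t) -> std::optional<Attribute> {
      // If the ranked tensor carries an encoding, reuse it as the
      // memref memory space; otherwise fall back to "no memory space".
      if (auto rt = dyn_cast<RankedTensorType>(t))
        if (Attribute enc = rt.getEncoding())
          return enc;
      return Attribute();
    };
```

With this in place, every tensor without an explicit memory space gets the space derived from its encoding, which is exactly where the edge cases below come from.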

This change is a nice feature; however, it is quite broken outside of limited use cases. In particular, it does not interact correctly with `bufferization.alloc_tensor`, `bufferization.materialize_in_destination`, `bufferization.to_tensor`, and `bufferization.to_memref`, and produces unexpected/wrong results.

I will post a range of example IR where using this feature produces unexpected/nonsensical results when running one-shot-bufferize, but the crux of the issue is that there is limited support for dealing with situations like the following:

  1. `bufferization.materialize_in_destination` where the encodings on the source and destination tensors map to different memory spaces
  2. `bufferization.alloc_tensor` expects the copy and result types to match, but this is at odds with using the tensor type encoding to set the memory space. It can also fail if the tensor type encoding and the `memory_space` attribute on the op differ.
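A minimal illustration of case 1 (hypothetical IR; `#space0`/`#space1` stand in for encoding attributes that `defaultMemorySpaceFn` maps to different memory spaces):

```mlir
// Source and destination encodings map to different memory spaces, so
// bufferization would need to insert a cross-space copy here; support
// for that is what is missing.
%r = bufferization.materialize_in_destination %src in %dest
    : (tensor<16xf32, #space0>, tensor<16xf32, #space1>) -> tensor<16xf32, #space1>
```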

I took a shot at fixing this and believe that I have corrected all issues satisfactorily (#91524).

@github-actions github-actions bot added the mlir label May 8, 2024
@EugeneZelenko EugeneZelenko added mlir:bufferization Bufferization infrastructure and removed mlir labels May 8, 2024
christopherbate added a commit to christopherbate/llvm-project that referenced this issue May 8, 2024
…` is used

As described in issue llvm#91518, a previous
PR (llvm#78484) introduced `defaultMemorySpaceFn` into the bufferization
options, allowing one to instruct OneShotBufferize to use a specified
function to derive the memory space attribute from the encoding attribute
attached to tensor types.

However, introducing this feature exposed several unhandled edge cases, examples
of which are introduced by this change in the new test under
`test/Dialect/Bufferization/Transforms/one-shot-bufferize-encodings.mlir`.

Fixing the inconsistencies introduced by `defaultMemorySpaceFn` is pretty
simple. This change:

- updates the `bufferization.to_memref` and `bufferization.to_tensor` operations
  to explicitly include operand and destination types, whereas previously they
  relied on type inference to deduce the tensor types. Since the type inference
  cannot recover the correct tensor encoding/memory space, the operand and result
  types must be explicitly included.
- makes minor updates to other bufferization functions to handle the
  changes in building the above ops
- updates bufferization of `tensor.from_elements` to handle memory space
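The first bullet can be sketched as follows (hypothetical syntax and `#enc` attribute; the exact assembly format is defined in PR #91524):

```mlir
// Before: the result type was inferred from the memref operand, so the
// tensor encoding (and thus the memory space) was lost:
//   %t = bufferization.to_tensor %m : memref<16xf32, 1>
// After: both operand and result types are spelled out explicitly, so
// the encoding round-trips through bufferization:
%t = bufferization.to_tensor %m : memref<16xf32, 1> to tensor<16xf32, #enc>
```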
@christopherbate (Contributor, Author) commented:

Examples (introduced as tests in the PR): https://github.com/llvm/llvm-project/blob/80fe1bbd1d49b0af792fd38d04782ed1601e0222/mlir/test/Dialect/Bufferization/Transforms/one-shot-bufferize-encodings.mlir

The PR adds an option to the one-shot-bufferize pass that sets `defaultMemorySpaceFn` to use the tensor encoding. You need that option in order to run the examples and reproduce the existing issues at HEAD.
