
[MLIR-HLO] Missing legalization for mhlo.scatter to standard MLIR #60295

Open
dime10 opened this issue Apr 11, 2023 · 0 comments
Assignees
Labels
comp:xla XLA stat:awaiting tensorflower Status - Awaiting response from tensorflower type:support Support issues

Comments


dime10 commented Apr 11, 2023

Issue Type

Support

Have you reproduced the bug with TF nightly?

No

Source

source

Tensorflow Version

main branch

Custom Code

Yes

OS Platform and Distribution

No response

Mobile device

No response

Python version

No response

Bazel version

No response

GCC/Compiler version

No response

CUDA/cuDNN version

No response

GPU model and memory

No response

Current Behaviour?

See below.

Standalone code to reproduce the issue

mhlo.scatter
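A minimal sketch of the op in question, assuming a 1-D operand with a single-element update; the exact `scatter_dimension_numbers` attribute syntax may vary across mlir-hlo versions, so treat this as illustrative rather than a verified reproducer:

```mlir
// Hypothetical minimal mhlo.scatter: overwrite operand[idx] with upd[0].
func.func @scatter_update(%operand: tensor<8xf32>, %idx: tensor<1x1xi32>,
                          %upd: tensor<1xf32>) -> tensor<8xf32> {
  %0 = "mhlo.scatter"(%operand, %idx, %upd) ({
  // The update computation: keep the new value, discard the old one.
  ^bb0(%old: tensor<f32>, %new: tensor<f32>):
    "mhlo.return"(%new) : (tensor<f32>) -> ()
  }) {
    scatter_dimension_numbers = #mhlo.scatter<
      inserted_window_dims = [0],
      scatter_dims_to_operand_dims = [0],
      index_vector_dim = 1>,
    indices_are_sorted = false,
    unique_indices = false
  } : (tensor<8xf32>, tensor<1x1xi32>, tensor<1xf32>) -> tensor<8xf32>
  return %0 : tensor<8xf32>
}
```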

Relevant log output

No response

Problem Statement

Is there a pass (sequence) that can lower the mhlo.scatter operation to standard MLIR dialects, such as linalg, tensor, arith, and/or scf?

The goal is to ultimately lower to the LLVM dialect and perform codegen with LLVM. I wasn't able to find a pass that converts the mhlo.scatter op out of the MLIR-HLO dialect domain. Most other ops can be converted via passes like --hlo-legalize-to-linalg, --mhlo-legalize-to-std, or --mhlo-legalize-control-flow.
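For reference, the attempted lowering pipeline looks roughly like this (pass flags are the ones named above; the `mlir-hlo-opt` binary name and the input/output file names are assumptions based on a local mlir-hlo build):

```shell
# Sketch of the pass pipeline that handles most ops, but not mhlo.scatter.
# Requires a locally built mlir-hlo-opt; input.mlir is a placeholder module
# containing the mhlo.scatter op.
mlir-hlo-opt input.mlir \
  --hlo-legalize-to-linalg \
  --mhlo-legalize-control-flow \
  -o lowered.mlir
```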

(Duplicate of tensorflow/mlir-hlo#64 since I'm not sure that repository is monitored for issues.)
