Using `torch.overwrite.tensor.contents` to overwrite an input argument fails at runtime #17316
Comments
I don't think this is going to work: we have memcpy semantics, and such an operation requires memmove. When using in-place operations, you must ensure the call isn't actually trying to do a memmove.
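To make the memcpy/memmove distinction concrete, here is a small Python sketch (not IREE code) of why an in-place update whose source and destination overlap needs memmove semantics; an element-by-element forward copy with memcpy semantics reads bytes it has already overwritten:

```python
# memmove-like: snapshot the overlapping source before writing.
moved = bytearray(b"abcdef")
moved[1:5] = bytes(moved[0:4])   # slice assignment copies the source first
assert bytes(moved) == b"aabcdf"  # correct shifted result

# memcpy-like: a naive forward copy over the same overlapping range
# re-reads positions that were already clobbered.
copied = bytearray(b"abcdef")
for i in range(4):
    copied[i + 1] = copied[i]
assert bytes(copied) == b"aaaaaf"  # corrupted: 'a' smeared forward
```

Hardware DMA and dispatch primitives give you the second behavior, which is why overlapping in-place updates can't be supported in general.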
Oh, I might be hitting the same issue on nod-ai/SHARK-Platform#22 (comment) (among other things)
Probably something that's going to need to be identified/fixed in the frontend: we can't really support memmove in general and need to ensure we aren't generating programs that require it. Not only are there no memmove DMA primitives in hardware (there's no cuMemmove, vkCmdMoveBuffer, etc.), but it's also not possible to ensure that all the dispatches we generate have memmove semantics when operating in place.
I only see one of these ops in the program I'm looking at. Trying to find where it's coming from... maybe this: https://github.com/llvm/torch-mlir/blob/ec6d7aa5d28f110aa5b893e16e502e6198988801/python/torch_mlir/extras/fx_importer.py#L1111-L1118 ?
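For readers unfamiliar with this part of the import, a rough sketch of the shape of IR that code path produces for a mutated input argument (function name and types are invented here, not taken from the actual repro):

```mlir
// Hypothetical importer output: the updated value is written back over the
// storage of the mutable input argument at the end of the function.
func.func @test_index_copy(%arg0: !torch.tensor<[8,4],f32>, ...) {
  // ... computation producing %updated : !torch.vtensor<[8,4],f32> ...
  torch.overwrite.tensor.contents %updated overwrites %arg0
      : !torch.vtensor<[8,4],f32>, !torch.tensor<[8,4],f32>
  return
}
```

It's this overwrite-of-an-input pattern that reaches IREE as an in-place update on an externally provided buffer.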
Probably - something @stellaraccident may have a pointer to or thoughts about; I'm not sure what the right solution is at that level. We can't do much in IREE, as the only safe behavior is to silently insert copies for any externally provided buffer, and that defeats the purpose of the in-place ops. I'm not sure if analysis at the torch level could insert the copies, warn/error if they're required, or what :/ (this is one of the reasons we default to not doing in-place operations - they've got footguns :)
I think my eyes aren't calibrated right to see where this is becoming a move-like thing. Probably need some help figuring out how to structure it. |
I don't know the torch side (vtensor? what?), but maybe it's whatever `to_vtensor` is? If `to_vtensor` must remain a clone but doesn't end up as a `flow.tensor.clone`, then the above will happen. Even if it does become a clone, maybe we're dropping it later; that's something we could fix locally, but it will be trickier to prove in more complex programs. A print-after-all would be useful.
Chatting with Ben, the repro is not strictly legal (it returns a value that is updated in place). While we could support that form, it is presently a limitation. Also, I think it is a testing artifact. We should work on this case:
notes to tomorrow me:
This fixes a design issue in the original `hal.tensor.export` optional storage feature that would lead to the export happening after any `hal.tensor.barrier` ops that may have been used on the source tensor. The new op is intended to be inserted prior to the barriers and can also be inserted elsewhere (not just at ABI boundaries). Minor improvements were required to folding of `stream.async.update` in order to ensure the aliased buffers are used in cases where barriers are present between producers and the alias ops consuming the values. #17135 made the folder too conservative and would result in all in-place operations of external values getting extra copies. Fixes #17316.
What happened?
The op comes from the lowering of `index_copy_`, which we use to try to update an input argument's values in place. When trying to run the vmfb produced by compiling `index_copy_repro.mlir`, I get this error:
Steps to reproduce your issue
../iree-build/tools/iree-compile --iree-input-type=torch --iree-vm-bytecode-module-output-format=flatbuffer-binary --iree-hal-target-backends=rocm --mlir-print-debuginfo --mlir-print-op-on-diagnostic=false --iree-rocm-target-chip=gfx940 --iree-opt-const-eval=false --iree-rocm-bc-dir=/opt/rocm/amdgcn/bitcode --iree-opt-strip-assertions=true ../index_copy_repro.mlir -o index_copy_repro.vmfb
../iree-build/tools/iree-run-module --module=llama_v4.vmfb --device=rocm --function=test_index_copy --input=8192x16x8x128xf32 --input=4xi64 --input=4x16x8x128xf32 --output=@output.npy
EXEC @test_index_copy
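For context, the in-place update being compiled is roughly equivalent to the following stand-in (numpy is used instead of torch so the sketch is self-contained; the function name and shapes here are illustrative, not taken from the repro):

```python
import numpy as np

def index_copy_(dest, indices, src):
    """Stand-in for torch's index_copy_: writes rows of src into dest at
    `indices` along dim 0, mutating dest in place."""
    dest[indices] = src
    return dest  # returning the mutated input argument is the problematic pattern

cache = np.zeros((8, 4), dtype=np.float32)
result = index_copy_(cache, np.array([1, 3]),
                     np.ones((2, 4), dtype=np.float32))
assert result is cache   # the input argument's own storage was overwritten
assert result[1].sum() == 4.0 and result[0].sum() == 0.0
```

Both mutating the caller-provided buffer and returning it are what lead the importer to emit `torch.overwrite.tensor.contents` on the input argument.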
What component(s) does this issue relate to?
Runtime
Version information
355f56b
Additional context
No response