Integrate llvm/llvm-project@27ac46e6bea2 #17662
Conversation
Abbreviated Benchmark Summary @ commit b918d15fd1fa968cb5c401910f67edd7cd702c58 (vs. base f4279657ef8da12d07f068a37cbd93986edb47d8)

Data-Tiling Comparison Table:
Regressed Latencies 🚩 [Top 3 out of 4 results shown]
Improved Latencies 🎉 [Top 3 out of 21 results shown]
Improved Total Dispatch Sizes 🎉 [Top 3 out of 6 results shown]
Regressed Stream IR Dispatch Count (# of cmd.dispatch ops) 🚩 [Top 3 out of 10 results shown]
Improved Stream IR Dispatch Count (# of cmd.dispatch ops) 🎉 [Top 3 out of 6 results shown]
We need to look at the regressions in number of dispatches. I can help (but not today).
This seems like a fairly likely candidate for the source of the dispatch count changes: iree-org/llvm-project@7ef83f5, especially because the changes are being observed in data-tiling-enabled benchmarks. cc @Max191
@MaheshRavishankar are you blocking the integrate on this, or would you look at it in a follow-up, since Quinn has explained the likely reason for the difference?
Could you try reverting that locally to see if that is the issue? Then we can decide what to do next.
@MaheshRavishankar PTAL at the benchmark comment now, the bot has edited it and it seems the dispatch number regression is gone with the revert. |
Thanks!
Updated to llvm/llvm-project@27ac46e6bea2
* Used LLVM `MathExtras.h` to replace the MLIR one
* Updated `applySignatureConversion` usage

Updated to openxla/stablehlo@dd48ec5
* `chlo.minimum_broadcast_shapes` op was removed (openxla/stablehlo#2287)
* `chlo.dynamic_reshape` op was removed (openxla/stablehlo#2286)
* Added batching dims to scatter dims (openxla/stablehlo#2259)

Updated to llvm/torch-mlir@77d7f64

---------

Co-authored-by: hanhanW <hanhan0912@gmail.com>
Co-authored-by: Rob Suderman <rob.suderman@gmail.com>
Co-authored-by: Quinn Dawkins <quinn@nod-labs.com>
Co-authored-by: Nirvedh Meshram <nirvedh@gmail.com>
Signed-off-by: Lubo Litchev <lubol@google.com>
@@ -464,6 +464,7 @@ def find_git_submodule_revision(submodule_path):
     install_requires=[
         "numpy",
         "PyYAML",
+        "sympy",
Looks like this new dep is for the `fx_importer` in torch-mlir. That should at least have been documented in the PR description. This should probably go in `extras_require` for an `iree-compiler[torch]` package, not the general requirements here. The dep seems relatively small, but I want to keep the core project's dependencies minimal.