
regression in inference performance caused by CodeInstance refactor #53459

Closed
vtjnash opened this issue Feb 24, 2024 · 5 comments

vtjnash (Sponsor Member) commented Feb 24, 2024

It looks like #53219 causes some fairly extreme performance regressions in inference (up to 50x longer inference times), though curiously it also sometimes provides up to a 5x speedup.

Originally posted by @vtjnash in #53219 (comment)

oscardssmith (Member) commented:

Do we have a MWE for the regression?

vtjnash (Sponsor Member, Author) commented Feb 24, 2024

Yes, sorry, I should have pointed out that this is reported by BaseBenchmarks for this commit: https://github.com/JuliaCI/NanosoldierReports/blob/master/benchmark/by_hash/1c25d93_vs_c0a93f8/report.md
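
A minimal sketch for reproducing these numbers locally, assuming BaseBenchmarks.jl and BenchmarkTools.jl are installed (group names follow the report above; the `judge` comparison is standard BenchmarkTools, roughly what the Nanosoldier report automates):

```julia
using BaseBenchmarks, BenchmarkTools

# Load only the inference benchmarks into BaseBenchmarks.SUITE.
BaseBenchmarks.load!("inference")

# Run the group and reduce each trial to its minimum-time estimate.
results = minimum(run(BaseBenchmarks.SUITE["inference"]))

# Running this on two Julia builds and comparing the saved results with
# `judge(results_new, results_old)` flags regressions and improvements.
```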

KristofferC added this to the 1.12 milestone Feb 25, 2024
vtjnash referenced this issue on Feb 26, 2024, in a commit whose message reads:
When we use options like code coverage, we can't use the native code
present in the cache file since it is not instrumented.

PR #52123 introduced the capability of skipping the native
code during loading, but created the issue that subsequent packages
could have an explicit or implicit dependency on the native code.

PR #53439 tainted the current process by setting
`use_sysimage_native_code`, but this flag is propagated to subprocesses
and led to a regression in test time.

Move this to a process-local flag to avoid the regression.
In the future we might be able to change the calling convention for
cross-image calls to `invoke(ci::CodeInstance, args...)` instead of
`ci.fptr(args...)` to handle native code not being present.

---------

Co-authored-by: Jameson Nash <vtjnash@gmail.com>
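
To illustrate the calling-convention idea in that commit message, here is a toy sketch; it is not the actual Julia runtime API (`ToyCodeInstance`, `call_direct`, and `call_invoke` are made-up names), just a model of why an `invoke(ci, args...)`-style convention can tolerate missing native code while a raw `ci.fptr(args...)` call cannot:

```julia
# Toy model only: the real CodeInstance and its calling convention live in the
# Julia runtime; this just demonstrates the fallback behavior described above.
struct ToyCodeInstance
    fptr::Union{Function,Nothing}  # native entry point; may be absent (e.g. coverage runs)
    src::Function                  # slower fallback path standing in for re-compilation
end

# Direct convention: fails if native code was skipped when the cache was loaded.
call_direct(ci::ToyCodeInstance, args...) = ci.fptr(args...)

# invoke-style convention: the callee checks for native code and falls back.
call_invoke(ci::ToyCodeInstance, args...) =
    ci.fptr === nothing ? ci.src(args...) : ci.fptr(args...)

ci = ToyCodeInstance(nothing, +)    # simulate a cache entry with no native code
@assert call_invoke(ci, 1, 2) == 3  # works via the fallback
# call_direct(ci, 1, 2)             # would throw: `nothing` is not callable
```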
Zentrik (Contributor) commented Mar 6, 2024

FWIW, it seems like a lot of the regression has been fixed, but there's still a fairly large regression in the abstract interpretation benchmarks (in the daily benchmarks at least):

**Summary**

|              | Range             | Mean    | Count |
|--------------|-------------------|---------|-------|
| Regressions  | 1.97% to 855.32%  | 119.18% | 12    |
| Improvements | -                 | 0.00%   | 0     |
| All          | 1.97% to 855.32%  | 119.18% | 12    |

**Abstract Interpretation Benchmarks**

| Benchmark | % Change |
|-----------|----------|
| inference.abstract interpretation.Base.init_stdio(::Ptr{Cvoid}) | 855.32% |
| inference.abstract interpretation.REPL.REPLCompletions.completions | 226.32% |
| inference.abstract interpretation.broadcasting | 103.55% |
| inference.abstract interpretation.println(::QuoteNode) | 65.12% |
| inference.abstract interpretation.rand(Float64) | 49.64% |
| inference.abstract interpretation.sin(42) | 45.25% |
| inference.abstract interpretation.many_invoke_calls | 31.07% |
| inference.abstract interpretation.many_method_matches | 21.72% |
| inference.abstract interpretation.many_global_refs | 20.46% |
| inference.abstract interpretation.many_opaque_closures | 7.69% |
| inference.abstract interpretation.many_const_calls | 2.08% |
| inference.abstract interpretation.many_local_vars | 1.97% |

Per http://tealquaternion.camdvr.org/compare.html?stat=min-wall-time&name=inference.abst&nonRelevant=true

Zentrik (Contributor) commented Mar 11, 2024

I bisected a 6x regression in the min run time of `BaseBenchmarks.SUITE[["inference", "allinference", "Base.init_stdio(::Ptr{Cvoid})"]]` to ea1a0d2, i.e. #53326.

I bisected a 5x regression in the min run time of `BaseBenchmarks.SUITE[["inference", "abstract interpretation", "broadcasting"]]` to 144f58b, i.e. #53580.
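
For context, a sketch of how one such min-run-time number can be measured (assuming BaseBenchmarks.jl and BenchmarkTools.jl are installed; the double-bracket indexing is BenchmarkTools' nested `BenchmarkGroup` lookup):

```julia
using BaseBenchmarks, BenchmarkTools

BaseBenchmarks.load!("inference")
b = BaseBenchmarks.SUITE[["inference", "abstract interpretation", "broadcasting"]]

# Bisection compares this number across commits; the regression shows up as a
# several-fold larger minimum wall time after the offending change.
t = minimum(run(b)).time   # nanoseconds
```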

vtjnash (Sponsor Member, Author) commented Mar 11, 2024

Okay, the init_stdio regression is probably fine then, since we just significantly increased the amount of code visible to the compiler, but didn't change the compiler.
