
Massive performance regression between nightly-2022-08-12 and nightly-2022-08-13 #102952

Open · Robbepop opened this issue Oct 12, 2022 · 18 comments
Labels: A-LLVM, C-bug, E-needs-mcve, I-slow, P-medium, regression-from-stable-to-stable, T-compiler

Comments

@Robbepop (Contributor) commented Oct 12, 2022

Usually I develop on the stable channel and wanted to see how my project performs on the upcoming beta and nightly channels. I saw performance regressions of 30-80% across the board on both native and Wasm targets. These regressions were confirmed by our benchmarking CI, as can be seen via the link.

Was it the LLVM 15 Update?

I conducted a bisect and found that the change happened between nightly-2022-08-12 and nightly-2022-08-13.
After some research I saw that Rust updated from LLVM 14 to LLVM 15 in exactly this time period: #99464 Other commits merged in this period did not look as suspicious to me.

Past Regressions

Unfortunately, this is not the first time we have seen such massive regressions.
It is extremely hard to craft a minimal code snippet out of wasmi since it is a heavily optimized codebase with many interdependencies.
The wasmi project is incredibly performance critical to us: even a 10-15% performance regression is a disaster for us, let alone the 30-80% we just saw.

Hint for Minimal Code Example

I have one major suspicion: due to the lack of guaranteed tail calls in Rust, we rely heavily on a non-guaranteed optimization of our loop-switch based interpreter hot path that pulls the dispatch jump into the match arms, resulting in code very similar to what a threaded-code interpreter would produce. The code that depends on this particular optimization can be found here.
This suspicion is underlined by the fact that the non-call-intense workloads in particular show the biggest regressions in the linked benchmarks, which implies to me that the regressions have something to do with instruction dispatch.
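To illustrate the pattern in question, here is a minimal sketch of loop-switch dispatch. The instruction set, stack, and handlers are invented for illustration and are not wasmi's actual definitions:

// Minimal sketch of loop-switch dispatch; the instruction set and state
// are made up for illustration and are not wasmi's actual code.
#[derive(Copy, Clone)]
enum Instr {
    Add,
    Br { target: usize },
    Return,
}

fn execute(code: &[Instr]) -> i64 {
    let mut stack: Vec<i64> = vec![1, 2];
    let mut ip = 0;
    // Hot loop: when the optimization applies, LLVM replicates the
    // dispatch jump (an indirect `jmp`) into every arm, similar to a
    // threaded-code interpreter. When it does not apply, every arm
    // instead jumps back to a single shared dispatch point, which is
    // much harder on the branch predictor.
    loop {
        match code[ip] {
            Instr::Add => {
                let (rhs, lhs) = (stack.pop().unwrap(), stack.pop().unwrap());
                stack.push(lhs.wrapping_add(rhs));
                ip += 1;
            }
            Instr::Br { target } => ip = target,
            Instr::Return => return stack.pop().unwrap(),
        }
    }
}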

Potential Future Solutions

  • The Rust compiler could add a few benchmarks covering these loop-switch optimizations to its benchmark suite so that future LLVM updates won't silently invalidate them. I am not sure how viable this approach is for the Rust compiler developers, though. Also, this only works if we find all the fragile parts that cause these regressions.
  • Ideally, Rust would offer abstractions that allow developing efficient interpreters without relying on non-guaranteed Rust/LLVM optimizations: for example, guaranteed tail calls.

Reproduce

The current stable Rust channel is the following:

stable-x86_64-unknown-linux-gnu (default)
rustc 1.64.0 (a55dd71d5 2022-09-19)

In order to reproduce these benchmarks do the following:

  1. git clone git@github.com:paritytech/wasmi.git
  2. cd wasmi
  3. git checkout 21e12da67a765c8c8b8a62595d2c9d21e1fa2ef6
  4. rustup toolchain install nightly-2022-08-12
  5. rustup toolchain install nightly-2022-08-13
  6. git submodule update --init --recursive
  7. cargo +stable bench --bench benches execute -- --save-baseline stable
  8. cargo +nightly-2022-08-12 bench --bench benches execute -- --baseline stable
  9. cargo +nightly-2022-08-13 bench --bench benches execute -- --baseline stable
@Robbepop added the C-bug and regression-untriaged labels on Oct 12, 2022
@rustbot added the I-prioritize label on Oct 12, 2022
@the8472 added the I-slow label on Oct 12, 2022
@the8472 (Member) commented Oct 12, 2022

That commit doesn't compile:

error: couldn't read crates/wasmi/benches/wasm/wasm_kernel/res/revcomp-input.txt: No such file or directory (os error 2)
  --> crates/wasmi/benches/benches.rs:18:30
   |
18 | const REVCOMP_INPUT: &[u8] = include_bytes!("wasm/wasm_kernel/res/revcomp-input.txt");
   |                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   |
   = note: this error originates in the macro `include_bytes` (in Nightly builds, run with -Z macro-backtrace for more info)

@apiraino added the T-compiler label on Oct 13, 2022
@apiraino (Contributor) commented:

Meanwhile, I'll nominate this issue for T-compiler discussion; I think it's a wider topic that would benefit from the team's comments. WG-prioritization discussion on Zulip

@rustbot label i-compiler-nominated

@rustbot added the I-compiler-nominated label on Oct 13, 2022
@Robbepop (Contributor, Author) commented:

@the8472 I am sorry, I forgot to mention that you need to initialize the git submodules. Execute

git submodule update --init --recursive

before running the benchmarks. I will update my post above to include this information.

@Robbepop (Contributor, Author) commented Oct 15, 2022

I don't know if this is related, but today I removed 4 unnecessary instructions from the wasmi interpreter. My expectation was that the removal shouldn't change performance at all; however, there were massive regressions again. This time I used the cargo-asm tool to analyze the Executor::execute function I linked earlier in this thread, only to see massive differences between the master branch and the PR branch:

The diff between master on stable Rust and master on nightly Rust:
https://gist.github.com/Robbepop/88660d17ec1ede77562732bb68670c8c

So the culprit indeed seems to be that Rust + LLVM easily fails to apply the threaded-code style branching optimization to this function when the stars are misaligned.

Reproduce

The PR that I created today can be found here: wasmi-labs/wasmi#518
In order to make cargo-asm able to display the function I had to insert the following code:

// wasmi/src/engine/executor.rs
//
// A non-generic entry point so that cargo-asm can locate the
// monomorphized `execute` function in the compiled artifact.
pub trait Execute: Sized {
    fn execute_dummy(self) -> Result<CallOutcome, TrapCode>;
}

impl<'ctx, 'engine, 'func> Execute for Executor<'ctx, 'engine, 'func, ()> {
    fn execute_dummy(self) -> Result<CallOutcome, TrapCode> {
        self.execute()
    }
}

And also add

[profile.release]
lto = "fat"
codegen-units = 1

to the Cargo.toml of the workspace since cargo-asm does not yet support the --profile argument and we want to optimize with codegen-units=1 and lto="fat".

Install the cargo-asm tool via: cargo install cargo-show-asm
Run the cargo-asm tool via: cargo asm -p wasmi --lib --release execute_dummy > execute_dummy.asm
We redirect the output into a file since it is quite large.

@the8472 (Member) commented Oct 15, 2022

This indeed does look like the difference between tail calls vs. dispatching from a loop with computed jumps.

Hottest instruction in the fast version; note the jmp *%rax:

[image: profiler annotation of the fast version]

Hottest instruction in the slow version; note all the jumps back to 24c0 and the incoming jumps at 24c5:

[image: profiler annotation of the slow version]

@apiraino (Contributor) commented Oct 22, 2022

WG-prioritization assigned a priority (Zulip discussion).

Discussed by T-compiler (notes); a smaller reproducer would probably help:

> What tools does LLVM have to minimize bug reports? Perhaps there's something that can be leveraged to extract just the IR for the interpreter loop function and its dependencies.

As mentioned in the opening comment, a bisection seems to point at the LLVM 15 upgrade (#99464), specifically between nightly-2022-08-12 and nightly-2022-08-13.

@rustbot label -I-prioritize +P-high E-needs-mcve -I-compiler-nominated

@rustbot added the E-needs-mcve and P-high labels and removed the I-prioritize and I-compiler-nominated labels on Oct 22, 2022
@Robbepop (Contributor, Author) commented Oct 22, 2022

@apiraino Thanks a lot for the update and bringing this issue up at the T-compiler meeting!

I am working on an MCVE. However, as stated above, I am unsure whether an MCVE is actually that useful, since a fundamental problem is that the code in question is extremely sensitive to changes, as brought up in this comment where I demonstrated a significant performance regression by removing 4 variants from an enum.

I will update this thread once I am done with (or have given up on) the MCVE.

Summary of the Underlying Problems

Please take what follows with a big grain of salt since I am no compiler or LLVM expert. 😇

  1. The virtual machine wasmi that I am working on suffers from severe performance regressions because it relies on non-guaranteed optimizations performed by Rust and LLVM on loop-switch constructs.
  2. Users of wasmi critically depend on its performance and also on the reliability of its performance. To those users this virtual machine acts as the system; therefore wasmi can be seen as "system level software" in this context.
  3. Changing even small details of the loop-switch constructs can easily change the whole codegen, which may change the performance profile dramatically. In a comment above I demonstrated this effect by removing 4 variants of a large enum.
  4. Even if we added performance regression tests to the Rust performance test suite, we would have no guarantee that those tests cover all the potential pitfalls, since those optimizations and their heuristics are not guaranteed at any abstraction level. Therefore, an MCVE that prevents this regression today might simply stop working tomorrow without warning.
  5. Therefore, if we want to fix this problem fundamentally, we need a way to give Rust users more fine-grained control over how these loop-switch constructs are interpreted by optimizers, OR we need to introduce new abstractions to Rust that provide a similar level of control over control flow.

One Potential Solution

What follows now might be somewhat controversial ...

Given the summary above, it seems to me that Rust lacks abstractions that would allow us to take precise and explicit control over the control flow constructs required to meet the performance criteria of our system level software. Proper guaranteed tail call abstractions in Rust are one possible solution and would completely eliminate this issue, since they would provide a sufficient amount of control over these control flow constructs.

  • In the past this was brought up by the author of the Wasm3 interpreter, which claims to be the fastest WebAssembly interpreter and is written in C, when asked why it was not written in Rust. Wasm3 uses threaded-code dispatch.
  • Other users have tried to emulate the performance of Wasm3 using safe Rust abstractions. While this approach is really interesting, it unfortunately does not come close to Wasm3's performance and has other major downsides: https://github.com/Neopallium/s1vm (a rough sketch of this style of emulation follows below)
  • In fact, guaranteed tail calls have been mentioned quite often in the context of Rust already: https://www.reddit.com/r/rust/comments/my6k5i/are_we_finally_about_to_gain_guaranteed_tail/

I think the general misunderstanding about guaranteed tail calls in Rust is that people think they are merely more elegant than other solutions. In reality they provide more precise and explicit control over control flow constructs than is currently possible in Rust, which results in more optimal code and better optimizations while staying in safe Rust.
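To make the missing piece concrete, here is a rough sketch of how tail-call style dispatch must currently be emulated in stable Rust. The state, handlers, and decoder are invented for illustration; this is not s1vm's or wasmi's actual code:

// Emulated tail-call dispatch in today's stable Rust (illustrative only).
struct State {
    ip: usize,
    acc: i64,
}

// Newtype so that a handler function can name its own type in its
// return type (a plain recursive `type` alias is not allowed).
struct Handler(fn(&mut State) -> Option<Handler>);

fn op_incr(state: &mut State) -> Option<Handler> {
    state.acc += 1;
    state.ip += 1;
    Some(decode(state.ip))
}

fn op_halt(_state: &mut State) -> Option<Handler> {
    None
}

fn decode(ip: usize) -> Handler {
    // Hypothetical decoder; a real interpreter would index its code buffer.
    if ip < 3 { Handler(op_incr) } else { Handler(op_halt) }
}

fn run(state: &mut State) {
    let mut next = decode(state.ip);
    // Trampoline loop: every handler must return here before the next
    // one runs. This return-and-call round trip is precisely the
    // overhead that guaranteed tail calls would remove, since handlers
    // could then jump directly to one another without growing the stack.
    while let Some(handler) = (next.0)(state) {
        next = handler;
    }
}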

@Robbepop (Contributor, Author) commented Oct 23, 2022

WIP MCVE: https://github.com/Robbepop/rustc-regression-102952/blob/main/src/lib.rs

This is not yet a real MCVE, but we can use it as a starting point, and it does show inferior performance on nightly compared to stable. I am not entirely certain that it covers the same issues as wasmi does.

Benchmark using

cargo test --release benchmark -- --nocapture

The outputs I receive are:

  • stable: time: 363.410688ms
  • nightly: time: 431.440033ms

This demonstrates a moderate 18% slowdown. Since the wasmi slowdown was up to 80%, I bet we can do better with the MCVE. Also, if we use the same profile as wasmi (codegen-units = 1, lto = "fat"), both stable and nightly show similar performance. So this MCVE is either not a fit or it needs improvement. Also, it is not minimal at the moment.

@nikic added the A-LLVM label on Oct 23, 2022
@Robbepop (Contributor, Author) commented Oct 24, 2022

One thing I just noticed when comparing master.asm and pr.asm, as linked in this post, is the occurrence of .p2align directives throughout the files: master.asm contains 4 of them in total, whereas pr.asm contains countless. Might this be part of the overall problem, or at least a signal pointing to the root cause of the performance regression seen in wasmi-labs/wasmi#518?

The PR in question simply removes 4 variants of an enum.

It might very well be that this regression is not even connected to the initial regression I reported in this issue and instead demonstrates yet another performance pitfall, which should maybe get a separate issue. I would be thankful if someone could answer this.

@pacak (Contributor) commented Oct 27, 2022

> to the Cargo.toml of the workspace since cargo-asm does not yet support the --profile argument and we want to optimize with codegen-units=1 and lto="fat".

It does now. Also feel free to make a ticket if something is missing.

@Robbepop (Contributor, Author) commented:

> to the Cargo.toml of the workspace since cargo-asm does not yet support the --profile argument and we want to optimize with codegen-units=1 and lto="fat".
>
> It does now. Also feel free to make a ticket if something is missing.

Hi @pacak, thanks for implementing the feature in cargo-asm. But look again at who created that ticket back then: pacak/cargo-show-asm#63 👯

@pacak (Contributor) commented Oct 27, 2022

> But look again who created the ticket back then:

Right, that's mostly for other people who might be following the steps :)

@Robbepop (Contributor, Author) commented Feb 13, 2023

Any news on this?

I have a strong feeling that I encountered this performance regression bug again today in this wasmi PR:
wasmi-labs/wasmi#676

The PR does not change the number of instructions but merely changes the implementation of a handful of the more than one hundred instructions that are part of the big match expression. However, this unexpectedly leads to massive performance regressions of up to 200%. Note that even benchmarks that do not execute the changed instructions are affected.

I used cargo-show-asm to display the differences between the master branch and the PR:

edit: I was able to fix the performance regression. My idea was that it might be important for the optimizer that all match arms end in the same set of instructions. This is the commit: wasmi-labs/wasmi@325bdf1 Note that this commit doesn't change semantics; it simply moves a terminator instruction from a closure into the enclosing scope.

^ I will keep this as a reminder to myself for future regressions.
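For illustration, a hypothetical sketch of the shape of that fix (invented names and instruction set, not the actual wasmi code): the helper performs only the operation itself, while the terminator stays in the enclosing match arm so that all arms end in an identical instruction sequence.

// Hypothetical sketch of the fix's shape; not wasmi's actual code.
#[derive(Copy, Clone)]
enum Instr { Add, Sub, Halt }

fn execute(code: &[Instr]) {
    let mut stack: Vec<i64> = vec![3, 4];
    let mut ip = 0;
    // The helper performs only the operation itself ...
    fn binop(stack: &mut Vec<i64>, op: fn(i64, i64) -> i64) {
        let (rhs, lhs) = (stack.pop().unwrap(), stack.pop().unwrap());
        stack.push(op(lhs, rhs));
    }
    loop {
        match code[ip] {
            // ... while the terminator (`ip += 1`) lives in the match
            // arm, so every arm ends in the same tail of instructions.
            // Before the fix, the equivalent of the terminator was
            // hidden inside the closure for some arms only.
            Instr::Add => { binop(&mut stack, i64::wrapping_add); ip += 1 }
            Instr::Sub => { binop(&mut stack, i64::wrapping_sub); ip += 1 }
            Instr::Halt => return,
        }
    }
}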

@workingjubilee added the regression-from-stable-to-stable label and removed the regression-untriaged label on Mar 3, 2023
@workingjubilee (Contributor) commented:

@Robbepop Regarding the "one potential solution" header you mentioned a while back: Rust has for some time reserved the keyword become for exactly this reason; it has been imagined that it will be used for explicit tail calls.

So I believe the thing you are talking about is not really controversial. It is mostly awaiting someone assembling a draft RFC, thinking through all the implications, presenting it to the community (and especially T-lang), and implementing it.

Mind, the same person need not accomplish all of these steps; not even "thinking through all the implications" needs to happen alone. Indeed, if one were to make this happen, they would probably be best off starting by talking to the people working on MIR opts. If explicit tail calls can be handled before the code hits LLVM, that simplifies a lot of things (though it may make other things more complex; that sort of thing happens).
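For concreteness, a hypothetical sketch of what become-based dispatch might look like. This is not valid Rust today: become is merely a reserved keyword, and the exact syntax and semantics are still to be designed.

// HYPOTHETICAL: does not compile on any current Rust toolchain.
// It sketches how threaded-code dispatch could be written if
// explicit tail calls via `become` existed.
enum Instr { Incr, Halt }

struct State { ip: usize, acc: i64, code: Vec<Instr> }

fn dispatch(state: &mut State) -> i64 {
    match state.code[state.ip] {
        // `become` would guarantee that the call reuses the current
        // stack frame, so handlers jump directly to one another with
        // bounded stack usage and no central dispatch point.
        Instr::Incr => become op_incr(state),
        Instr::Halt => state.acc,
    }
}

fn op_incr(state: &mut State) -> i64 {
    state.acc += 1;
    state.ip += 1;
    become dispatch(state)
}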

@Robbepop (Contributor, Author) commented Mar 5, 2023

@workingjubilee Thank you for your response!

I think I see where you are heading with your reply ...
The guaranteed tail call proposal has a very long history in Rust. The first mention that I am aware of is this issue by Graydon Hoare himself from 2011: #217
It turned out back then that the ecosystem around Rust, especially LLVM, was not yet ready.

Since then there have been many proposals to add tail calls to Rust; a small summary of the issues: rust-lang/rfcs#1888 (comment)

Following the endless discussions in all those threads, I never felt that this feature in particular received much support from the core Rust team. The technical reasons usually were open questions about borrows/drops, which were resolved by rust-lang/rfcs#2691 (comment) but never received a proper follow-up response, as well as a major open question about the calling ABI, which would need to be adjusted for tail calls, from what I understood.
Furthermore, tail calls were frequently and incorrectly perceived as an "elegant" feature for language enthusiasts, oftentimes ignoring the fact that they solve niche problems that cannot be solved with any other language feature available in Rust. Therefore, tail call proposals were usually handled as very low priority feature requests.

This gave me personally the feeling that support is missing from exactly the group of people whose support a potential external contributor urgently needs. Writing yet another pre-RFC, a third base implementation for rustc, or another feature proposal issue did not seem like a good idea to me given the history of this feature. What is needed is commitment and support from the language team in order for someone like me to step up.

I am very open to ideas.

@pnkfelix (Member) commented:

Discussed in the T-compiler P-high review

At this point I am interpreting this issue as a request for some form of proper tail call (PTC) support. In particular, I interpret the issue author's comments as saying that they are willing to adapt their code in order to get the tail-call-elimination guarantees they want.

I too want some kind of PTC (I make no secret of my Scheme background). But given the many issues the project has, I also want to make sure we prioritize them properly. In this case, this issue strikes me as a P-medium feature request, not a P-high regression. Therefore I am downgrading it to P-medium.

@rustbot label: -P-high +P-medium

@rustbot added the P-medium label and removed the P-high label on Jun 30, 2023
@nikic (Contributor) commented Jun 30, 2023

FYI, there is some ongoing work on implementing tail call support; see #112788 for the tracking issue and rust-lang/rfcs#3407 for recent activity on the RFC.

@Robbepop (Contributor, Author) commented:

@pnkfelix Indeed, this issue can be closed once Rust tail calls have been merged, as I expect them to fix the underlying issue, given that they provide the control and stack growth guarantees stated in the current MVP design.
