enable parallel rustc front end in nightly builds #117435
Conversation
I'm gonna let people chime in on this for a day or two. If I don't approve by Thursday myself, r=me
Do we need to land #115220 prior to this PR? Or is the fact that it applies to more than a single thread enough not to? (In my mind, people are immediately going to try >1 threads.)

More than one thread is deadlock-prone, and anyone opting into it with the -Z flag has to accept that, I think.

@bors rollup=never
enable parallel rustc front end in nightly builds

Referring to the [MCP](rust-lang/compiler-team#681), this PR does the following:
1. Enables the parallel front end in nightly builds, keeping the default number of threads at 1. Users can then use the parallel rustc front end via the -Z threads=n option.
2. Sets the front end back to serial for beta/stable builds via bootstrap.
3. Switches the alt builders from parallel rustc to serial, so we have artifacts without the parallel front end to test against the artifacts with it.

r? `@oli-obk`
cc `@cjgillot` `@nnethercote` `@bjorn3` `@Kobzol`
r? @davidtwco

@bors r=oli-obk,davidtwco

☀️ Test successful - checks-actions
Finished benchmarking commit (f9b6446): comparison URL.

Overall result: ❌ regressions - ACTION NEEDED

Next Steps: If you can justify the regressions found in this perf run, please indicate this with @rustbot label: +perf-regression

Instruction count
This is a highly reliable metric that was used to determine the overall result at the top of this comment.

Max RSS (memory usage)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Cycles
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Binary size
This benchmark run did not return any relevant results for this metric.

Bootstrap: 636.284s -> 661.435s (3.95%)
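The bootstrap figure above is a plain percent change. As a quick sanity check of the arithmetic (times taken directly from the perf comment):

```python
# Percent change in bootstrap time reported by the perf bot.
old_s, new_s = 636.284, 661.435
pct = (new_s - old_s) / old_s * 100
print(f"{pct:.2f}%")  # → 3.95%
```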
89: Automated pull from upstream `master` r=Dajamante a=github-actions[bot]

This PR pulls the following changes from the upstream repository:

* rust-lang/rust#117006
* rust-lang/rust#117511
* rust-lang/rust#117641
* rust-lang/rust#117637
* rust-lang/rust#117631
* rust-lang/rust#117516
* rust-lang/rust#117190
* rust-lang/rust#117292
* rust-lang/rust#117603
* rust-lang/rust#116988
* rust-lang/rust#117630
* rust-lang/rust#117615
* rust-lang/rust#117613
* rust-lang/rust#117592
* rust-lang/rust#117578
* rust-lang/rust#117435
* rust-lang/rust#117607

90: bump serde and serde_derive r=tshepang a=Dajamante

Trying to get around the failure seen in #86

Co-authored-by: Ralf Jung <post@ralfj.de>
Co-authored-by: Esteban Küber <esteban@kuber.com.ar>
Co-authored-by: SparrowLii <liyuan179@huawei.com>
Co-authored-by: Matthias Krüger <matthias.krueger@famsik.de>
Co-authored-by: Gurinder Singh <frederick.the.fool@gmail.com>
Co-authored-by: Michael Goulet <michael@errs.io>
Co-authored-by: Thom Chiovoloni <thom@shift.click>
Co-authored-by: klensy <klensy@users.noreply.github.com>
Co-authored-by: Jack Huey <31162821+jackh726@users.noreply.github.com>
Co-authored-by: bjorn3 <17426603+bjorn3@users.noreply.github.com>
Co-authored-by: hkalbasi <hamidrezakalbasi@protonmail.com>
Co-authored-by: bors <bors@rust-lang.org>
Co-authored-by: Sven Marnach <sven@mozilla.com>
Co-authored-by: Rémy Rakic <remy.rakic+github@gmail.com>
Co-authored-by: aissata <aimaiga2@gmail.com>
@pnkfelix Sorry for not explaining this in the MCP. We decided in the working group meeting early this year that wall-time is a better choice for measuring the perf results of the parallel front end. Therefore, the measurement we chose is wall-time + non-relevant. (You may also need to exclude secondary categories.) In this case, the regression of #117435 should be 1.65%.
@rustbot label: +perf-regression-triaged |
Thanks!
```diff
@@ -77,7 +77,7 @@ const LLD_FILE_NAMES: &[&str] = &["ld.lld", "ld64.lld", "lld-link", "wasm-ld"];
 ///
 /// If you make any major changes (such as adding new values or changing default values), please
 /// ensure that the associated PR ID is added to the end of this list.
-pub const CONFIG_CHANGE_HISTORY: &[usize] = &[115898, 116998];
+pub const CONFIG_CHANGE_HISTORY: &[usize] = &[115898, 116998, 117435];
```
When you do this, please explain in the PR description what we have to do to update our configs. I was sent here by bootstrap ("WARNING: there have been changes to x.py since you last updated") and now I have no idea what I'm supposed to do.
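To illustrate the mechanism behind that warning: a change-tracking list like `CONFIG_CHANGE_HISTORY` lets bootstrap report which PRs changed the config since the user's last update. The sketch below is hypothetical (the `changes_since` helper is not bootstrap's actual code), but it shows the idea of filtering entries newer than a last-seen PR ID:

```rust
// Hypothetical sketch: given the last PR ID a user's config was updated
// against, list the config-changing PRs they have not yet seen.
const CONFIG_CHANGE_HISTORY: &[usize] = &[115898, 116998, 117435];

fn changes_since(last_seen: usize) -> Vec<usize> {
    CONFIG_CHANGE_HISTORY
        .iter()
        .copied()
        .filter(|&id| id > last_seen)
        .collect()
}

fn main() {
    // A user last updated at PR 116998, so this PR (117435) is flagged.
    println!("{:?}", changes_since(116998)); // → [117435]
}
```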
Referring to the MCP, this PR does the following:

1. Enables the parallel front end in nightly builds, keeping the default number of threads at 1. Users can then use the parallel rustc front end via the -Z threads=n option.
2. Sets the front end back to serial for beta/stable builds via bootstrap.
3. Switches the alt builders from parallel rustc to serial, so we have artifacts without the parallel front end to test against the artifacts with it.
r? @oli-obk
cc @cjgillot @nnethercote @bjorn3 @Kobzol
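As a usage sketch of step 1 (assuming a nightly toolchain is installed; the file and thread count are illustrative):

```shell
# Opt into the parallel front end with 8 threads on nightly.
# -Z flags are unstable, so this is rejected on beta/stable.
rustc +nightly -Z threads=8 main.rs

# For a whole Cargo project, pass the flag through RUSTFLAGS.
RUSTFLAGS="-Z threads=8" cargo +nightly build
```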