
Mark Cell::replace() as #[inline] #102548

Merged · 1 commit · Oct 2, 2022

Conversation

@nikic (Contributor) commented Oct 1, 2022

Giving this a try based on #102539 (comment).

@rustbot rustbot added the T-libs Relevant to the library team, which will review and decide on the PR/issue. label Oct 1, 2022
@rustbot (Collaborator) commented Oct 1, 2022

Hey! It looks like you've submitted a new PR for the library teams!

If this PR contains changes to any rust-lang/rust public library APIs, please comment with @rustbot label +T-libs-api -T-libs to tag it appropriately. If this PR contains changes to any unstable APIs, please edit the PR description to add a link to the relevant API Change Proposal, or create one if you haven't already. If you're unsure where your change falls, no worries: just leave it as is, and the reviewer will take a look and decide whether to forward it on if necessary.

Examples of T-libs-api changes:

  • Stabilizing library features
  • Introducing insta-stable changes such as new implementations of existing stable traits on existing stable types
  • Introducing new or changing existing unstable library APIs (excluding permanently unstable features / features without a tracking issue)
  • Changing public documentation in ways that create new stability guarantees
  • Changing observable runtime behavior of library APIs

@rust-highfive (Collaborator)

r? @m-ou-se

(rust-highfive has picked a reviewer for you, use r? to override)

@rust-highfive rust-highfive added the S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. label Oct 1, 2022
@nikic (Contributor, Author) commented Oct 1, 2022

@bors try @rust-timer queue

@rust-timer (Collaborator)

Awaiting bors try build completion.

@rustbot label: +S-waiting-on-perf

@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label Oct 1, 2022
@bors (Contributor) commented Oct 1, 2022

⌛ Trying commit 49eaa0f with merge fb20a3c06e762760ff6f838384c1a2803042e6c2...

@bors (Contributor) commented Oct 1, 2022

☀️ Try build successful - checks-actions
Build commit: fb20a3c06e762760ff6f838384c1a2803042e6c2

@rust-timer (Collaborator)

Queued fb20a3c06e762760ff6f838384c1a2803042e6c2 with parent 744e397, future comparison URL.

@rust-timer (Collaborator)

Finished benchmarking commit (fb20a3c06e762760ff6f838384c1a2803042e6c2): comparison URL.

Overall result: ❌✅ regressions and improvements - ACTION NEEDED

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: +S-waiting-on-review -S-waiting-on-perf +perf-regression

Instruction count

This is a highly reliable metric that was used to determine the overall result at the top of this comment.

                            mean¹   range            count²
Regressions ❌ (primary)     1.1%   [0.4%, 1.8%]       2
Regressions ❌ (secondary)   1.0%   [0.4%, 1.4%]       7
Improvements ✅ (primary)   -0.3%   [-1.4%, -0.2%]    37
Improvements ✅ (secondary) -0.5%   [-2.0%, -0.2%]    25
All ❌✅ (primary)          -0.2%   [-1.4%, 1.8%]     39

Max RSS (memory usage)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

                            mean¹   range             count²
Regressions ❌ (primary)     6.1%   [2.0%, 14.2%]       5
Regressions ❌ (secondary)   2.7%   [2.2%, 3.3%]        6
Improvements ✅ (primary)   -9.0%   [-18.4%, -4.1%]    19
Improvements ✅ (secondary) -3.0%   [-3.1%, -2.9%]      2
All ❌✅ (primary)          -5.9%   [-18.4%, 14.2%]    24

Cycles

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

                            mean¹   range             count²
Regressions ❌ (primary)     -      -                   0
Regressions ❌ (secondary)   5.6%   [4.9%, 6.3%]        6
Improvements ✅ (primary)   -6.2%   [-29.6%, -1.5%]   224
Improvements ✅ (secondary) -4.1%   [-15.1%, -1.3%]   141
All ❌✅ (primary)          -6.2%   [-29.6%, -1.5%]   224

Footnotes

  1. the arithmetic mean of the percent change

  2. number of relevant changes

@rustbot rustbot added perf-regression Performance regression. and removed S-waiting-on-perf Status: Waiting on a perf run to be completed. labels Oct 1, 2022
@Kobzol (Contributor) commented Oct 1, 2022

Note that the configuration of the perf CI machine was changed just before this perf run (temporarily, for an experiment), so the cycle/wall-time/bootstrap results are not very indicative until the first PR merged into master has been benchmarked with the new configuration. The instruction counts should be OK, I think.

@scottmcm (Member) commented Oct 1, 2022

This is reasonable for how we've been doing inlines in libs, and there's nothing scary to me about those instruction results (clearly more good than bad), so

r? @scottmcm
@bors r+ rollup=never

We'll confirm with the after-merge perf run.

@bors (Contributor) commented Oct 1, 2022

📌 Commit 49eaa0f has been approved by scottmcm

It is now in the queue for this repository.

@rust-highfive rust-highfive assigned scottmcm and unassigned m-ou-se Oct 1, 2022
@bors bors added S-waiting-on-bors Status: Waiting on bors to run and complete tests. Bors will change the label on completion. and removed S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. labels Oct 1, 2022
@bors (Contributor) commented Oct 2, 2022

⌛ Testing commit 49eaa0f with merge 756e7be...

@matthiaskrgr (Member)

Looks like this regresses the time rustc needs to compile cargo by up to 25%, though? 🤔
Are we sure this is worth it?
https://perf.rust-lang.org/compare.html?start=744e397d8855f7da87d70aa8d0bd9e0f5f0b51a1&end=fb20a3c06e762760ff6f838384c1a2803042e6c2&stat=wall-time

@Kobzol (Contributor) commented Oct 2, 2022

These numbers are not real, see #102548 (comment).

@bors (Contributor) commented Oct 2, 2022

☀️ Test successful - checks-actions
Approved by: scottmcm
Pushing 756e7be to master...

@bors bors added the merged-by-bors This PR was explicitly merged by bors. label Oct 2, 2022
@bors bors merged commit 756e7be into rust-lang:master Oct 2, 2022
@rustbot rustbot added this to the 1.66.0 milestone Oct 2, 2022
@rust-timer (Collaborator)

Finished benchmarking commit (756e7be): comparison URL.

Overall result: ❌✅ regressions and improvements - ACTION NEEDED

Next Steps: If you can justify the regressions found in this perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please open an issue or create a new PR that fixes the regressions, add a comment linking to the newly created issue or PR, and then add the perf-regression-triaged label to this PR.

@rustbot label: +perf-regression
cc @rust-lang/wg-compiler-performance

Instruction count

This is a highly reliable metric that was used to determine the overall result at the top of this comment.

                            mean¹   range            count²
Regressions ❌ (primary)     0.9%   [0.4%, 1.8%]       3
Regressions ❌ (secondary)   0.9%   [0.2%, 1.3%]      10
Improvements ✅ (primary)   -0.3%   [-1.2%, -0.2%]    14
Improvements ✅ (secondary) -0.4%   [-1.9%, -0.2%]    12
All ❌✅ (primary)          -0.1%   [-1.2%, 1.8%]     17

Max RSS (memory usage)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

                            mean¹   range            count²
Regressions ❌ (primary)     2.7%   [1.8%, 3.5%]       2
Regressions ❌ (secondary)   5.5%   [2.8%, 10.9%]      4
Improvements ✅ (primary)    -      -                  0
Improvements ✅ (secondary)  -      -                  0
All ❌✅ (primary)           2.7%   [1.8%, 3.5%]       2

Cycles

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

                            mean¹   range            count²
Regressions ❌ (primary)     -      -                  0
Regressions ❌ (secondary)   -      -                  0
Improvements ✅ (primary)    -      -                  0
Improvements ✅ (secondary) -2.6%   [-3.3%, -1.8%]    11
All ❌✅ (primary)           -      -                  0

Footnotes

  1. the arithmetic mean of the percent change

  2. number of relevant changes

@pnkfelix (Member) commented Oct 5, 2022

> This is reasonable for how we've been doing inlines in libs, and there's nothing scary to me about those instruction results (clearly more good than bad), so
>
> r? @scottmcm @bors r+ rollup=never
>
> We'll confirm with the after-merge perf run.

Hey, @scottmcm: it looks to me like the final results were still more good than bad, but not quite as good as predicted from the earlier run. Namely, the earlier try run reported 37 primary improvements, but the post-merge run reports only 14.

Overall, I think this still looks like something we should go ahead with. In some ways the biggest decision is whether to be concerned about that 1.8% hit to the build time for serde_derive, a cost you already said you were willing to bear; I just want to make sure you're still willing to bear it for a somewhat smaller win.

@scottmcm (Member) commented Oct 5, 2022

I'm curious how, for serde_derive, opt-full was +1.76% but opt-incr-full was -0.05% (below significance threshold).

I do still think this makes sense, based on the analysis from #102539 (comment) that inspired this PR in the first place. But it's always hard for me to judge these, so I'm mostly just leaning on "well, it said -0.1% overall" and that the diff is perfectly reasonable, so could plausibly be persuaded otherwise if anyone feels strongly.

Labels
merged-by-bors · perf-regression · S-waiting-on-bors · T-libs

10 participants