Regressions from switching corelib to R2R #22128

Open · AndyAyersMS opened this issue Jan 22, 2019 · 10 comments

AndyAyersMS commented Jan 22, 2019

We have enough perf history now that we can call out a few regressions from #21497, where corelib prejitting was changed from fragile ngen to R2R.

Crypto regressed ~4% on windows (505 -> 525) and ~6% on linux (515 -> 545)

[chart: Crypto benchmark perf history]

Csc/Dataflow regressed ~5% on windows (413 -> 432) and ~5% on linux (450 -> 470, noisy)

[chart: Csc/Dataflow benchmark perf history]

PerfLab / GetMethod20 regressed ~7% (3.52 -> 3.72). The other GetMethod tests regressed as well.

[chart: GetMethod benchmark perf history]
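For context, a GetMethod-style reflection microbenchmark has roughly the following shape. This is only an illustrative sketch, not the actual PerfLab test source; the `Target` type, iteration count, and timing code here are made up:

```csharp
using System;
using System.Diagnostics;
using System.Reflection;

class GetMethodSketch
{
    // Hypothetical target type with a few methods to search through.
    class Target
    {
        public void M0() { }
        public void M1() { }
        public void M2(int x) { }
    }

    static void Main()
    {
        const int Iterations = 1_000_000;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            // The measured operation: resolve a method by name and signature.
            MethodInfo mi = typeof(Target).GetMethod("M2", new[] { typeof(int) });
            if (mi == null) throw new Exception("lookup failed");
        }
        sw.Stop();
        Console.WriteLine($"{sw.Elapsed.TotalMilliseconds * 1_000_000 / Iterations:F1} ns per lookup");
    }
}
```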

There may be some other cases that turn up -- will add them here.

cc @fadimounir

jkotas commented Jan 22, 2019

@AndyAyersMS Are these numbers with tiered JITing on? Do the compute-intensive benchmarks (crypto, GetMethod20) allow the system to warm up and the tiered JIT to kick in?

AndyAyersMS commented Jan 22, 2019

I think these all still run with tiering disabled -- @jorive can confirm.

jkotas commented Jan 22, 2019

I would primarily be interested in cases where the switch to R2R regressed perf and tiered JITing does not recover the regression. Do we have any cases like that? Those would be P1 to look into.

The regressions in compute-intensive benchmarks with R2R and tiered JITing disabled are P2. We can look at them to see whether there is something easy to fix, but we know that R2R is not perfect by design. We expect the tiered JIT to recover the loss and, in places, to generate even better code than what fragile NGen for CoreLib ever had.

jorive commented Jan 22, 2019

I was looking at the script/configuration for how those results are measured. In this case, the tests run with full_opt: COMPlus_TieredCompilation=0. It's set here, and specified by the CI script here.
Currently, the performance micro benchmarks are not running with tiering enabled.
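For anyone reproducing this locally, the full_opt configuration boils down to launching the benchmark process with tiered compilation turned off via the COMPlus_TieredCompilation environment variable. A minimal sketch (the corerun host and benchmark assembly name are placeholders, not the actual CI invocation):

```csharp
using System.Diagnostics;

class RunnerSketch
{
    static void Main()
    {
        // Placeholder host/assembly names -- not the actual CI command line.
        var psi = new ProcessStartInfo("corerun", "CryptoBenchmark.dll")
        {
            UseShellExecute = false
        };

        // full_opt: disable tiered compilation so the JIT produces optimized
        // code directly. Set to "1" (or remove) to measure the default,
        // tiered configuration that customers actually run.
        psi.EnvironmentVariables["COMPlus_TieredCompilation"] = "0";

        using (var p = Process.Start(psi))
        {
            p.WaitForExit();
        }
    }
}
```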

jkotas commented Jan 22, 2019

the performance micro benchmarks are not running with tiering enabled

It means that we are not measuring the performance that our customers are going to see in default configuration. Do we have a plan/issue to fix this?

sergiy-k commented Jan 22, 2019

+1. I was going to ask the same question as @jkotas. Was it intentional to turn tiered JIT'ing off?
Regardless, I'll ask @fadimounir to take a look when he is back from vacation, at least to understand the problem.

jorive commented Jan 22, 2019

@jkotas essentially, the micro benchmarks run multiple iterations within a loop and discard the warmup iteration, so most of them do not capture tiering behavior. For tiered jitting we use real-world scenarios such as JitBench/MusicStore.
But I agree we should be running the default configuration we are shipping, so feel free to open an issue or send a PR.
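To make the iteration/warmup point concrete, a harness of that shape looks roughly like the sketch below; the names, counts, and workload are illustrative, not the actual perf harness:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

class HarnessSketch
{
    static void Main()
    {
        const int IterationCount = 10;
        var results = new List<double>();

        for (int i = 0; i < IterationCount; i++)
        {
            var sw = Stopwatch.StartNew();
            Workload();
            sw.Stop();

            // Discard the first (warmup) iteration. With only one short warmup,
            // a tiered JIT may not have promoted the hot methods yet, so the
            // reported numbers largely reflect pre-promotion code when tiering
            // is enabled, or fully optimized code when it is disabled.
            if (i == 0) continue;
            results.Add(sw.Elapsed.TotalMilliseconds);
        }

        double sum = 0;
        foreach (var r in results) sum += r;
        Console.WriteLine($"mean: {sum / results.Count:F2} ms over {results.Count} iterations");
    }

    static void Workload()
    {
        // Stand-in for the measured benchmark body.
        long acc = 0;
        for (int i = 0; i < 1_000_000; i++) acc += i;
        GC.KeepAlive(acc);
    }
}
```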

benaadams commented Jan 22, 2019

There was a regression in the ASP.NET Core db benchmarks around the same time, but it could just be a coincidence? aspnet/Benchmarks#607

[chart: ASP.NET Core db benchmark results]

Didn't seem to affect the non-db benchmarks.

/cc @sebastienros

benaadams commented Jan 22, 2019

Note ASP.NET Core took a while to get a continuously updating runtime flow working, so it's also a jump in versions between before and after (27219-3 => 27317-3) 😢

jkotas commented Jan 22, 2019

I agree we should be running the default configuration we are shipping, so feel free to open an issue

#22134

@sergiy-k sergiy-k added this to the 3.0 milestone Jan 31, 2019
