
Benchmark alternatives to j.u.HashMap #7

Closed
xeno-by opened this issue Nov 16, 2017 · 2 comments


xeno-by commented Nov 16, 2017

Both Scalac and Dotty use hand-written hashtables to represent scopes. Moreover, Kentucky Mule also uses hand-written hashtables, highlighting the fact that they perform better than j.u.HashMap. Let's see how alternative representations for scopes will affect our performance.
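To make the comparison concrete, here is a minimal sketch of the kind of hand-written hash table compilers use for scopes instead of `j.u.HashMap`: a flat bucket array of chained entries, no boxing, no `Map.Entry` indirection, and insertion that is a single allocation. The names `ScopeTable` and `Entry` are hypothetical; this is an illustration of the technique, not the actual Scalac/Dotty implementation.

```java
// Hypothetical minimal scope table: a hand-rolled chained hash table for
// name -> symbol bindings, avoiding j.u.HashMap's generic Entry objects.
final class ScopeTable {
    static final class Entry {
        final String name;   // symbol name (interned in a real compiler)
        final Object symbol; // the symbol bound to the name
        Entry next;          // next entry in the same hash bucket
        Entry(String name, Object symbol, Entry next) {
            this.name = name; this.symbol = symbol; this.next = next;
        }
    }

    private Entry[] buckets = new Entry[16]; // length is always a power of 2
    private int size = 0;

    // Enter a binding: one Entry allocation, pushed onto its bucket chain.
    void enter(String name, Object symbol) {
        if (size >= buckets.length * 3 / 4) grow();
        int i = index(name, buckets.length);
        buckets[i] = new Entry(name, symbol, buckets[i]);
        size++;
    }

    // Walk the bucket chain; the most recent binding for a name wins.
    Object lookup(String name) {
        for (Entry e = buckets[index(name, buckets.length)]; e != null; e = e.next)
            if (e.name.equals(name)) return e.symbol;
        return null;
    }

    // Double the bucket array and redistribute existing entries in place.
    private void grow() {
        Entry[] old = buckets;
        buckets = new Entry[old.length * 2];
        for (Entry head : old) {
            Entry e = head;
            while (e != null) {
                Entry next = e.next;
                int i = index(e.name, buckets.length);
                e.next = buckets[i];
                buckets[i] = e;
                e = next;
            }
        }
    }

    // Power-of-2 masking also clears the sign bit of negative hash codes.
    private static int index(String name, int len) {
        return name.hashCode() & (len - 1);
    }
}
```

The benchmark question in this issue is essentially whether this shape of table (or `j.u.HashMap`, or Guava's multimap) wins for our workload.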

@xeno-by xeno-by added this to the 0.1.0 "Bare minimum" milestone Nov 16, 2017

jvican commented Nov 16, 2017

I don't know to what extent this may be of interest, but from the kentuckymule notes:

I started with representing a Symbol as a wrapper around a mutable HashMap[Name, ArrayBuffer[Symbol]] that stores its children. Benchmarking showed that the symbol table can be populated with symbols from Typer.scala at the rate of 85k full symbol table creations per second. Switching from Scala's mutable Map to Guava's MultiMap improved the performance from 85k to 125k per second.

Later work on fixing bugs decreased the performance to 113k ops/s in BenchmarkEnter.enter (using the tree from parsing Typer.scala). Switching from a combo of MultiMap and ArrayBuffer to Scope (borrowed almost verbatim from dotty) increased performance to 348k ops/s. This is an astonishing performance gain and shows the power of a specialized, highly optimized data structure.
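A hedged sketch of why a Scope-style table can replace the MultiMap-plus-ArrayBuffer combo described above: overloaded symbols sharing a name simply coexist on the same bucket chain, so entering a binding is one entry allocation and collecting all bindings for a name is a chain walk, with no per-name collection to allocate or resize. This is a simplified illustration of the idea, not dotty's actual `Scope` implementation; `MultiScope` and its members are hypothetical names.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: multiple symbols per name live on one bucket chain,
// replacing a MultiMap<Name, Symbol> backed by per-name buffers.
final class MultiScope {
    static final class Entry {
        final String name;
        final Object symbol;
        final Entry next; // next entry in the same bucket (possibly same name)
        Entry(String name, Object symbol, Entry next) {
            this.name = name; this.symbol = symbol; this.next = next;
        }
    }

    private final Entry[] buckets = new Entry[64]; // fixed size for the sketch

    // One allocation per binding; duplicates of a name are simply chained.
    void enter(String name, Object symbol) {
        int i = name.hashCode() & (buckets.length - 1);
        buckets[i] = new Entry(name, symbol, buckets[i]);
    }

    // All symbols entered under `name`, most recent first.
    List<Object> lookupAll(String name) {
        List<Object> result = new ArrayList<>();
        int i = name.hashCode() & (buckets.length - 1);
        for (Entry e = buckets[i]; e != null; e = e.next)
            if (e.name.equals(name)) result.add(e.symbol);
        return result;
    }
}
```

The reported jump from 113k to 348k ops/s is plausibly this effect: the combo pays for a map entry plus a growable buffer per name, while the chained design pays one small allocation per binding.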


xeno-by commented Dec 16, 2017

After preliminary experimentation with the "Benchmark architectural change XXX" tickets, we found that the performance effects of many of these changes lie within the run-to-run variance of our current benchmark suite. This makes it hard to form an informed judgement about them, so we decided to postpone further experiments until we implement support for more language features and make our benchmark suite more diverse.

@xeno-by xeno-by removed this from the 0.1.0 "Bare minimum" milestone Dec 16, 2017
@xeno-by xeno-by closed this as completed Mar 24, 2018
@xeno-by xeno-by added the Outline label Jul 7, 2018