
Add benchmarks #1103

Merged
merged 3 commits into JabRef:master on Apr 5, 2016

Conversation

@tobiasdiez
Member

tobiasdiez commented Apr 5, 2016

This PR adds some basic benchmarks for parsing and writing a bib file. The results are as follows for a database consisting of 1000 entries.

| Benchmark | Score | Error | Units |
| --- | --- | --- | --- |
| Benchmarks.parse | 49736.582 | ± 788.879 | ops/s |
| Benchmarks.write | 0.706 | ± 0.012 | ops/s |
| Benchmarks.search | 258.838 | ± 5.604 | ops/s |
| Benchmarks.inferBibDatabaseMode | 1297.622 | ± 22.910 | ops/s |

As one can see, the parse operation is many orders of magnitude faster than writing. I had a closer look at the write operation, and it turned out that 66% of the time is spent in Database.getMode(). Some small changes improved the situation by a factor of 10:

| Benchmark | Score | Error | Units |
| --- | --- | --- | --- |
| Benchmarks.parse | 42031.971 | ± 8188.833 | ops/s |
| Benchmarks.write | 8.299 | ± 0.304 | ops/s |
| Benchmarks.search | 248.093 | ± 7.573 | ops/s |
| Benchmarks.inferBibDatabaseMode | 20759.711 | ± 397.031 | ops/s |

I suspect the changes in #1100 will improve the situation even further (since there the database mode is cached).
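The caching idea mentioned above can be sketched as follows. This is a hypothetical illustration, not JabRef's actual Database/BibDatabaseMode code; all class and method names here are assumptions.

```java
// Hypothetical sketch of caching an expensive mode inference,
// in the spirit of the caching described for #1100.
// These names are illustrative, not JabRef's real API.
enum BibDatabaseModeSketch { BIBTEX, BIBLATEX }

class DatabaseSketch {
    private BibDatabaseModeSketch cachedMode; // null until first computed
    int inferCalls = 0; // counts how often the expensive scan runs

    BibDatabaseModeSketch getMode() {
        if (cachedMode == null) {
            cachedMode = inferMode(); // expensive scan happens only once
        }
        return cachedMode;
    }

    private BibDatabaseModeSketch inferMode() {
        inferCalls++;
        // Stand-in for scanning all entries to decide the mode.
        return BibDatabaseModeSketch.BIBTEX;
    }
}
```

With a pattern like this, repeated getMode() calls during a write no longer repeat the scan, which is consistent with the order-of-magnitude speedup reported above.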

(By the way, gradle jmh runs the benchmarks, so it's pretty simple to use.)

  • Change in CHANGELOG.md described
  • Tests created for changes
  • Screenshots added (for bigger UI changes)

@tobiasdiez changed the title from [WIP] Add benchmarks to Add benchmarks Apr 5, 2016


Member

tobiasdiez commented Apr 5, 2016

Ready for review.

import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.runner.RunnerException;

@State(Scope.Thread)
public class Benchmarks {


@simonharrer

simonharrer Apr 5, 2016

Contributor

can we define any thresholds?


@tobiasdiez

tobiasdiez Apr 5, 2016

Member

Not that I'm aware of. In the end, JMH only runs the benchmarks and writes a result file.

I found plugins for Jenkins and for TeamCity, but apparently there is nothing similar for CircleCI or Travis.

jmh {
    warmupIterations = 5
    iterations = 10
    fork = 2
}


@simonharrer

simonharrer Apr 5, 2016

Contributor

Nice configuration in gradle :)


Contributor

simonharrer commented Apr 5, 2016

Really like this. However, we cannot easily benchmark the GUI performance. But we can benchmark the MainTableDataModel from #1100, which does all the heavy lifting regarding sorting, filtering, etc.

It would be awesome if we could track the progress of these benchmarks, but that would require something like Jenkins. Probably something for the future.


Member

tobiasdiez commented Apr 5, 2016

I changed the code according to your comments.

Yes, it would be nice to have an overview of the performance for each PR. Since JMH writes the results to a text file in build\reports\jmh, it shouldn't be too hard to get the numbers.
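Getting the numbers out of such a report could indeed be a few lines of code. The following is a hypothetical sketch, not JabRef code: it parses result lines shaped like the tables in this PR ("name score ± error units"), and the class and method names are assumptions.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical helper: extract "Benchmarks.x -> score" pairs from
// result lines shaped like "Benchmarks.parse 49736.582 ± 788.879 ops/s".
class JmhScoreExtractor {
    static Map<String, Double> extractScores(String report) {
        Map<String, Double> scores = new LinkedHashMap<>();
        for (String line : report.split("\n")) {
            String[] parts = line.trim().split("\\s+");
            if (parts.length >= 2 && parts[0].startsWith("Benchmarks.")) {
                // parts[0] is the benchmark name, parts[1] its score.
                scores.put(parts[0], Double.parseDouble(parts[1]));
            }
        }
        return scores;
    }
}
```

A tool like this could run after each CI build and compare the extracted scores against the previous run.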


Contributor

simonharrer commented Apr 5, 2016

👍 LGTM

@tobiasdiez merged commit 97e1293 into JabRef:master Apr 5, 2016

3 checks passed

  • ci/circleci: Your tests passed on CircleCI!
  • codecov/project: 23.59% remains the same compared to 93fdb02
  • continuous-integration/travis-ci/pr: The Travis CI build passed

@mlep referenced this pull request Apr 7, 2016

Closed

[Blog's post] A faster JabRef #35

@koppor deleted the tobiasdiez:benchmark branch Apr 26, 2016
