
Request for feature: Cross-Versions benchmarking #174

Open
diesalbla opened this issue Oct 19, 2019 · 2 comments

Comments

@diesalbla

For most Scala projects, when I submit a Pull Request and need to evaluate its performance impact, I have to carry out the following process manually:

  • Check out each distinct version, with and without the changes.
  • Run the same sbt-jmh command in each of them.
  • Retrieve and store the results from each run.
  • Arrange the results of both runs in a spreadsheet, which I then use to show the relative error of each measure and the observed improvement.

It would be desirable if the sbt-jmh plugin could offer a command to carry out these operations directly and generate a single summary table, including the score and error for each commit and the relative difference between them. For example:

Benchmark  Mode   Cnt  Score(master)  Error  Score(branch)  Error  Change  Units
Fili       thrpt   10          45000     42          42000     41  93.3 %  B/s
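The Change column above is just the branch score expressed as a percentage of the baseline score: 42000 / 45000 × 100 ≈ 93.3. A minimal sketch of that computation (the function name is hypothetical):

```shell
# Hypothetical helper: relative change of a branch score against a baseline,
# as a percentage (below 100 means a slowdown for throughput scores).
relative_change() {
  awk -v base="$1" -v branch="$2" 'BEGIN { printf "%.1f", branch / base * 100 }'
}

relative_change 45000 42000   # prints 93.3
```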
@guizmaii
Contributor

Why was this closed, @diesalbla? Is it supported now? Did you find a solution?

@diesalbla
Author

@guizmaii This is an issue I opened a few years ago while contributing to fs2, because of the small inconvenience of switching branches and keeping results. It is not something I am aiming to do soon, and it did not seem right to burden the sbt-jmh maintainers with it. Maybe all that is needed is a shell script to switch branches, set up, compile, run the benchmarks, and save the results.
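Such a script might look like the sketch below. The function name, branch names, and output directory are all hypothetical; the flags assume JMH's standard `-rf`/`-rff` options for writing JSON results, passed through sbt-jmh's `jmh:run` task:

```shell
#!/usr/bin/env sh
# Hypothetical helper: run the same JMH benchmark command on two git refs
# and save one JSON result file per ref. All names here are assumptions.
run_cross_bench() {
  base="$1"      # baseline ref, e.g. master
  branch="$2"    # ref with the changes under test
  out_dir="$3"   # directory collecting the result files
  mkdir -p "$out_dir"
  for ref in "$base" "$branch"; do
    git checkout "$ref"
    # JMH's -rf/-rff flags select the result format and result file.
    sbt "jmh:run -rf json -rff $out_dir/$ref.json"
  done
  git checkout "$branch"   # end up back on the feature branch
}
```

From there the two JSON files can be diffed or loaded into a spreadsheet, and a CI job could call the same function on every PR.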

I have not found any solution to do this locally. I understand, though, that there are some CI workflows to run benchmarks on each commit or PR. That may alleviate the need for this.

Leaving open for now.

@diesalbla diesalbla reopened this May 25, 2023