
Track package performance over time #56

Open · jimhester opened this issue Apr 24, 2017 · 7 comments
@jimhester (Contributor) commented Apr 24, 2017
covr and codecov.io are great for tracking code coverage during a package's development. Another aspect of a package that would be useful to track is the performance of one or more benchmark functions.

This is useful for package authors, to ensure they don't inadvertently introduce a performance regression when adding new features. It is also useful for users, to see how much a new version improves or degrades performance. We could also run the benchmarks when a PR is submitted, to see how the changes impact current performance.

I wrote a rough example at https://github.com/jimhester/benchthat and @krlmlr has dplyr-specific code to do this at https://krlmlr.github.io/dplyr.benchmark/.

Some useful features to me would be:

1. Store the results in an easy-to-parse file in the repository (my draft puts them in /docs/benchmarks).
2. Helper functions that are easy to run automatically in a package's tests (a minimal sketch follows this list).
3. Run a benchmark retroactively over the repo history.
   - Is there a peak-finding algorithm / git bisect approach we could use to find performance breakpoints, so we don't have to exhaustively benchmark each commit?
4. Visualizing and reporting on benchmark results.
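For features (1) and (2), here is a minimal sketch of what such a helper could look like, assuming microbenchmark for timing; the function name, CSV layout, and the /docs/benchmarks path are illustrative, not benchthat's actual API:

```r
# Hypothetical helper (not benchthat's real API): time a function and
# append one row per run to an easy-to-parse CSV under docs/benchmarks/.
benchmark_and_record <- function(name, fun, times = 25L,
                                 path = "docs/benchmarks/results.csv") {
  timing <- microbenchmark::microbenchmark(fun(), times = times)
  row <- data.frame(
    name      = name,
    commit    = substr(system("git rev-parse HEAD", intern = TRUE), 1, 7),
    date      = format(Sys.time(), "%Y-%m-%d %H:%M:%S"),
    median_ms = stats::median(timing$time) / 1e6  # $time is in nanoseconds
  )
  dir.create(dirname(path), recursive = TRUE, showWarnings = FALSE)
  write.table(row, path, sep = ",", append = file.exists(path),
              col.names = !file.exists(path), row.names = FALSE)
  invisible(row)
}

# Example call from a test or CI script (my_filter/df_small are placeholders):
# benchmark_and_record("filter_small", function() my_filter(df_small))
```

Taking a function rather than a bare expression avoids lazy-evaluation caching, so each timed iteration actually reruns the computation.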
@noamross commented Apr 24, 2017

(Comment minimized.)
@jimhester (Contributor, Author) commented Apr 24, 2017

Rperform seems to do most of this already, but it clearly needs more exposure and use, and possibly some thought about better integration with pkgdown / Travis so it is more useful for PR results.
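For comparison, a rough sketch of the retroactive piece (feature 3 above), not Rperform's actual interface: walk recent commits, reinstall the package at each, and record timings with the hypothetical benchmark_and_record() helper sketched earlier. For the bisect idea, `git bisect run` with a script that exits non-zero when the benchmark exceeds a threshold would avoid benchmarking every commit.

```r
# Sketch only: benchmark the last n commits, oldest first. Assumes a
# clean working tree; my_filter()/df_small are placeholders.
benchmark_history <- function(n_commits = 20L) {
  shas <- system(sprintf("git rev-list --max-count=%d HEAD", n_commits),
                 intern = TRUE)
  for (sha in rev(shas)) {
    system(paste("git checkout", sha))
    devtools::install(quiet = TRUE)  # reinstall the package at this commit
    benchmark_and_record("filter_small", function() my_filter(df_small))
  }
  system("git checkout -")           # return to the original branch
}
```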

@jennybc (Member) commented Apr 25, 2017

I love this idea!

Question re: outside support:

codecov.io is to code coverage as ??? is to benchmarking

Or does this aspect have to be handled by the package described here? The display of results over time could potentially be handled in the pkgdown site.

@jimhester (Contributor, Author) commented Apr 25, 2017

I don't know of anything like codecov.io for tracking benchmark results over time. If there were, we could use it, or maybe set up a simple service to do so.

@gaborcsardi (Contributor) commented Apr 25, 2017

> codecov.io is to code coverage as ??? is to benchmarking

https://github.com/tobami/codespeed is the only one I know of. You need to run your own service.

Julia used to run it; I don't know if they still do.

@gaborcsardi (Contributor) commented Apr 25, 2017

The Julia site used to be at http://speed.julialang.org/. It is gone.

@jsta (Contributor) commented Jun 9, 2017

It seems to me that it would be logical to integrate performance testing with testthat.
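As a hedged sketch of that integration, assuming the CSV baseline written by the earlier hypothetical helper and the same my_filter()/df_small placeholders, a testthat expectation could fail the test suite when a benchmark regresses past a threshold:

```r
library(testthat)

test_that("filter benchmark has not regressed", {
  # Baseline CSV as written by the hypothetical benchmark_and_record().
  baseline <- read.csv("docs/benchmarks/results.csv")
  baseline_ms <- median(baseline$median_ms[baseline$name == "filter_small"])

  timing <- microbenchmark::microbenchmark(my_filter(df_small), times = 25L)
  current_ms <- median(timing$time) / 1e6

  # Fail if more than 20% slower than the recorded baseline; the threshold
  # is an assumption and would need tuning for noisy CI machines.
  expect_lt(current_ms, baseline_ms * 1.2)
})
```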
