
[REVIEW] Justify (or tone-down) performance claims in paper and docs #34

Closed
ashiklom opened this issue Apr 17, 2023 · 3 comments

@ashiklom

The paper and documentation make multiple claims about performance without benchmarks or justification. I am not convinced that simple models --- like the examples here --- are that much faster in their Julia implementations than good implementations in Python or R (especially because, in practice, many of the performance-critical steps in R/Python delegate to highly optimized Fortran/C routines). Some suggestions (you don't have to do all of these; they are just ideas):

(1) Cite existing Julia performance benchmarks to highlight why some kinds of operations in Julia are as fast or faster than interpreted (and sometimes, compiled!) languages. Besides the relevant engineering/CS literature on Julia, existing scientific programming efforts in Julia --- like the CliMA group --- may have good numbers for this.

(2) Re-frame some sections along the lines of, "The modularity and flexibility of Julia's design allows users to take advantage of performance-enhancing libraries like FLoops.jl, DifferentialEquations.jl", etc.

(3) Include some performance statistics for simulations. These don't necessarily have to be comparisons --- most people don't care about something being the fastest, just that it's fast enough (especially when you add all the other great functionality that this package provides).

openjournals/joss-reviews#5371

@VEZY
Member

VEZY commented Apr 27, 2023

@ashiklom, can you point me to any publication demonstrating performance from the CliMA group? I found this page, but there are so many publications there...

We show a comparison with interpreted languages in the paper we are preparing for PlantBiophysics, but it does not represent the entire field of plant modelling, so it's not entirely relevant here. So I'm preparing a little benchmark with the new toy models I added in the examples folder.

@ashiklom
Author

I think just the simple comparison from the PlantBiophysics paper that you describe would be sufficient. Even better than anything from CliMA, I think, because you are showing a direct comparison for exactly the computations that this package performs.

If you will not have those comparisons for some time, it's also fine to just tweak the language to say something like, "Julia can achieve significantly better performance than typical interpreted languages" and cite any studies or websites that have shown this. The performance is not the main selling point of this package, so this doesn't have to be perfect.

@VEZY VEZY closed this as completed in acdfc72 Apr 27, 2023
@VEZY
Member

VEZY commented Apr 27, 2023

Thank you for the quick answer! I propose the changes in acdfc72.
What I did:

  • added a benchmark script so people can quickly verify for themselves, and put the results in the paper
  • added a link to the benchmark of PlantBiophysics vs. plantecophys (R implementation)
  • toned down some words that you identified: hassle-free, effortlessly...
  • also, not related to this issue, but I added a citation to the paper by Roesch et al. (2023), which nicely outlines the issues that we have in biology and how Julia can be a great tool for tackling them
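For context, a benchmark script of this kind can be quite short. Here is a minimal sketch using BenchmarkTools.jl; `toy_model` is a hypothetical stand-in (not the actual models in the examples folder), used only to illustrate the timing pattern:

```julia
# Minimal benchmark sketch with BenchmarkTools.jl.
# `toy_model` is a hypothetical stand-in for one of the toy models.
using BenchmarkTools
using Statistics

# Hypothetical toy model: a simple element-wise computation over driver data.
toy_model(temps) = @. 0.5 * exp(-0.1 * (temps - 25.0)^2)

temps = collect(range(0.0, 40.0; length=10_000))

# Interpolating `temps` with `$` excludes global-variable access
# from the measurement, so only the model call itself is timed.
b = @benchmark toy_model($temps)
println("median time: ", median(b.times) / 1e6, " ms")
```

The `$` interpolation matters: without it, BenchmarkTools times access to a non-constant global, which inflates the numbers and would make any cross-language comparison unfair to Julia.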
