
Add benchmarks #12

Open
MSeeker1340 opened this issue Oct 21, 2018 · 2 comments
@MSeeker1340
Contributor

From slack:

we should post the benchmarks, but essentially if you want to compute the matrix exponential exp(t*A) for a large t, timestepping it can help improve the accuracy and calculation speed.

IIRC, using timestepping is almost always the right thing to do. @MSeeker1340 we should take the benchmarking code that was in the PRs and formalize those benchmarks so that we can easily point to the numerical accuracy and timings here.

The relevant PR is here: SciML/OrdinaryDiffEq.jl#372.
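The timestepping idea above can be sketched in a few lines. This is a hedged example, assuming the `expv` and `expv_timestep` entry points of ExponentialUtilities.jl; check the package docs for the exact signatures and keyword options:

```julia
using LinearAlgebra
using ExponentialUtilities

# Small symmetric example. For large t * opnorm(A), splitting the action
# exp(t*A)*v into substeps can improve both accuracy and speed compared to
# a single Krylov evaluation.
A = Matrix(Symmetric(randn(64, 64)))
v = normalize(rand(64))
t = 1.0

# Single-shot Krylov evaluation of exp(t*A)*v
w1 = expv(t, A, v)

# Internal time-stepping variant (adaptive substeps)
w2 = expv_timestep(t, A, v)

# Dense reference for comparison
w_ref = exp(t * A) * v
```

Both `w1` and `w2` approximate `w_ref`; the timestepped version is the one expected to stay accurate as `t` grows.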

@ChrisRackauckas what's the standard practice for benchmarking a Julia repo? I'm thinking of adding a benchmark script (notebook?) that is excluded from the regular unit tests, with the results recorded in a separate markdown file.

MSeeker1340 self-assigned this Oct 21, 2018
@ChrisRackauckas
Member

I think a benchmarks folder is fine. There's not really a standard practice, except for the few packages that can use PkgBenchmark.jl. It might be easiest to just add some notebooks to DiffEqBenchmarks.jl.

(Don't put notebooks in this repo, though: git history stores every modification, and notebook diffs tend to be the whole file, so that grows repos really fast.)
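For the PkgBenchmark.jl route mentioned above, the convention is a `benchmarks/benchmarks.jl` file that defines a `SUITE`. A minimal sketch, assuming this package's `expv` API; the sizes and tuning are illustrative, not an official setup:

```julia
# benchmarks/benchmarks.jl -- minimal PkgBenchmark.jl-style suite (sketch)
using BenchmarkTools
using ExponentialUtilities

const SUITE = BenchmarkGroup()
SUITE["expv"] = BenchmarkGroup()

# Benchmark expv across a few problem sizes
for n in (64, 256, 1024)
    A = rand(n, n)
    v = rand(n)
    SUITE["expv"]["n=$n"] = @benchmarkable expv(1.0, $A, $v)
end
```

`PkgBenchmark.benchmarkpkg("ExponentialUtilities")` would then run the suite, and its results could be exported to the markdown file discussed earlier in the thread.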

@Roger-luo
Contributor

Roger-luo commented Feb 28, 2019

This package might be the fastest expmv in the Julia world by now (possibly the fastest among all implementations as well). I also tested Expokit against NumPy, and Expokit is already faster than SciPy. The following is just a casual benchmark I ran: about 4x faster than Expokit. I'm looking forward to your official benchmarks.

The Arnoldi process with an optional Lanczos variant is also great; I was going to implement that for Expokit yesterday, but then @ChrisRackauckas told me about this package!

PS: you guys should write an ANN post on Discourse (if there isn't one already) to let more people know about this awesome package!

julia> using BenchmarkTools, Expokit, ExponentialUtilities

julia> A = rand(1024, 1024); v = rand(1024);

julia> @benchmark Expokit.expmv(1.0, $A, $v)
BenchmarkTools.Trial:
  memory estimate:  1.71 MiB
  allocs estimate:  261
  --------------
  minimum time:     39.055 ms (0.00% GC)
  median time:      45.978 ms (0.00% GC)
  mean time:        46.702 ms (1.29% GC)
  maximum time:     87.204 ms (49.07% GC)
  --------------
  samples:          108
  evals/sample:     1

julia> @benchmark ExponentialUtilities.expv(1.0, $A, $v)
BenchmarkTools.Trial:
  memory estimate:  469.39 KiB
  allocs estimate:  1027
  --------------
  minimum time:     9.401 ms (0.00% GC)
  median time:      11.130 ms (0.00% GC)
  mean time:        11.297 ms (1.29% GC)
  maximum time:     51.464 ms (80.93% GC)
  --------------
  samples:          443
  evals/sample:     1
