
MAINT: Benchmarks for optimize.leastsq #4906

Merged 2 commits into scipy:master on May 30, 2015
Conversation

nmayorov (Contributor)

The general idea is to add benchmarks for the current and future nonlinear least squares optimization solvers. But for now we only have optimize.leastsq, and the shape of the future methods is not known yet.

I kept the benchmarking class very simple, without strong assumptions about the future. Nevertheless, the added problems will suit the future methods well too, and the benchmarking class is easy to modify (and it's not public). So I decided to open this PR early to get feedback.

I added almost all LSQ problems from the MINPACK-2 problem set. They are rather small unconstrained problems, but perfectly fine for basic benchmarking. I'm planning to expand the problem set as I progress during GSoC.

Here is what the report looks like:

[  0.00%] ·· Building for /Users/nmayorov/anaconda/bin/python
[  0.00%] ·· Benchmarking /Users/nmayorov/anaconda/bin/python
[100.00%] ··· Running optimize.BenchLeastSquares.track_all                                                                            ok
[100.00%] ···· 
               ====================== ======================= ====== =========
               --                                   result type               
               ---------------------- ----------------------------------------
                      problem               average time       nfev   success 
               ====================== ======================= ====== =========
                 AlphaPineneDirect       0.0959238052368164     22       1    
                ChebyshevQuadrature     0.08988590240478515     41       1    
                  CoatingThickness      0.020872282981872558    6        1    
                   EnzymeReaction      0.0005537033081054688    11       1    
                 ExponentialFitting    0.0006913900375366211    18       1    
                  GaussianFitting      0.0015572071075439452    16       1    
                ThermistorResistance    0.005420684814453125   216       1    
               ====================== ======================= ====== =========

@rgommers (Member)

This looks promising. Just a note about gh-4191, which contains lots of global optimization benchmarks. Doesn't overlap with this, just wanted to point out related work in progress.

@rgommers rgommers added the enhancement A new feature or improvement label May 25, 2015
t0 = time.time()
for _ in range(n_runs):
    leastsq(problem.fun, problem.x0, Dfun=problem.jac, ftol=ftol,
            full_output=True)
return (time.time() - t0) / n_runs
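The excerpt above averages wall-clock time over `n_runs` calls to `leastsq`. A self-contained sketch of the same timing pattern is shown below; the residual function here (Rosenbrock residuals) and the helper name `time_leastsq` are illustrative, not the PR's actual benchmark code:

```python
import time

import numpy as np
from scipy.optimize import leastsq


def fun(x):
    # Rosenbrock-style residuals: minimum at x = (1, 1).
    return np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])


def jac(x):
    # Analytic Jacobian of the residuals, passed via Dfun.
    return np.array([[-20.0 * x[0], 10.0],
                     [-1.0, 0.0]])


def time_leastsq(n_runs=10, ftol=1e-5):
    """Return the average wall-clock time per solve and the last solution."""
    x0 = np.array([-1.2, 1.0])
    t0 = time.time()
    for _ in range(n_runs):
        x, cov_x, infodict, mesg, ier = leastsq(
            fun, x0, Dfun=jac, ftol=ftol, full_output=True)
    return (time.time() - t0) / n_runs, x


avg_time, x = time_leastsq()
```

Note that `time.time()` around a Python loop is a coarse measurement; as discussed below, ASV's own repeat-selection logic would be preferable once it supports this use case.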
ev-br (Member)

This is probably OK, but I cannot help but notice that ASV selects the number of runs in a more sophisticated way:
https://github.com/spacetelescope/asv/blob/master/asv/benchmark.py#L451

@pv what is the recommended way of making multiple measurements on a benchmark?

pv (Member)

@ev-br: it's not possible to do that with asv currently, so it's hard to do better than the above. This is on the asv todo list, however. The old optimizer benchmarks have this problem too, see above in the code, so we'll refactor once it becomes possible.

@ev-br (Member)

ev-br commented May 30, 2015

Thanks Nikolay, Pauli.

ev-br added a commit that referenced this pull request May 30, 2015
MAINT: Benchmarks for optimize.leastsq
@ev-br ev-br merged commit adfb3ae into scipy:master May 30, 2015
@ev-br ev-br added this to the 0.17.0 milestone May 30, 2015
@nmayorov nmayorov deleted the lsq_benchmarks branch June 2, 2015 07:56
Labels: enhancement (A new feature or improvement), scipy.optimize

5 participants