Usability of optimize.shgo
#13813
Comments
> The points about …
I definitely agree. I am working on the documentation before pulling the code into scipy. Currently, what is eating most of my time is that I wrote a new caching for the sampling step which fixed all of the bugs mentioned here, and I'm trying to clean and document the code before opening a PR to scipy. In addition, I need to update the parallelisation to be consistent with the other routines in scipy and avoid any extra dependency, and finally the issues mentioned in #13469.
This is a bug that was fixed in the upstream repository.
> There is a known bug where …
Perhaps a small tutorial problem would be worth writing to demonstrate the difference.
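A tutorial comparison could be as small as the following sketch. The test function (Styblinski–Tang) and the parameter values are my own illustrative choices, not anything prescribed by scipy; the sketch just runs `shgo` once with each `sampling_method` and prints the reported minimum and `nfev` so the difference between the two methods is visible.

```python
# Illustrative sketch: compare shgo's two sampling methods on the
# 2-D Styblinski-Tang function (global minimum approximately -78.33
# at x = (-2.9035, -2.9035)).  Parameter values are arbitrary choices.
from scipy.optimize import shgo

def styblinski_tang(x):
    return 0.5 * sum(xi**4 - 16 * xi**2 + 5 * xi for xi in x)

bounds = [(-5.0, 5.0)] * 2

results = {}
for method in ('simplicial', 'sobol'):
    res = shgo(styblinski_tang, bounds, n=64, iters=1,
               sampling_method=method)
    results[method] = res
    print(method, res.fun, res.nfev)
```

Printing `res.nfev` side by side for the two methods would make the scaling differences discussed above concrete for readers of the docs.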
It is called in the local minimization routine. The current callback is used only in the …
I see, thank you, this is a bug that still needs to be fixed.
It is a relative tolerance; I will update this. I don't know about having both. I would assume that only a relative tolerance is needed, unless there are applications where an absolute tolerance is required (currently, if the solution vector is at 0.0 then an absolute tolerance is used; everywhere else a relative tolerance is used).
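The mixed behaviour described here (absolute near 0.0, relative elsewhere) is commonly expressed as a single combined criterion in the style of `numpy.isclose`. The function and parameter names below are illustrative, not shgo's actual internals:

```python
# Sketch of a combined tolerance test: convergence is declared when the
# change in f is small relative to |f|, with an absolute floor (f_atol)
# that takes over as f approaches zero.  Names are hypothetical.
def f_converged(f_new, f_old, f_tol=1e-8, f_atol=1e-12):
    return abs(f_new - f_old) <= f_atol + f_tol * abs(f_new)

print(f_converged(1.00000001, 1.0))   # relative criterion dominates
print(f_converged(0.0, 1e-13))        # absolute floor handles f near 0
```

With this form only two documented knobs are needed, and the special-casing at 0.0 falls out of the formula rather than being a separate code path.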
This depends on how many proven basins of attraction were found in a given sampling step, or over several iterations without minimization.
For each basin of attraction a local minimization should only be done once. There is a bug in the current scipy version where the same basin of attraction could be run again in the following iterations. This bug has been fixed with the new caching (this was the main reason for writing the new file, which is unfortunately very large right now). In general, there are a large number of options and the docstring is already quite large. But a lot of the confusion pointed out here is due to bugs. I think mainly the usability of …
Often people like to know how far through the minimisation procedure they are, perhaps using a progress bar like tqdm. If you know the total length of the computation (e.g. with …) … If you don't know the total length of the computation, you can e.g. update a progress bar with the minimum cost function value found so far. If you get the best `x` so far in the callback this is relatively easy. If not, then you have to cache the best cost function value (a few more lines of code) for comparison. Either way, you need to know what …
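The "cache the best cost function" idea mentioned above can be done without any change to shgo at all, by wrapping the objective before passing it in. The helper name and toy objective below are illustrative:

```python
# Sketch of caching the best value seen so far by wrapping the objective.
# The "best" dict could drive a tqdm progress bar from user code; here we
# just record it.  make_progress_tracker is a hypothetical helper name.
import math

def make_progress_tracker(func):
    best = {"f": math.inf, "x": None, "nfev": 0}
    def wrapped(x):
        best["nfev"] += 1
        f = func(x)
        if f < best["f"]:
            best["f"], best["x"] = f, list(x)   # cache best point so far
        return f
    return wrapped, best

f, best = make_progress_tracker(lambda x: (x[0] - 3.0) ** 2)
for trial in ([0.0], [5.0], [2.5], [3.1]):
    f(trial)
print(best)   # best["x"] is [3.1], the trial closest to the minimum at 3
```

The wrapped function would be passed to `shgo` in place of the original objective; the downside is that it counts every evaluation (including local-minimizer ones), which is exactly why knowing when the real callback fires matters.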
This is actually interesting and could be shown in the docs. We could even think about having an optional dependency on …
I am currently attempting to solve an optimization problem using …
I've been trying to familiarise myself with the operation of `optimize.shgo`, and have run into several usability issues which limit the ability of a general user to apply it to their problems. A lot of these could be improved by a documentation route.

- It's not clear how `n` and `iters` need to be adjusted to solve a particular problem, or how they should be varied if the dimensionality changes.
- It's not clear how many function evaluations, `nfev`, the user will be exposed to when they change the different solver options. How does `nfev` scale with `iters`, `n` and `sampling_method`? By experimenting, it looks like there are a total of `n * iters` function evaluations (plus any local minimizer evaluations) if the sampling method is `sobol` (this isn't documented), i.e. `nfev - nlfev = n * iters`.
- When the sampling method is changed to `simplicial`, the total number of function evaluations appears to be much larger, with no clear dependence on `n` or `iters`.
- When `sampling_method = 'sobol'` and `iters = 1`, it still looks like two iterations are done. This might be a bug.
- `maxev` does not have any bearing on the total number of function evaluations actually used. For one example I tried setting `options={'maxfev': 100, 'maxev': 100}`, but 750 function evaluations were used overall. How does one limit `nfev`?
- When is `callback` actually called? I counted the number of times the callback function was called, and it's not equal to `iters`. My expectation is that it would be called `iters` times.
- The callback is not given the best `x` found so far; the `x` it receives varies quite a bit. From my point of view this renders the callback of little use; is there a good reason for this?
- `Callback for minimizer starting at *x*` is printed, and there's no way of removing that message; I tried setting `options['disp'] = False`, but to little effect.
- The `OptimizeResult` doesn't indicate why/how the sampling terminates. There are `minhgrd` and `f_tol`, but the reason for termination is not generally given.
- Is `f_tol` an absolute tolerance or a relative tolerance? Should the minimiser have both? The meaning of `f_tol` is not explained.
- `minimize_every_iter` defaults to `False`, and the default value of `local_iter` is not mentioned in the docstring. I would naively expect a total of 1 local minimization if `iters=5` and the default options apply (`minimize_every_iter` being `False`).