ENH: show_minimize_options → show_options

show_options is not specific to `minimize`: it also concerns options for
`root` and may be used by any future wrapper.
commit 8c37910f8d85128a4c31bcca598d6a1ad637d683 (parent c7d0410)
Authored by @dlax
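
For context, a minimal usage sketch of the renamed helper after this commit
(the signature follows the diff below; `show_options` is re-exported through
`from optimize import *` in scipy/optimize/__init__.py):

    from scipy.optimize import show_options

    # previously: show_minimize_options('BFGS')
    show_options('minimize', 'BFGS')   # options for one minimize method
    show_options('minimize')           # all documented minimize methods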
3  scipy/optimize/__init__.py
@@ -15,7 +15,6 @@
:toctree: generated/
minimize - Unified interface for minimizers of multivariate functions
- show_minimize_options - Show method-specific options for `minimize`
fmin - Nelder-Mead Simplex algorithm
fmin_powell - Powell's (modified) level set method
fmin_cg - Non-linear (Polak-Ribiere) conjugate gradient algorithm
@@ -138,6 +137,8 @@
line_search - Return a step that satisfies the strong Wolfe conditions
check_grad - Check the supplied derivative using finite differences
+ show_options - Show specific options for optimization solvers
+
"""
from optimize import *
202 scipy/optimize/_minimize.py
@@ -8,7 +8,7 @@
"""
-__all__ = ['minimize', 'minimize_scalar', 'show_minimize_options']
+__all__ = ['minimize', 'minimize_scalar']
from warnings import warn
@@ -99,7 +99,7 @@ def minimize(fun, x0, args=(), method='BFGS', jac=None, hess=None,
Maximum number of iterations to perform.
disp : bool
Set to True to print convergence messages.
- For method-specific options, see `show_minimize_options`.
+ For method-specific options, see `show_options('minimize', method)`.
callback : callable, optional
Called after each iteration, as ``callback(xk)``, where ``xk`` is the
current parameter vector.
@@ -455,201 +455,3 @@ def minimize_scalar(fun, bracket=None, bounds=None, args=(),
else:
raise ValueError('Unknown solver %s' % method)
-
-def show_minimize_options(method=None):
- """Show documentation for additional options of minimize's methods.
-
- These are method-specific options that can be supplied to `minimize` in the
- ``options`` dict.
-
- Parameters
- ----------
- method : str, optional
- If not given, shows all methods. Otherwise, show only the options for
- the specified method. Valid values are: 'BFGS', 'Newton-CG',
- 'Nelder-Mead', 'Powell', 'CG', 'Anneal', 'L-BFGS-B', 'TNC',
- 'COBYLA', 'SLSQP'.
-
- Notes
- -----
- * BFGS options:
- gtol : float
- Gradient norm must be less than `gtol` before successful
- termination.
- norm : float
- Order of norm (Inf is max, -Inf is min).
- eps : float or ndarray
- If `jac` is approximated, use this value for the step size.
- return_all : bool
- If True, return a list of the solution at each iteration. This is only
- done if `full_output` is True.
-
- * Nelder-Mead options:
- xtol : float
- Relative error in solution `xopt` acceptable for convergence.
- ftol : float
- Relative error in ``fun(xopt)`` acceptable for convergence.
- maxfev : int
- Maximum number of function evaluations to make.
- return_all : bool
- If True, return a list of the solution at each iteration. This is only
- done if `full_output` is True.
-
- * Newton-CG options:
- xtol : float
- Average relative error in solution `xopt` acceptable for
- convergence.
- eps : float or ndarray
- If `jac` is approximated, use this value for the step size.
- return_all : bool
- If True, return a list of the solution at each iteration. This is only
- done if `full_output` is True.
-
- * CG options:
- gtol : float
- Gradient norm must be less than `gtol` before successful
- termination.
- norm : float
- Order of norm (Inf is max, -Inf is min).
- eps : float or ndarray
- If `jac` is approximated, use this value for the step size.
- return_all : bool
- If True, return a list of the solution at each iteration. This is only
- done if `full_output` is True.
-
- * Powell options:
- xtol : float
- Relative error in solution `xopt` acceptable for convergence.
- ftol : float
- Relative error in ``fun(xopt)`` acceptable for convergence.
- maxfev : int
- Maximum number of function evaluations to make.
- direc : ndarray
- Initial set of direction vectors for the Powell method.
- return_all : bool
- If True, return a list of the solution at each iteration. This is only
- done if `full_output` is True.
-
- * Anneal options:
- schedule : str
- Annealing schedule to use. One of: 'fast', 'cauchy' or
- 'boltzmann'.
- T0 : float
- Initial Temperature (estimated as 1.2 times the largest
- cost-function deviation over random points in the range).
- Tf : float
- Final goal temperature.
- maxfev : int
- Maximum number of function evaluations to make.
- maxaccept : int
- Maximum changes to accept.
- boltzmann : float
- Boltzmann constant in acceptance test (increase for less
- stringent test at each temperature).
- learn_rate : float
- Scale constant for adjusting guesses.
- ftol : float
- Relative error in ``fun(x)`` acceptable for convergence.
- quench, m, n : float
- Parameters to alter fast_sa schedule.
- lower, upper : float or ndarray
- Lower and upper bounds on `x`.
- dwell : int
- The number of times to search the space at each temperature.
-
- * L-BFGS-B options:
- maxcor : int
- The maximum number of variable metric corrections used to
- define the limited memory matrix. (The limited memory BFGS
- method does not store the full hessian but uses this many terms
- in an approximation to it.)
- factr : float
- The iteration stops when ``(f^k -
- f^{k+1})/max{|f^k|,|f^{k+1}|,1} <= factr * eps``, where ``eps``
- is the machine precision, which is automatically generated by
- the code. Typical values for `factr` are: 1e12 for low
- accuracy; 1e7 for moderate accuracy; 10.0 for extremely high
- accuracy.
- pgtol : float
- The iteration will stop when ``max{|proj g_i | i = 1, ..., n}
- <= pgtol`` where ``pg_i`` is the i-th component of the
- projected gradient.
- maxfev : int
- Maximum number of function evaluations.
-
- * TNC options:
- scale : list of floats
- Scaling factors to apply to each variable. If None, the
- factors are up-low for interval bounded variables and
- 1+|x| for the others. Defaults to None.
- offset : float
- Value to subtract from each variable. If None, the
- offsets are (up+low)/2 for interval bounded variables
- and x for the others.
- maxCGit : int
- Maximum number of hessian*vector evaluations per main
- iteration. If maxCGit == 0, the direction chosen is
- -gradient. If maxCGit < 0, maxCGit is set to
- max(1,min(50,n/2)). Defaults to -1.
- maxfev : int
- Maximum number of function evaluations. If None, `maxfev` is
- set to max(100, 10*len(x0)). Defaults to None.
- eta : float
- Severity of the line search. If < 0 or > 1, set to 0.25.
- Defaults to -1.
- stepmx : float
- Maximum step for the line search. May be increased during
- call. If too small, it will be set to 10.0. Defaults to 0.
- accuracy : float
- Relative precision for finite difference calculations. If
- <= machine_precision, set to sqrt(machine_precision).
- Defaults to 0.
- minfev : float
- Minimum function value estimate. Defaults to 0.
- ftol : float
- Precision goal for the value of f in the stopping criterion.
- If ftol < 0.0, ftol is set to 0.0. Defaults to -1.
- xtol : float
- Precision goal for the value of x in the stopping
- criterion (after applying x scaling factors). If xtol <
- 0.0, xtol is set to sqrt(machine_precision). Defaults to
- -1.
- pgtol : float
- Precision goal for the value of the projected gradient in
- the stopping criterion (after applying x scaling factors).
- If pgtol < 0.0, pgtol is set to 1e-2 * sqrt(accuracy).
- Setting it to 0.0 is not recommended. Defaults to -1.
- rescale : float
- Scaling factor (in log10) used to trigger f value
- rescaling. If 0, rescale at each iteration. If a large
- value, never rescale. If < 0, rescale is set to 1.3.
-
- * COBYLA options:
- rhobeg : float
- Reasonable initial changes to the variables.
- rhoend : float
- Final accuracy in the optimization (not precisely guaranteed).
- This is a lower bound on the size of the trust region.
- maxfev : int
- Maximum number of function evaluations.
-
- * SLSQP options:
- eps : float
- Step size used for numerical approximation of the jacobian.
- maxiter : int
- Maximum number of iterations.
- """
- if method is None:
- notes_header = "Notes\n -----"
- sections = show_minimize_options.__doc__.split(notes_header)[1:]
- else:
- sections = show_minimize_options.__doc__.split('*')[1:]
- sections = [s.strip() for s in sections]
- sections = [s for s in sections if s.lower().startswith(method.lower())]
-
- print '\n'.join(sections)
-
- return
2  scipy/optimize/_root.py
@@ -47,7 +47,7 @@ def root(fun, x0, args=(), method='hybr', jac=None, options=None,
Maximum number of iterations to perform.
disp : bool
Set to True to print convergence messages.
- For method-specific options, see `show_minimize_options`.
+ For method-specific options, see `show_options('root', method)`.
full_output : bool, optional
If True, return optional outputs. Default is False.
callback : function, optional
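
The same helper now covers `root` as well. A small sketch (note that at this
commit the `** root options` section of the docstring, added below, is still
empty, so only the section header text is available to print):

    from scipy.optimize import show_options

    show_options('root')   # no root method options are documented yet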
199 scipy/optimize/optimize.py
@@ -18,7 +18,7 @@
__all__ = ['fmin', 'fmin_powell', 'fmin_bfgs', 'fmin_ncg', 'fmin_cg',
'fminbound', 'brent', 'golden', 'bracket', 'rosen', 'rosen_der',
'rosen_hess', 'rosen_hess_prod', 'brute', 'approx_fprime',
- 'line_search', 'check_grad', 'Result']
+ 'line_search', 'check_grad', 'Result', 'show_options']
__docformat__ = "restructuredtext en"
@@ -2265,6 +2265,203 @@ def _scalarfunc(*params):
else:
return xmin
+def show_options(solver, method=None):
+ """Show documentation for additional options of optimization solvers.
+
+ These are method-specific options that can be supplied through the
+ ``options`` dict.
+
+ Parameters
+ ----------
+ solver : str
+ Type of optimization solver. One of {`minimize`, `root`}.
+ method : str, optional
+ If not given, shows all methods of the specified solver. Otherwise,
+ show only the options for the specified method. Valid values
+ correspond to the method names of the respective solver (e.g. 'BFGS'
+ for 'minimize').
+
+ Notes
+ -----
+
+ ** minimize options
+
+ * BFGS options:
+ gtol : float
+ Gradient norm must be less than `gtol` before successful
+ termination.
+ norm : float
+ Order of norm (Inf is max, -Inf is min).
+ eps : float or ndarray
+ If `jac` is approximated, use this value for the step size.
+
+ * Nelder-Mead options:
+ xtol : float
+ Relative error in solution `xopt` acceptable for convergence.
+ ftol : float
+ Relative error in ``fun(xopt)`` acceptable for convergence.
+ maxfev : int
+ Maximum number of function evaluations to make.
+
+ * Newton-CG options:
+ xtol : float
+ Average relative error in solution `xopt` acceptable for
+ convergence.
+ eps : float or ndarray
+ If `jac` is approximated, use this value for the step size.
+
+ * CG options:
+ gtol : float
+ Gradient norm must be less than `gtol` before successful
+ termination.
+ norm : float
+ Order of norm (Inf is max, -Inf is min).
+ eps : float or ndarray
+ If `jac` is approximated, use this value for the step size.
+
+ * Powell options:
+ xtol : float
+ Relative error in solution `xopt` acceptable for convergence.
+ ftol : float
+ Relative error in ``fun(xopt)`` acceptable for convergence.
+ maxfev : int
+ Maximum number of function evaluations to make.
+ direc : ndarray
+ Initial set of direction vectors for the Powell method.
+
+ * Anneal options:
+ schedule : str
+ Annealing schedule to use. One of: 'fast', 'cauchy' or
+ 'boltzmann'.
+ T0 : float
+ Initial Temperature (estimated as 1.2 times the largest
+ cost-function deviation over random points in the range).
+ Tf : float
+ Final goal temperature.
+ maxfev : int
+ Maximum number of function evaluations to make.
+ maxaccept : int
+ Maximum changes to accept.
+ boltzmann : float
+ Boltzmann constant in acceptance test (increase for less
+ stringent test at each temperature).
+ learn_rate : float
+ Scale constant for adjusting guesses.
+ ftol : float
+ Relative error in ``fun(x)`` acceptable for convergence.
+ quench, m, n : float
+ Parameters to alter fast_sa schedule.
+ lower, upper : float or ndarray
+ Lower and upper bounds on `x`.
+ dwell : int
+ The number of times to search the space at each temperature.
+
+ * L-BFGS-B options:
+ maxcor : int
+ The maximum number of variable metric corrections used to
+ define the limited memory matrix. (The limited memory BFGS
+ method does not store the full hessian but uses this many terms
+ in an approximation to it.)
+ factr : float
+ The iteration stops when ``(f^k -
+ f^{k+1})/max{|f^k|,|f^{k+1}|,1} <= factr * eps``, where ``eps``
+ is the machine precision, which is automatically generated by
+ the code. Typical values for `factr` are: 1e12 for low
+ accuracy; 1e7 for moderate accuracy; 10.0 for extremely high
+ accuracy.
+ pgtol : float
+ The iteration will stop when ``max{|proj g_i | i = 1, ..., n}
+ <= pgtol`` where ``pg_i`` is the i-th component of the
+ projected gradient.
+ maxfev : int
+ Maximum number of function evaluations.
+
+ * TNC options:
+ scale : list of floats
+ Scaling factors to apply to each variable. If None, the
+ factors are up-low for interval bounded variables and
+ 1+|x| for the others. Defaults to None.
+ offset : float
+ Value to subtract from each variable. If None, the
+ offsets are (up+low)/2 for interval bounded variables
+ and x for the others.
+ maxCGit : int
+ Maximum number of hessian*vector evaluations per main
+ iteration. If maxCGit == 0, the direction chosen is
+ -gradient. If maxCGit < 0, maxCGit is set to
+ max(1,min(50,n/2)). Defaults to -1.
+ maxfev : int
+ Maximum number of function evaluations. If None, `maxfev` is
+ set to max(100, 10*len(x0)). Defaults to None.
+ eta : float
+ Severity of the line search. If < 0 or > 1, set to 0.25.
+ Defaults to -1.
+ stepmx : float
+ Maximum step for the line search. May be increased during
+ call. If too small, it will be set to 10.0. Defaults to 0.
+ accuracy : float
+ Relative precision for finite difference calculations. If
+ <= machine_precision, set to sqrt(machine_precision).
+ Defaults to 0.
+ minfev : float
+ Minimum function value estimate. Defaults to 0.
+ ftol : float
+ Precision goal for the value of f in the stopping criterion.
+ If ftol < 0.0, ftol is set to 0.0. Defaults to -1.
+ xtol : float
+ Precision goal for the value of x in the stopping
+ criterion (after applying x scaling factors). If xtol <
+ 0.0, xtol is set to sqrt(machine_precision). Defaults to
+ -1.
+ pgtol : float
+ Precision goal for the value of the projected gradient in
+ the stopping criterion (after applying x scaling factors).
+ If pgtol < 0.0, pgtol is set to 1e-2 * sqrt(accuracy).
+ Setting it to 0.0 is not recommended. Defaults to -1.
+ rescale : float
+ Scaling factor (in log10) used to trigger f value
+ rescaling. If 0, rescale at each iteration. If a large
+ value, never rescale. If < 0, rescale is set to 1.3.
+
+ * COBYLA options:
+ rhobeg : float
+ Reasonable initial changes to the variables.
+ rhoend : float
+ Final accuracy in the optimization (not precisely guaranteed).
+ This is a lower bound on the size of the trust region.
+ maxfev : int
+ Maximum number of function evaluations.
+
+ * SLSQP options:
+ eps : float
+ Step size used for numerical approximation of the jacobian.
+ maxiter : int
+ Maximum number of iterations.
+
+ ** root options
+
+ """
+ solver = solver.lower()
+ if solver not in ('minimize', 'root'):
+ raise ValueError('Unknown solver.')
+ solver_header = (' ' * 4 + solver + "\n" + ' ' * 4 + '~' * len(solver))
+
+ notes_header = "Notes\n -----"
+ all_doc = show_options.__doc__.split(notes_header)[1:]
+ solvers_doc = [s.strip()
+ for s in show_options.__doc__.split('** ')[1:]]
+ solver_doc = [s for s in solvers_doc
+ if s.lower().startswith(solver)]
+ if method is None:
+ doc = solver_doc
+ else:
+ doc = solver_doc[0].split('* ')[1:]
+ doc = [s.strip() for s in doc]
+ doc = [s for s in doc if s.lower().startswith(method.lower())]
+
+ print '\n'.join(doc)
+
+ return
def main():
import time
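
To illustrate the docstring-driven lookup implemented above, here is a
standalone sketch of the same split-and-filter pattern (`_pick_section` is a
hypothetical name used for illustration only, not part of the commit):

    # Sketch of the section lookup used by show_options above:
    # '** ' delimits solver sections, '* ' delimits method sections
    # inside the Notes block of the docstring.
    def _pick_section(doc, solver, method=None):
        solvers = [s.strip() for s in doc.split('** ')[1:]]
        chosen = [s for s in solvers if s.lower().startswith(solver.lower())]
        if method is not None:
            methods = [m.strip() for m in chosen[0].split('* ')[1:]]
            chosen = [m for m in methods
                      if m.lower().startswith(method.lower())]
        return '\n'.join(chosen)

    # e.g. _pick_section(show_options.__doc__, 'minimize', 'SLSQP')
    # returns the "SLSQP options:" block from the Notes section above.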