
Commit

Merge pull request #383 from reneeotten/brute_force_method
[WIP] Add the brute force method.
newville committed Feb 8, 2017
2 parents efd368a + a3be333 commit 22b40fd
Showing 17 changed files with 1,288 additions and 585 deletions.
4 changes: 2 additions & 2 deletions doc/bounds.rst
@@ -23,14 +23,14 @@ for `MINUIT`_. This is implemented following (and borrowing heavily from)
the `leastsqbound`_ from J. J. Helmus. Parameter values are mapped from
internally used, freely variable values :math:`P_{\rm internal}` to bounded
parameters :math:`P_{\rm bounded}`. When both ``min`` and ``max`` bounds
are specified, the mapping is:

.. math::
:nowrap:
\begin{eqnarray*}
P_{\rm internal} &=& \arcsin\big(\frac{2 (P_{\rm bounded} - {\rm min})}{({\rm max} - {\rm min})} - 1\big) \\
P_{\rm bounded} &=& {\rm min} + \big(\sin(P_{\rm internal}) + 1\big) \frac{({\rm max} - {\rm min})}{2}
\end{eqnarray*}
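
As a sanity check, the two mappings above are exact inverses of each other on the open interval, and the internal value is free to roam while the bounded value stays inside ``[min, max]``. A small standalone sketch (plain Python, not lmfit's internal implementation):

```python
import math

def to_internal(p_bounded, pmin, pmax):
    """Map a bounded value into the unconstrained internal space."""
    return math.asin(2.0 * (p_bounded - pmin) / (pmax - pmin) - 1.0)

def to_bounded(p_internal, pmin, pmax):
    """Map an internal value back into [pmin, pmax]."""
    return pmin + (math.sin(p_internal) + 1.0) * (pmax - pmin) / 2.0

# round trip: any in-bounds value survives the transform pair
val = 0.3
assert abs(to_bounded(to_internal(val, 0.0, 1.0), 0.0, 1.0) - val) < 1e-12
# any internal value, however large, lands inside the bounds
assert 0.0 <= to_bounded(123.456, 0.0, 1.0) <= 1.0
```

This is why the optimizer can work on :math:`P_{\rm internal}` without ever seeing the bounds.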
With only an upper limit ``max`` supplied, but ``min`` left unbounded, the
322 changes: 161 additions & 161 deletions doc/builtin_models.rst

Large diffs are not rendered by default.

26 changes: 13 additions & 13 deletions doc/confidence.rst
@@ -63,17 +63,17 @@ starting point::
>>> result = mini.minimize()
>>> print(lmfit.fit_report(result.params))
    [[Variables]]
        a: 0.09943895 +/- 0.000193 (0.19%) (init= 0.1)
        b: 1.98476945 +/- 0.012226 (0.62%) (init= 1)
    [[Correlations]] (unreported correlations are < 0.100)
        C(a, b) = 0.601

Now it is just a simple function call to calculate the confidence
intervals::

>>> ci = lmfit.conf_interval(mini, result)
>>> lmfit.printfuncs.report_ci(ci)
          99.70%   95.00%   67.40%    0.00%   67.40%   95.00%   99.70%
a 0.09886 0.09905 0.09925 0.09944 0.09963 0.09982 0.10003
b 1.94751 1.96049 1.97274 1.97741 1.99680 2.00905 2.02203

@@ -103,16 +103,16 @@ uncertainties and correlations
which will report::

[[Variables]]
        a1: 2.98622120 +/- 0.148671 (4.98%) (init= 2.986237)
        a2: -4.33526327 +/- 0.115275 (2.66%) (init=-4.335256)
        t1: 1.30994233 +/- 0.131211 (10.02%) (init= 1.309932)
        t2: 11.8240350 +/- 0.463164 (3.92%) (init= 11.82408)
[[Correlations]] (unreported correlations are < 0.500)
        C(a2, t2) = 0.987
        C(a2, t1) = -0.925
        C(t1, t2) = -0.881
        C(a1, t1) = -0.599
          95.00%   68.00%    0.00%   68.00%   95.00%
a1 2.71850 2.84525 2.98622 3.14874 3.34076
a2 -4.63180 -4.46663 -4.33526 -4.22883 -4.14178
t2 10.82699 11.33865 11.82404 12.28195 12.71094
32 changes: 16 additions & 16 deletions doc/constraints.rst
@@ -7,27 +7,27 @@ Using Mathematical Constraints
.. _asteval: http://newville.github.io/asteval/

Being able to fix variables to a constant value or place upper and lower
bounds on their values can greatly simplify modeling real data. These
capabilities are key to lmfit's Parameters. In addition, it is sometimes
highly desirable to place mathematical constraints on parameter values.
For example, one might want to require that two Gaussian peaks have the
same width, or have amplitudes that are constrained to add to some value.
Of course, one could rewrite the objective or model function to place such
requirements, but this is somewhat error prone, and limits the flexibility
so that exploring constraints becomes laborious.

To simplify the setting of constraints, Parameters can be assigned a
mathematical expression of other Parameters, builtin constants, and builtin
mathematical functions that will be used to determine its value. The
expressions used for constraints are evaluated using the `asteval`_ module,
which uses Python syntax, and evaluates the constraint expressions in a safe
and isolated namespace.

With this approach to mathematical constraints, one does not have to write a
separate model function for two Gaussians where the two ``sigma`` values are
forced to be equal, or where amplitudes are related. Instead, one can write a
more general two-Gaussian model (perhaps using :class:`GaussianModel`) and
impose such constraints on the Parameters for a particular fit.
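
The idea can be illustrated with a toy two-Gaussian model in which the second width is tied to the first by a constraint expression. Everything below is a sketch: the plain dict, the parameter names, and the restricted ``eval`` (a crude stand-in for the isolated `asteval`_ interpreter) are illustrative assumptions, not lmfit's machinery:

```python
import math

def gaussian(x, amp, cen, sigma):
    """A single Gaussian peak."""
    return amp * math.exp(-(x - cen) ** 2 / (2 * sigma ** 2))

# directly varied parameter values, plus one defined by an expression
params = {'amp1': 5.0, 'cen1': 2.0, 'sigma1': 0.7,
          'amp2': 3.0, 'cen2': 4.0}
constraints = {'sigma2': 'sigma1'}   # force equal widths

# evaluate constraint expressions in a restricted namespace that exposes
# the other parameters and the math functions, but no builtins
safe_ns = dict(params, **vars(math))
for name, expr in constraints.items():
    params[name] = eval(expr, {'__builtins__': {}}, safe_ns)

def model(x, p):
    return (gaussian(x, p['amp1'], p['cen1'], p['sigma1']) +
            gaussian(x, p['amp2'], p['cen2'], p['sigma2']))

assert params['sigma2'] == params['sigma1']
```

Changing the expression (say, to ``'2*sigma1'``) changes the relationship without touching the model function, which is the point of the mechanism.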


Overview
2 changes: 1 addition & 1 deletion doc/faq.rst
@@ -59,7 +59,7 @@ different arrays. As a bonus, the two lines share the 'offset' parameter::
model1 = params['offset'] + x * params['slope1']
model2 = params['offset'] + x * params['slope2']

resid1 = dat1 - model1
resid2 = dat2 - model2
return numpy.concatenate((resid1, resid2))
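
A self-contained, runnable version of this pattern (a plain dict stands in for a ``Parameters`` object, and the data here are synthetic):

```python
import numpy

def residual(params, x, dat1, dat2):
    """Concatenated residuals for two lines sharing one 'offset' parameter."""
    model1 = params['offset'] + x * params['slope1']
    model2 = params['offset'] + x * params['slope2']
    resid1 = dat1 - model1
    resid2 = dat2 - model2
    return numpy.concatenate((resid1, resid2))

x = numpy.linspace(0, 1, 5)
dat1 = 1.0 + 2.0 * x          # offset 1, slope 2
dat2 = 1.0 + 3.0 * x          # same offset, slope 3
r = residual({'offset': 1.0, 'slope1': 2.0, 'slope2': 3.0}, x, dat1, dat2)
assert r.shape == (10,) and numpy.allclose(r, 0.0)
```

Because both residual arrays enter one concatenated return value, the shared ``offset`` is constrained by both datasets simultaneously.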

155 changes: 80 additions & 75 deletions doc/fitting.rst
@@ -32,18 +32,18 @@ Writing a Fitting Function
An important component of a fit is writing a function to be minimized --
the *objective function*. Since this function will be called by other
routines, there are fairly stringent requirements for its call signature
and return value. In principle, your function can be any Python callable,
but it must look like this:

.. function:: func(params, *args, **kws):

    Calculate objective residual to be minimized from parameters.

    :param params: Parameters.
    :type params: :class:`Parameters`
    :param args: Positional arguments. Must match ``args`` argument to :func:`minimize`.
    :param kws: Keyword arguments. Must match ``kws`` argument to :func:`minimize`.
    :return: Residual array (generally data-model) to be minimized in the least-squares sense.
    :rtype: numpy array. The length of this array cannot change between calls.


@@ -62,30 +62,30 @@ method, effectively doing a least-squares optimization of the return values.

Since the function will be passed in a dictionary of :class:`Parameters`, it is advisable
to unpack these to get numerical values at the top of the function. A
simple way to do this is with :meth:`Parameters.valuesdict`, as shown below::


    def residual(pars, x, data=None, eps=None):
        # unpack parameters: extract .value attribute for each parameter
        parvals = pars.valuesdict()
        period = parvals['period']
        shift = parvals['shift']
        decay = parvals['decay']

        if abs(shift) > pi/2:
            shift = shift - sign(shift)*pi

        if abs(period) < 1.e-10:
            period = sign(period)*1.e-10

        model = parvals['amp'] * sin(shift + x/period) * exp(-x*x*decay*decay)

        if data is None:
            return model
        if eps is None:
            return (model - data)
        return (model - data)/eps

In this example, ``x`` is a positional (required) argument, while the
``data`` array is actually optional (so that the function returns the model
@@ -98,8 +98,8 @@ to use the bounds on the :class:`Parameter` to do this::

but putting this directly in the function with::

    if abs(period) < 1.e-10:
        period = sign(period)*1.e-10

is also a reasonable approach. Similarly, one could place bounds on the
``decay`` parameter to take values only between ``-pi/2`` and ``pi/2``.
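
The three return modes of the objective function can be exercised directly. A stripped-down sketch of the same signature (a plain dict replaces ``Parameters``, numpy supplies the math names, and the model is simplified to the sine term only):

```python
import numpy as np

def residual(pars, x, data=None, eps=None):
    """Return model, raw residual, or weighted residual depending on inputs."""
    model = pars['amp'] * np.sin(pars['shift'] + x / pars['period'])
    if data is None:
        return model             # no data: evaluate the model
    if eps is None:
        return model - data      # unweighted residual
    return (model - data) / eps  # uncertainty-weighted residual

pars = {'amp': 2.0, 'shift': 0.0, 'period': 1.0}
x = np.linspace(0, 1, 4)
data = 2.0 * np.sin(x)
assert np.allclose(residual(pars, x), data)            # model only
assert np.allclose(residual(pars, x, data), 0.0)       # residual
assert np.allclose(residual(pars, x, data, 0.5), 0.0)  # weighted residual
```

The same callable thus serves for plotting the best-fit model after the fit and for computing residuals during it.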
@@ -153,6 +153,8 @@ class as listed in the :ref:`Table of Supported Fitting Methods
| Differential | ``differential_evolution`` |
| Evolution | |
+-----------------------+------------------------------------------------------------------+
| Brute force method | ``brute`` |
+-----------------------+------------------------------------------------------------------+
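
The newly added ``brute`` method is, at heart, an exhaustive grid evaluation of the objective. A minimal standalone sketch of that idea (not lmfit's implementation, which builds on ``scipy.optimize.brute``; names here are illustrative):

```python
import numpy as np

def brute_minimize(func, ranges, ns=21):
    """Evaluate func on a full grid over `ranges`; return best point and value."""
    grids = [np.linspace(lo, hi, ns) for (lo, hi) in ranges]
    mesh = np.meshgrid(*grids, indexing='ij')
    values = func(*mesh)                     # objective on the whole grid
    best = np.unravel_index(np.argmin(values), values.shape)
    return tuple(g[i] for g, i in zip(grids, best)), values[best]

# paraboloid with minimum at (1, -2)
f = lambda a, b: (a - 1.0) ** 2 + (b + 2.0) ** 2
(best_a, best_b), best_val = brute_minimize(f, [(-5, 5), (-5, 5)], ns=101)
assert abs(best_a - 1.0) < 0.1 and abs(best_b + 2.0) < 0.1
```

The cost grows as ``ns**nparams``, which is why brute force is usually reserved for low-dimensional problems or for seeding a local optimizer.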


.. note::
@@ -194,7 +196,7 @@ well formatted text tables you can execute::

with ``results`` being a :class:`MinimizerResult` object. Note that the method
:meth:`lmfit.parameter.Parameters.pretty_print` accepts several arguments
for customizing the output (e.g., column width, numeric format, etc.).

.. autoclass:: MinimizerResult

@@ -246,7 +248,7 @@ After a fit using the :meth:`leastsq` method has completed
successfully, standard errors for the fitted variables and correlations
between pairs of fitted variables are automatically calculated from the
covariance matrix. The standard error (estimated :math:`1\sigma`
error-bar) goes into the :attr:`stderr` attribute of the Parameter. The
correlations with all other variables will be put into the
:attr:`correl` attribute of the Parameter -- a dictionary with keys for
all other Parameters and values of the corresponding correlation.
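
To make the relationship concrete, here is a sketch of how standard errors and pairwise correlations follow from a covariance matrix (the function name and the two-variable matrix are illustrative, not lmfit internals):

```python
import numpy as np

def stderr_and_correl(cov, names):
    """Standard errors and per-variable correlation dicts from a covariance matrix."""
    stderr = np.sqrt(np.diag(cov))           # 1-sigma error bars
    correl = {}
    for i, ni in enumerate(names):
        correl[ni] = {nj: cov[i, j] / (stderr[i] * stderr[j])
                      for j, nj in enumerate(names) if j != i}
    return dict(zip(names, stderr)), correl

cov = np.array([[0.04, 0.006],
                [0.006, 0.09]])
err, corr = stderr_and_correl(cov, ['a', 'b'])
assert abs(err['a'] - 0.2) < 1e-12 and abs(err['b'] - 0.3) < 1e-12
assert abs(corr['a']['b'] - 0.1) < 1e-12
```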
@@ -272,8 +274,8 @@ The :class:`MinimizerResult` includes the traditional chi-square and reduced chi-square statistics:
:nowrap:
\begin{eqnarray*}
\chi^2 &=& \sum_i^N r_i^2 \\
\chi^2_\nu &=& \chi^2 / (N-N_{\rm varys})
\end{eqnarray*}
where :math:`r` is the residual array returned by the objective function
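
These two statistics can be computed directly from the residual array; for example:

```python
import numpy as np

def chisq_stats(resid, nvarys):
    """Chi-square and reduced chi-square from a residual array."""
    chisqr = np.sum(resid ** 2)
    redchi = chisqr / (resid.size - nvarys)   # N - N_varys degrees of freedom
    return chisqr, redchi

resid = np.array([1.0, -2.0, 1.0, 2.0])       # sum of squares = 10
chisqr, redchi = chisq_stats(resid, nvarys=2)
assert chisqr == 10.0 and redchi == 5.0
```

When the residual is properly weighted by the data uncertainties, a reduced chi-square near 1 indicates a fit consistent with those uncertainties.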
@@ -288,7 +290,7 @@ Also included are the `Akaike Information Criterion
held in the ``aic`` and ``bic`` attributes, respectively. These give slightly
different measures of the relative quality for a fit, trying to balance
quality of fit with the number of variable parameters used in the fit.
These are calculated as:

.. math::
:nowrap:
@@ -318,17 +320,17 @@ used to abort a fit.

.. function:: iter_cb(params, iter, resid, *args, **kws):

    User-supplied function to be run at each iteration.

    :param params: Parameters.
    :type params: :class:`Parameters`
    :param iter: Iteration number.
    :type iter: int
    :param resid: Residual array.
    :type resid: ndarray
    :param args: Positional arguments. Must match ``args`` argument to :func:`minimize`.
    :param kws: Keyword arguments. Must match ``kws`` argument to :func:`minimize`.
    :return: ``None`` for normal behavior, or any value like ``True`` to abort the fit.
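
The abort protocol can be mimicked with a toy iteration loop; everything below (the loop, the dummy update step, the names) is illustrative, not lmfit internals:

```python
def minimize_with_callback(objective, params, iter_cb, max_iter=100):
    """Toy iteration loop honoring an lmfit-style abort callback (sketch only)."""
    for it in range(max_iter):
        resid = objective(params)
        if iter_cb(params, it, resid):    # any truthy return aborts the fit
            return it, True
        # dummy 'update step' standing in for a real optimizer move
        params = {k: v * 0.9 for k, v in params.items()}
    return max_iter, False

# a callback that aborts once five iterations have run
stop_after_five = lambda params, it, resid: it >= 5
niter, aborted = minimize_with_callback(lambda p: [p['x']], {'x': 1.0},
                                        stop_after_five)
assert aborted and niter == 5
```

Real callbacks typically abort on wall-clock limits or divergence checks rather than a fixed iteration count.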


@@ -361,8 +363,11 @@ The Minimizer object has a few public methods:

.. automethod:: Minimizer.prepare_fit

.. automethod:: Minimizer.brute

For more information, see the examples in ``examples/lmfit_brute.py``.

.. automethod:: Minimizer.emcee

.. _label-emcee:

@@ -398,10 +403,10 @@ Solving with :func:`minimize` gives the Maximum Likelihood solution::
>>> mi = lmfit.minimize(residual, p, method='Nelder')
>>> lmfit.printfuncs.report_fit(mi.params, min_correl=0.5)
[[Variables]]
        a1: 2.98623688 (init= 4)
        a2: -4.33525596 (init= 4)
        t1: 1.30993185 (init= 3)
        t2: 11.8240752 (init= 3)
[[Correlations]] (unreported correlations are < 0.500)
>>> plt.plot(x, y)
>>> plt.plot(x, residual(mi.params) + y, 'r')
@@ -455,18 +460,18 @@ You can see that we recovered the right uncertainty level on the data::
median of posterior probability distribution
------------------------------------------
[[Variables]]
        a1: 3.00975345 +/- 0.151034 (5.02%) (init= 2.986237)
        a2: -4.35419204 +/- 0.127505 (2.93%) (init=-4.335256)
        t1: 1.32726415 +/- 0.142995 (10.77%) (init= 1.309932)
        t2: 11.7911935 +/- 0.495583 (4.20%) (init= 11.82408)
        f: 0.09805494 +/- 0.004256 (4.34%) (init= 1)
[[Correlations]] (unreported correlations are < 0.100)
        C(a2, t2) = 0.981
        C(a2, t1) = -0.927
        C(t1, t2) = -0.880
        C(a1, t1) = -0.519
        C(a1, a2) = 0.195
        C(a1, t2) = 0.146

>>> # find the maximum likelihood solution
>>> highest_prob = np.argmax(res.lnprob)
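
The medians and 1-σ quantiles in such a report come from simple percentiles of the flattened chain. A sketch with a synthetic single-parameter chain (the data are made up; only the percentile step mirrors the real workflow on ``res.flatchain``):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic 'flatchain': 4000 posterior samples of one parameter
flatchain = rng.normal(loc=3.0, scale=0.15, size=4000)

# 15.87/84.13 percentiles bracket the central 68.3% (1-sigma) interval
lo, median, hi = np.percentile(flatchain, [15.87, 50.0, 84.13])
err_minus, err_plus = median - lo, hi - median
assert abs(median - 3.0) < 0.05
assert 0.1 < err_minus < 0.2 and 0.1 < err_plus < 0.2
```

For skewed posteriors the two half-widths differ, which is exactly the information a single symmetric ``stderr`` cannot convey.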
@@ -497,20 +502,20 @@ Getting and Printing Fit Reports

.. function:: fit_report(result, modelpars=None, show_correl=True, min_correl=0.1)

    Generate and return text of report of best-fit values, uncertainties,
    and correlations from fit.

    :param result: :class:`MinimizerResult` object as returned by :func:`minimize`.
    :param modelpars: Parameters with "Known Values" (optional, default None).
    :param show_correl: Whether to show list of sorted correlations [``True``].
    :param min_correl: Smallest correlation absolute value to show [0.1].

    If the first argument is a :class:`Parameters` object,
    goodness-of-fit statistics will not be included.

.. function:: report_fit(result, modelpars=None, show_correl=True, min_correl=0.1)

    Print text of report from :func:`fit_report`.

An example fit with report would be

@@ -519,22 +524,22 @@ which would write out::
which would write out::

    [[Fit Statistics]]
        # function evals = 85
        # data points = 1001
        # variables = 4
        chi-square = 498.812
        reduced chi-square = 0.500
        Akaike info crit = -689.223
        Bayesian info crit = -669.587
    [[Variables]]
        amp: 13.9121944 +/- 0.141202 (1.01%) (init= 13)
        period: 5.48507044 +/- 0.026664 (0.49%) (init= 2)
        shift: 0.16203676 +/- 0.014056 (8.67%) (init= 0)
        decay: 0.03264538 +/- 0.000380 (1.16%) (init= 0.02)
    [[Correlations]] (unreported correlations are < 0.100)
        C(period, shift) = 0.797
        C(amp, decay) = 0.582
        C(amp, shift) = -0.297
        C(amp, period) = -0.243
        C(shift, decay) = -0.182
        C(period, decay) = -0.150
