
# InequalityConstraints

wojdyr edited this page Oct 7, 2011 · 4 revisions


The quotation below is from: Peter Gans, Data Fitting in the Chemical Sciences by the Method of Least Squares, John Wiley & Sons, 1992, chapter 5.2.2.

Before looking at ways of dealing with inequality constraints we must ask a fundamental question: are they necessary? In the physical sciences and in least-squares minimizations in particular, inequality constraints are not always justified. The most common inequality constraint is that some number that relates to a physical quantity should be positive, pⱼ > 0. If an unconstrained minimization leads to a negative value, what are we to conclude? There are three possibilities: (a) the refinement has converged to a false minimum; (b) the model is wrong; (c) the parameter is not well defined by the data and is not significantly different from zero. In each of these three cases a remedy is at hand that does not involve constrained minimization: (a) start the refinement from good first estimates of the parameters; (b) change the model; (c) improve the quality of the data by further experimental work. If none of these remedies cure the problem of non-negativity constraints, then something is seriously wrong with the patient, and constrained minimization will probably not help. [...]

The simplest way to deal with these constraints (if they are absolutely unavoidable) is by a change of variable. The following examples illustrate this point:

q = p² or q = eᵖ ensure that q ≥ 0

q = sin p ensures that −1 ≤ q ≤ 1

q = p + z² ensures that q ≥ p

However, these changes of variable may have a serious disadvantage. If the model is linear in a parameter, it is non-linear in its square root (or logarithm or arcsine). This will cause the refinement to need more iterations and may also create multiple minima. Therefore the change of variable should be used with caution. [...]
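The q = p² substitution can be seen in a small numerical sketch. The example below is purely illustrative and is not fityk's fitting engine: the toy data, the Gaussian-shaped model, and the golden-section minimizer are all assumptions made for the demonstration. The height h is kept non-negative by refining p and setting h = p²:

```python
import math

# Toy data: y = h * exp(-x^2) with true height h = 0.25 (an assumed example).
xs = [-2, -1, 0, 1, 2]
ys = [0.25 * math.exp(-x * x) for x in xs]

def sse(h):
    """Sum of squared residuals for a given height h."""
    return sum((y - h * math.exp(-x * x)) ** 2 for x, y in zip(xs, ys))

def sse_reparam(p):
    """Change of variable h = p^2, so any real p gives h >= 0."""
    return sse(p * p)

def golden_min(f, lo, hi, tol=1e-9):
    """Crude 1-D golden-section search (a stand-in for a real optimizer)."""
    phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - phi * (b - a), a + phi * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

p_opt = golden_min(sse_reparam, 0.0, 2.0)
h_opt = p_opt ** 2   # recovered height, non-negative by construction
```

Note that sse_reparam is quartic in p even though sse is quadratic in h, which is exactly the disadvantage described above: the reparametrized problem has two symmetric minima (at ±p) and is harder for an optimizer than the original.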

Constraints may be introduced, on an *ad hoc* basis, if the unconstrained minimization would cause them to be violated. For example, if the parameter shifts are such that the new parameters would violate a non-negativity constraint, the shift vector can be reduced in length so as to reduce the largest element to −0.9 times the corresponding parameter value; that parameter would then be reduced to a tenth of its former value. Of course the difficulty with *ad hoc* methods is that they do not come with guarantees, and they may fail at any time. Mathematicians have devoted considerable time and effort to the problem of constrained minimization, as the interested reader will find from the references. Outside the physical sciences the problem is of major importance. However, we will end this section as we began, by emphasizing that if the constraints can be avoided by reformulating the model then that should be done.
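The *ad hoc* shift reduction described above can be sketched as a step-clamping rule (a hypothetical helper written for illustration, not code from fityk or from Gans): if any component of the shift would make its parameter negative, the whole shift vector is scaled down so that the worst offender lands at 0.1 times its former value.

```python
def clamp_step(params, shift):
    """Scale the whole shift vector so no parameter goes negative.

    For each component where p + s < 0, the scale factor -0.9 * p / s
    would move that parameter to exactly 0.1 * p; taking the smallest
    such factor keeps every parameter positive.
    """
    scale = 1.0
    for p, s in zip(params, shift):
        if p + s < 0:  # this component of the step would violate p >= 0
            scale = min(scale, -0.9 * p / s)
    return [p + scale * s for p, s in zip(params, shift)]
```

For example, with params = [1.0, 5.0] and shift = [-2.0, 1.0], the first component would become negative, so the step is scaled by 0.45 and the new values are [0.1, 5.45]: the offending parameter drops to a tenth of its former value, as in the quotation.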

Fityk does not have dedicated support for inequality constraints (yet).

A change of variable, described above, can be done either by manually setting the parameters, e.g.:

```
$sqrt_h = ~10
%func.height = $sqrt_h^2
```

or by defining a new function type, e.g.:

```
define LorentzianPositive(sqrt_height=sqrt(height), center, hwhm) = Lorentzian(sqrt_height^2, center, hwhm)
```

Such a new function will appear in the function list in the GUI and can be used like built-in functions.
