
Implement loss function for parameter fitting #99

Closed
matthiaskoenig opened this issue Mar 17, 2021 · 0 comments
Labels: feature (New feature or request), parameter estimation

**matthiaskoenig** (Owner) commented:
From the `scipy.optimize.least_squares` documentation
(https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.least_squares.html):

`loss` : str or callable, optional

Determines the loss function. The following keyword values are allowed:

- `'linear'` (default): `rho(z) = z`. Gives a standard least-squares problem.
- `'soft_l1'`: `rho(z) = 2 * ((1 + z)**0.5 - 1)`. The smooth approximation of l1 (absolute value) loss. Usually a good choice for robust least squares.
- `'huber'`: `rho(z) = z if z <= 1 else 2*z**0.5 - 1`. Works similarly to `'soft_l1'`.
- `'cauchy'`: `rho(z) = ln(1 + z)`. Severely weakens the influence of outliers, but may cause difficulties in the optimization process.
- `'arctan'`: `rho(z) = arctan(z)`. Limits the maximum loss on a single residual; has properties similar to `'cauchy'`.

If callable, it must take a 1-D ndarray `z = f**2` and return an array_like with shape `(3, m)`, where row 0 contains function values, row 1 contains first derivatives, and row 2 contains second derivatives. Method `'lm'` supports only the `'linear'` loss.
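As a sketch of how the `loss` argument could be exposed for parameter fitting, here is a minimal example on synthetic data (the data, residual function, and the custom Cauchy-like callable are all hypothetical, chosen only to illustrate the keyword and the `(3, m)` callable contract described above):

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical synthetic data: exponential decay with injected outliers.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
y = 2.5 * np.exp(-0.7 * t) + 0.05 * rng.standard_normal(t.size)
y[::10] += 1.0  # outliers that a robust loss should down-weight

def residuals(p):
    """Residuals of the model a * exp(-k * t) against the data."""
    a, k = p
    return a * np.exp(-k * t) - y

# Robust fit using the built-in 'soft_l1' loss; f_scale sets the
# residual scale at which the loss transitions from quadratic to linear.
res_robust = least_squares(residuals, x0=[1.0, 1.0],
                           loss="soft_l1", f_scale=0.1)

def cauchy_like(z):
    """Custom loss: takes z = f**2 (1-D), returns shape (3, m) with
    rho(z), rho'(z), rho''(z) in rows 0, 1, 2 respectively."""
    rho = np.empty((3, z.size))
    rho[0] = np.log1p(z)            # rho(z) = ln(1 + z)
    rho[1] = 1.0 / (1.0 + z)        # first derivative
    rho[2] = -1.0 / (1.0 + z) ** 2  # second derivative
    return rho

res_custom = least_squares(residuals, x0=[1.0, 1.0],
                           loss=cauchy_like, f_scale=0.1)
print(res_robust.x, res_custom.x)
```

Note that per the documentation above, a callable loss cannot be combined with `method='lm'`, which supports only the `'linear'` loss, so a wrapper would need to guard that combination.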