scipy.optimize.curve_fit leads to unexpected behavior when input is a standard python list #3037
Comments
From what I remember regarding 1]: curve_fit doesn't directly care about what x is; it could be anything (an instance of a class that holds some data). I'm trying to see what's going on with the starting value.

Only a starting value == 1.0 can cause this result. Anything else ends up with a shape mismatch.
This should be fixed in
I don't think we should "fix" the automatic conversion: x can be anything the user wants, a sequence of anything that the user wants to feed to her function. With newer numpy we would just get useless object arrays with asarray or asanyarray. Given the docstring, and since I'm not using curve_fit myself, I don't care much if this reduction in functionality is introduced to make curve_fit a bit more "foolproof".
ydata has to be array_like (the target data), because curve_fit calculates the error itself. But xdata is just transported to the user's function, and is not used by curve_fit itself.
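That split can be sketched with a hypothetical residual builder (illustrative names, not scipy's actual internals): ydata takes part in the arithmetic and so must be numeric, while xdata is only forwarded.

```python
import numpy as np

def make_residuals(f, xdata, ydata):
    # Hypothetical sketch, not scipy's real code: ydata must be numeric
    # because the residual arithmetic happens here, while xdata is
    # forwarded to the user's model function untouched.
    ydata = np.asarray(ydata, dtype=float)

    def residuals(params):
        return ydata - f(xdata, *params)

    return residuals

# Usage: xdata can be any object the model function understands.
res = make_residuals(lambda x, a: a * np.asarray(x), [1.0, 2.0], [2.0, 4.0])
err = res([2.0])  # perfect fit at a=2, so the residuals are zero
```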
At least we can do something like this then:
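The snippet that followed was stripped from this copy of the thread; a sketch in that spirit (not the code that actually landed) would coerce plain Python sequences and pass anything else through untouched, so exotic xdata keeps working:

```python
import numpy as np

def coerce_xdata(xdata):
    # Hypothetical helper: lists/tuples become float arrays so that
    # a * xdata scales elementwise; any other object is passed through
    # unchanged, preserving "x can be anything" for custom containers.
    if isinstance(xdata, (list, tuple)):
        return np.asarray(xdata, dtype=float)
    return xdata

arr = coerce_xdata([1, 2, 3])       # becomes a float ndarray
obj = coerce_xdata({"t": [1, 2]})   # non-sequence: passed through as-is
```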
We should also fix the docs to state that xdata and ydata can be of different types.
I'd vote for being foolproof over allowing for exotic use cases with non-numeric x.
curve_fit is a convenience function, so being foolproof is a high priority.
Proposed fix in gh-3166.
BUG: make curve_fit() work with array_like input. Closes gh-3037.
Inspired by the Stack Overflow question:
http://stackoverflow.com/q/19713689/249341
A minimal working example of the potential pitfall:
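The example's code was stripped from this copy of the issue; the underlying pitfall can be sketched (without invoking curve_fit itself) roughly as follows:

```python
import numpy as np

def f(x, a):
    # Inside curve_fit, the model is called with xdata exactly as the
    # user passed it. For an ndarray, a * x scales elementwise; for a
    # standard Python list and an integer a, it REPEATS the list.
    return a * x

x = [1.0, 2.0, 3.0]   # standard Python list, not an ndarray

repeated = f(x, 2)               # list repetition: length doubles
scaled = f(np.asarray(x), 2)     # elementwise scaling: length unchanged

print(repeated)  # [1.0, 2.0, 3.0, 1.0, 2.0, 3.0]
print(scaled)    # [2. 4. 6.]
```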
It's clear what is going on here: when x is a standard Python list, a*x makes a copies of that list instead of scaling it elementwise. For someone new to Python who works primarily with the scipy/numpy stack, this is extremely strange and unexpected behavior (I've watched more than one person struggle with this).

Suggestions:
1] Is there any reason not to cast Python lists into numpy arrays? If not, then automatically cast on input.
2] Check the size of x via len. If it changes size during optimization, perhaps give a warning?
3] At the very least, update the docs to note that there won't be an upcast.
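In the spirit of suggestion 2], a hypothetical guard (illustrative only, not scipy code) could flag a size change rather than letting it surface as a bare shape mismatch:

```python
import warnings

def warn_if_resized(result, expected_len):
    # Hypothetical check: if the model's output length differs from
    # the input length, something like list repetition probably
    # happened; warn and report it instead of failing cryptically.
    if len(result) != expected_len:
        warnings.warn(
            "model output changed size during optimization; did a*x "
            "repeat a Python list instead of scaling an array?",
            RuntimeWarning,
        )
        return True
    return False

flagged = warn_if_resized(2 * [1.0, 2.0], expected_len=2)  # list repeated
ok = warn_if_resized([2.0, 4.0], expected_len=2)           # size unchanged
```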