LSMR fails on some random values when providing an x0 #8163
Comments
docker run -it clearlinux/machine-learning
Could you also add which architecture you are running the tests on (32- or 64-bit), which Linux distribution, and so on?
assert_almost_equal tests to 1e-7 precision, but the default precision for lsmr is 1e-6, so I think it is not expected that the solutions are the same. In particular, it is well known that floating-point reproducibility, even for the same problem, is not achieved with modern optimized linear algebra libraries.
I guess the test should either set atol/btol appropriately or relax the assert_almost_equal tolerance.
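For illustration, a sketch of both options, assuming a hypothetical well-conditioned random G (the matrix in the actual scipy test is constructed differently):

```python
import numpy as np
from numpy.testing import assert_almost_equal
from scipy.sparse.linalg import lsmr

rng = np.random.RandomState(0)                 # fixed seed so the run is repeatable
n = 35
G = np.eye(n) + rng.normal(size=(n, n)) / n    # hypothetical well-conditioned stand-in
b = rng.normal(size=n)
x0 = rng.normal(size=n)

# Option 1: tighten the solver's stopping tolerances well below the
# ~1.5e-7 implied by assert_almost_equal's default decimal=7.
x_ref = lsmr(G, b, atol=1e-10, btol=1e-10)[0]
x = lsmr(G, b, x0=x0, atol=1e-10, btol=1e-10)[0]
assert_almost_equal(x_ref, x)

# Option 2: keep lsmr's 1e-6 defaults and relax the comparison instead,
# leaving headroom over the solver's stopping tolerance.
x_ref = lsmr(G, b)[0]
x = lsmr(G, b, x0=x0)[0]
assert_almost_equal(x_ref, x, decimal=4)
```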
It's a 64-bit Clearlinux 19090 with kernel 4.13 and an Intel Core i5-6260U. I have also seen the same behavior on Fedora 26, but the failing values of G and b differ across distributions. I had to run the test code multiple times on Fedora to get failing values of G and b.
We should also fix the random seed to be able to reproduce such problems easily. |
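For example (seed value arbitrary):

```python
import numpy as np
np.random.seed(42)  # arbitrary fixed value; the random G and b are now the same on every run
```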
On my computer, isolve.test() fails when testing lsmr() at this point: https://github.com/scipy/scipy/blob/master/scipy/sparse/linalg/isolve/tests/test_lsmr.py#L85 because the arrays x_ref and x are not almost equal.
Since the matrix G and the array b are randomly defined in test_lsmr.py, I decided to run the code as a standalone script multiple times, assuming that it would generate different G and b values each time.
I found that sometimes the test code passes and sometimes it does not.
I also tried modifying the size of G and b, and found that the failure is reproducible only when size >= 25 (the default is 35) and becomes more frequent for sizes larger than 55. I saved the values of G and b from a failing run and hardcoded them into the script.
My assumption is that the default tolerance is not appropriate.
Reproducing code example:
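A minimal sketch along the lines described above, assuming a plain random G (the construction in the actual test may differ):

```python
import numpy as np
from numpy.testing import assert_almost_equal
from scipy.sparse.linalg import lsmr

n = 35  # the test's default size; failures appeared only for n >= 25

for trial in range(20):
    G = np.random.normal(size=(n, n))   # hypothetical stand-in for the test's G
    b = np.random.normal(size=n)
    x0 = np.random.normal(size=n)

    x_ref = lsmr(G, b)[0]          # reference solution (x0 defaults to zero)
    x = lsmr(G, b, x0=x0)[0]       # same problem, started from a random x0

    try:
        # default decimal=7 (~1.5e-7), tighter than lsmr's 1e-6 defaults
        assert_almost_equal(x_ref, x)
    except AssertionError:
        print("trial %d failed" % trial)
        np.save("G_fail.npy", G)   # save the failing values for hardcoding
        np.save("b_fail.npy", b)
```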
Error message:
Scipy/Numpy/Python version information:
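A snippet along the lines of the SciPy issue template can collect this:

```python
import sys, scipy, numpy
print(scipy.__version__, numpy.__version__, sys.version_info)
```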