Non-explicit error message in lobpcg #10974
Comments
@glemaitre Yes, it was really intended in scipy/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py (lines 105 to 108 in 91f4394) not to fail early. It appears that the error is coming from the `LinearOperator` evaluation. Having said that, let me comment that your script is not using `lobpcg` well for svd:
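For context, a minimal sketch of the supported route (my own example, not the benchmark script): `svds` accepts `solver="lobpcg"` and sets up the underlying symmetric eigenproblem itself, so `lobpcg` never has to be driven by hand for an SVD.

```python
import numpy as np
from scipy.sparse.linalg import svds

# Hypothetical well-conditioned example (not the scikit-learn benchmark script):
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))

# svds builds the symmetric normal-equations operator internally, so passing
# solver="lobpcg" is the supported way to use lobpcg for an SVD.
u, s, vt = svds(A, k=5, solver="lobpcg")

# Compare against the dense SVD; sort both sides since the ordering of the
# returned singular values is not guaranteed to match across solvers.
s_dense = np.linalg.svd(A, compute_uv=False)
```

On a well-conditioned matrix like this one, the five singular values from `svds` agree closely with the top five from the dense SVD.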
Yes, that's why I am expecting an error to be raised.
Well, it does give an error :-) However, in my opinion lobpcg should still be expected to run on this example without errors: it is just given a symmetric matrix, even one with mostly zero eigenvalues.
OK, I get it and I have found a simple example with the same issue:
always fails, while with
Let me think how to handle it best... @glemaitre, thanks for finding and reporting such a case!
I am using scikit-learn spectral clustering for my clustering problem. I use the following configuration for the spectral clustering:

but I get the error

when I use it. Any idea why?
@FTB-B I could investigate if you provide a reproducible example, please. The issue should not be related to clusterqr, so please rerun with a different labeling function already available in scikit-learn and submit a formal bug report with a ping to me.
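A minimal reproducible example of the requested kind might look like the sketch below; the data and every parameter value are invented for illustration, not taken from the report above.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Hypothetical synthetic data: two well-separated Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(0.0, 0.3, (30, 2)),
    rng.normal(3.0, 0.3, (30, 2)),
])

model = SpectralClustering(
    n_clusters=2,
    eigen_solver="lobpcg",   # the solver under discussion in this thread
    assign_labels="kmeans",  # a labeling scheme already in scikit-learn
    random_state=0,
)
labels = model.fit_predict(X)
```

On data this cleanly separated, the two blobs should each land in a single cluster, which makes the script a usable baseline before swapping in the problematic configuration.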
I ran into the same problem as @glemaitre. I have a (504, 504) matrix. First, I think the issue is at line 105. Concerning the fix, my matrix has 78 non-zero singular values, but it crashes for smaller values of k. Running

```python
import numpy as np
import scipy.linalg as lg
import scipy.sparse.linalg as slg

A = np.load("data_crash.npy")
u0, s0, v0 = lg.svd(A)
x = lg.norm(A)
for k in range(1, 78):
    try:
        u1, s1, v1 = slg.svds(A, k=k, solver="lobpcg")
        u2, s2, v2 = slg.svds(A, k=k, solver="arpack")
        print(k, "success")
        print("dense truncation", lg.norm(u0[:, :k] * s0[:k] @ v0[:k] - A) / x)
        print("arpack truncation", lg.norm(u2 * s2 @ v2 - A) / x)
        print("lobpcg truncation", lg.norm(u1 * s1 @ v1 - A) / x)
    except ValueError as err:
        print(k, err)
```

gives me
@ogauthe this is a tricky matrix for lobpcg due to so many 0 eigenvalues; see #10974 (comment). One simple fix could be just adding a small-magnitude random matrix to A to turn exact zeros into small nonzero values.
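A sketch of that workaround (my own construction; the noise scale `eps` is an arbitrary choice for illustration, not a recommendation from scipy):

```python
import numpy as np
from scipy.sparse.linalg import svds

# Hypothetical rank-deficient matrix: rank 10, so 50 exactly-zero
# singular values.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 10)) @ rng.standard_normal((10, 60))

# Perturb with tiny random noise so exact zeros become small nonzero
# singular values; eps is chosen far below the leading singular values.
eps = 1e-8 * np.linalg.norm(A)
A_perturbed = A + eps * rng.standard_normal(A.shape)

# Requesting k well below the numerical rank now runs with the lobpcg solver.
u, s, vt = svds(A_perturbed, k=5, solver="lobpcg")
```

Since the perturbation is orders of magnitude below the leading singular values, the computed values still match the unperturbed spectrum to good accuracy.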
Thank you for your answer. Beyond the error message when the function crashes, I think there is a real problem here: for small k, the returned values are wrong. I can open a separate issue for this; a wrong result is much worse than a crash for me.
I had not paid attention to the values, sorry. Wrong values for small k are surely very odd, and unrelated to the many zeros. Please submit a separate bug report with a ping to me and upload the matrix so the problem can be reproduced.
In scikit-learn, we are benchmarking the integration of `lobpcg` to compute some `svd`. On an ill-posed problem, I got a weird error about an array being `None` instead of a `LinAlgError`.

The issue is coming from the following lines:

scipy/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py, lines 105 to 108 in 91f4394

The `LinAlgError` is caught. Then the arrays are set to `None` for some reason, but this state will not allow resolving anything later and just fails with a more cryptic error (see traceback below). I was wondering if it was really intended not to fail early?
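The failure mode being described (catch the `LinAlgError`, set the result to `None`, crash later with a cryptic message) can be sketched as follows; this is a hypothetical illustration of the pattern, not scipy's actual code:

```python
import numpy as np

def b_orthonormalize_sketch(V):
    """Hypothetical sketch of the pattern under discussion (not scipy's code)."""
    gram = V.T @ V
    try:
        # Cholesky raises LinAlgError when V has linearly dependent columns.
        R = np.linalg.cholesky(gram)
    except np.linalg.LinAlgError:
        # Swallowing the error and returning None defers the failure:
        # downstream code later hits a cryptic "'NoneType' object ..." error
        # instead of a clear LinAlgError at the point of breakdown.
        R = None
    return R

print(b_orthonormalize_sketch(np.ones((4, 2))))  # rank-deficient -> None
```

Raising (or re-raising) at the `except` site instead would surface the failure where it actually happens, which is the behavior the traceback below argues for.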
Reproducing code example:

Here, the matrix `X` should be transposed to be used in `lobpcg`, and that is the reason for having an ill-posed problem which raises an error.

Error message:

I was expecting something like:

Scipy/Numpy/Python version information: