Matlab 2018a: svd implementation changed and now crashes #72
Comments
The link you show is for svds, the sparse version. Matlab 2016b also gives a valid result. The covariance is effectively singular, with the 64th singular value deep in the bit noise. But it is not precisely zero (at least in 2016b), which I mention because the SVD function can be crashed, rarely, if several singular values are exactly zero, which only happens in theoretical cases. I have not seen the SVD function fail to converge on experimental data. How was this covariance matrix generated? I'll update my license to 2018a and see if I get the same problem.
Very interesting discussion here: https://www.mathworks.com/matlabcentral/answers/285861-different-svd-results-with-r2015b-and-r2016a — `version -blas` may reveal the different libraries used across versions and architectures. The rank of Cov in this example is actually 48 out of 64. Singular values 49-63 are about 10^-25 and the 64th is about 10^-28, against a first singular value of about 10^-11. Relative epsilon is 10^-16, so singular values 49-63 are on the hairy edge of the bit noise. If the libraries changed so that these are now treated as exactly zero, then the SVD is challenged with finding singular vectors in a null space of dimension 16. Hence the "failure to converge", which I have seen happen before. If you have time before I install 2018a, try Sn = svd(Cov) vs [Un,Sn] = svd(Cov,0) vs [Un,Sn] = svd(Cov,'econ'). I don't know whether 'econ' and 0 trigger different code paths, but the first case should still work anyway, since it only finds the singular values and does not iterate for the singular vectors, which is where I believe the SVD function fails to converge.
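To make the rank diagnosis above concrete, here is a minimal NumPy sketch (not Brainstorm code; the matrix is synthetic, standing in for Cov) of a values-only SVD followed by an effective-rank count with an explicit tolerance well above the bit noise:

```python
import numpy as np

# Synthetic stand-in for Cov: dimension 64 but only rank 48, so the
# trailing 16 singular values are pure rounding noise.
rng = np.random.default_rng(0)
B = rng.standard_normal((64, 48))
Cov = B @ B.T                               # symmetric PSD, rank 48 in exact arithmetic

S = np.linalg.svd(Cov, compute_uv=False)    # values only -- no vector iteration
tol = S[0] * 1e-10                          # explicit threshold, far above the noise floor
eff_rank = int(np.sum(S > tol))             # counts the 48 genuine singular values
```

The values-only call mirrors the `Sn = svd(Cov)` suggestion: it never iterates for the singular vectors, which is the step suspected of failing.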
The data comes from the dataset attached to an article describing an EEGLAB+Brainstorm pipeline for processing simple EEG experiments (soon to be published in the same Frontiers research topic as the new Brainstorm article). The noise covariance matrix is estimated from the pre-stim baseline of the individual trials, as indicated in our guidelines: http://neuroimage.usc.edu/brainstorm/Tutorials/NoiseCovariance#Variations_on_how_to_estimate_sample_noise_covariance. Here is the dataset, ready to load in Brainstorm with File > Load protocol > Load from zip file: We need to add ways to handle these cases correctly. Right now it just crashes abruptly with red errors in the command window. It would help if you could find a way to adjust the code, or at least to return a readable error that tells the user what should be fixed in the data. Output in Matlab 2018a of the commands you suggested:
More background: this occasionally happens in Python too. http://thread.gmane.org/gmane.comp.python.numeric.general/45614 — there, the solution was to increase the number of iterations. But in general it has been difficult to reproduce this convergence error. I'll keep digging.
Confirmed the problem here: "SVD did not converge", which shouldn't happen. PC: Windows 7 Service Pack 1, x64, Intel i5-4590, comparing Matlab 2018a and Matlab 2016a on the same machine. The last four singular values (raw, and scaled to the first singular value) differ slightly between 2016a and 2018a, and eps reports the same in both: 2.220446049250313e-16. So there are subtle differences in the noise floor below eps, but that should not be causing the "SVD did not converge" here. Testing by adding eps to the whole matrix, [Un,Sn] = svd(Cov) still does not converge. So there is extreme sensitivity here; have we somehow created a degenerate case in our Cov? Adding a tiny amount (eps(S(1)) = 4.038967834731580e-28) to each diagonal element helps, and at least "try"/"catch" works: counting with FAIL = 0 yields "Failed 31 out of 64". So we have a remarkably sensitive and degenerate matrix that should nonetheless not be crashing SVD, which has been a robust algorithm for decades. I'm guessing something in the library changed. Definitely worth reporting to Mathworks ASAP.
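The diagonal-shift experiment described above can be reproduced in NumPy (shown on a synthetic, exactly singular matrix rather than the actual Cov; np.spacing plays the role of Matlab's eps(x)):

```python
import numpy as np

# Exactly singular stand-in for Cov: 48 nonzero singular values, 16 exact zeros.
Cov = np.diag(np.concatenate([np.logspace(0, -3, 48), np.zeros(16)]))

shift = np.spacing(1.0)             # NumPy analogue of eps(S(1)) when S(1) = 1
S_reg = np.linalg.svd(Cov + shift * np.eye(64), compute_uv=False)

# The 16 exact zeros move up to ~shift: strictly positive, so the vector
# iteration no longer has to resolve a 16-dimensional exact null space.
```

The shift is one unit in the last place of the largest singular value, so it perturbs the genuine singular values by a relatively negligible amount while lifting the exact zeros off the floor.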
Same problem on an iMac (macOS 10.13.3 High Sierra, Intel Core i7), but it is running the same library.
I posted a bug report on the Mathworks website.
Response from the Mathworks:
Additional comment from the technical support:
Except that this complicates the detection of rank, since we just moved all singular values up off the noise floor. However, our detection of effective rank is supposed to be more robust than relying on bit noise. And we calculate the SVD twice in this fix. We could also put "try"/"catch" around these calls, or calculate the singular vectors separately from the singular values.
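For what the try/catch route would look like: NumPy surfaces the same LAPACK failure as np.linalg.LinAlgError ("SVD did not converge"), so a retry-with-shift wrapper is straightforward. This is a sketch only; the name safe_svd and the retry policy are made up here, not Brainstorm code:

```python
import numpy as np

def safe_svd(A, max_tries=3):
    """SVD with a try/catch fallback: on convergence failure, retry with a
    growing diagonal shift derived from the matrix scale."""
    # Frobenius norm as a cheap scale estimate (it does not itself call SVD).
    shift = np.spacing(np.linalg.norm(A, 'fro'))
    bump = 0.0
    for _ in range(max_tries):
        try:
            return np.linalg.svd(A + bump * np.eye(*A.shape))
        except np.linalg.LinAlgError:          # raised as "SVD did not converge"
            bump = shift if bump == 0.0 else bump * 10.0
    raise np.linalg.LinAlgError("SVD did not converge after retries")
```

On a healthy matrix the first attempt succeeds, so the only overhead is the try itself; the shift is paid only when the plain call has already failed.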
Actually, that may be the way to substitute an SVD function if the user is running 2018a. Sketch:

```matlab
function [U, S, V] = svd_2018a(A)
% Workaround sketch for the 2018a convergence failure, assuming a square
% input (as for a covariance matrix).
S = svd(A);                                       % values-only call still converges
[U, ~, V] = svd(A + eps(S(1)) * eye(length(S)));  % tiny diagonal shift for the vectors
S = diag(S);                                      % match svd's diagonal-matrix output
end
```

Expand this to handle the varargin options (e.g. 'econ') and nargout of 2 vs 3 for efficiency.
But can we really put in our own SVD without taking a performance hit? We call the SVD tens of thousands of times in a single source estimation, relying on calculations of subspaces from matrices that are often singular. For example, the MEG spherical head model is dimension three, rank two; will this sometimes fail to converge in our calculation of 15,000 cortical points?
If the user is not on 2018a, we would want the other versions of Matlab to skip the additional calls sketched above. Is there a way to "pass through" the SVD arguments with very little performance hit when it is not 2018a? Or, if the user is running 2018a, require them to put an alternative SVD calculation, such as the one above, into their private path, so that only those users take the performance hit?
Yes, it's easy to check for the version of Matlab.
Agreed that it's easy to test the version of Matlab. But we call the SVD in many places, so I don't see rewriting the existing code every time we call SVD, but rather writing a local version of SVD along the lines sketched above. But how do we do that without taking a hit on all the other versions of Matlab? In other words, I'm unsure how simple it is to write code that just passes through the argout and argin with only a minor performance hit, except in the case of 2018a. Is that straightforward?
We could replace the svd() calls with a bst_svd() that redirects to the appropriate function.
Would "bst_svd" be preferred over shadowing svd with a custom version (I see Matlab generates warning messages when you try this with SVD)? I'll leave that to your expert judgement. Presumably they will fix this bad problem in svd and future versions of Matlab won't have it, so we should only be looking at 2018a as the problem. Francois, I can flesh out the above function, or do you have enough to go on to write one? I'm not facile at passing varargin and varargout, but SVD doesn't have a lot of options anyway. It is important to distinguish 'econ' vs 0 as an option, and to distinguish two vs three outputs, for efficiency, since we sometimes call svd on very wide matrices for which we don't want the third output. But in general I just want to call SVD. Is it an option to tell users to simply stay away from 2018a until this bug is fixed?
Yes, it's definitely better to have a separate function bst_svd. If you know what to do, please write this function. I'm not so familiar with these concepts. To test for the number of output arguments, use for instance:
To pass all the input arguments to a function, see for instance bst_call.m:
Don't worry about efficiency. Adding one or two tests and one extra level of function calls will be almost undetectable, unless you're calling this function millions of times.
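The shape of such a pass-through wrapper, sketched in Python/NumPy as an analogy (the real bst_svd would be MATLAB, gating on version('-release') instead of the made-up AFFECTED flag used here):

```python
import numpy as np

AFFECTED = False   # stands in for a MATLAB check like strcmp(version('-release'), '2018a')

def bst_svd(A, compute_uv=True):
    """Redirecting wrapper: plain SVD everywhere except on affected versions,
    where the singular vectors come from a diagonally shifted copy."""
    if not AFFECTED or not compute_uv:
        return np.linalg.svd(A, compute_uv=compute_uv)     # untouched fast path
    s = np.linalg.svd(A, compute_uv=False)                 # values still converge
    U, _, Vt = np.linalg.svd(A + np.spacing(s[0]) * np.eye(*A.shape))
    return U, s, Vt                                        # vectors from the shifted copy
```

Unaffected callers pay only one flag test and one extra function call, which matches the "almost undetectable" expectation above.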
Bug still exists in 2018a, but was fixed in 2018b. |
The following functions now crash with some specific noise covariance matrices:
bst_inverse_linear_2016, line 915
bst_inverse_linear_2018, line 1093
Matlab 2017b:
Matlab 2018a:
Example file:
Cov.zip
The only documentation I could find about changes to this function is the following, but it is about input parameters; it doesn't say anything about the algorithm changing.
https://fr.mathworks.com/help/matlab/release-notes.html?rntext=svd&startrelease=R2017b&endrelease=R2018a&groupby=release&sortby=descending&searchHighlight=