Summary
In `compute_dvars` we remove zero-variance voxels:

nipype/nipype/algorithms/confounds.py, lines 1045 to 1054 in 3e7e613:

```python
# Robust standard deviation (we are using "lower" interpolation
# because this is what FSL is doing
func_sd = (
    np.percentile(mfunc, 75, axis=1, interpolation="lower")
    - np.percentile(mfunc, 25, axis=1, interpolation="lower")
) / 1.349

if remove_zerovariance:
    mfunc = mfunc[func_sd != 0, :]
    func_sd = func_sd[func_sd != 0]
```
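For illustration, here is a minimal, self-contained sketch (synthetic data, not from the failing dataset) showing that a voxel whose values vary only at a vanishingly small scale still passes the `func_sd != 0` check:

```python
import numpy as np

# Two synthetic voxel time series (one per row). The second voxel's values
# differ only at the 1e-30 scale, so its variance is tiny but nonzero.
mfunc = np.array([
    1000.0 + 10.0 * np.sin(np.linspace(0, 10, 50)),
    1e-30 * np.arange(50, dtype=np.float64),
])

# Same robust-SD estimate (IQR / 1.349); the exact interpolation mode does
# not matter for this illustration, so the default is used here.
func_sd = (np.percentile(mfunc, 75, axis=1) - np.percentile(mfunc, 25, axis=1)) / 1.349

print(func_sd)       # second entry is ~2e-29: tiny but nonzero
print(func_sd != 0)  # [ True  True] -> the near-constant voxel is kept
```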
However, if `func_sd` is small but nonzero, this will propagate to `diff_sdhat`:

nipype/nipype/algorithms/confounds.py, lines 1061 to 1063 in 3e7e613:

```python
# Compute (predicted) standard deviation of temporal difference time series
diff_sdhat = np.squeeze(np.sqrt(((1 - ar1) * 2).tolist())) * func_sd
diff_sd_mean = diff_sdhat.mean()
```
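The expression reduces to `diff_sdhat = sqrt(2 * (1 - ar1)) * func_sd`, so the predicted standard deviation scales linearly with `func_sd`, and the near-constant voxel ends up with a near-zero denominator. Continuing the sketch with made-up numbers:

```python
import numpy as np

func_sd = np.array([10.0, 2e-29])  # robust SDs from the sketch above (illustrative)
ar1 = np.array([0.3, 0.3])         # assumed lag-1 autocorrelations

diff_sdhat = np.sqrt((1 - ar1) * 2) * func_sd
print(diff_sdhat)                  # second entry is ~2.4e-29 -> near-zero denominator
```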
This in turn can blow up the values computed in

nipype/nipype/algorithms/confounds.py, lines 1078 to 1080 in 3e7e613:

```python
diff_vx_stdz = np.square(
    func_diff / np.array([diff_sdhat] * func_diff.shape[-1]).T
)
```
If `func_sd` is sufficiently small, the squared ratio overflows float32.
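Concretely, the ratio of an ordinary temporal difference to a denominator of that size, once squared, exceeds the float32 maximum of about 3.4e38. A minimal sketch with made-up float32 values:

```python
import numpy as np

func_diff = np.array([1e-3, 5e-4], dtype=np.float32)  # ordinary temporal differences
diff_sdhat = np.float32(2.4e-29)                      # near-zero predicted SD from above

diff_vx_stdz = np.square(func_diff / diff_sdhat)      # ratios of ~4e25, squared
print(diff_vx_stdz)  # [inf inf] -- overflow, since 4e25 ** 2 >> 3.4e38
```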
Proposal
We should set a default threshold for what counts as "zero variance". I think 1e-10 is more than small enough: in the example dataset I have, only one additional voxel is caught by this less stringent threshold, and that is enough to avoid the failure.
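For concreteness, the change I have in mind is roughly the following drop-in replacement for the quoted filter (the `variance_tol` name and how it gets exposed are placeholders, not a final interface):

```python
# Hypothetical sketch: treat any robust SD at or below a small tolerance as
# "zero variance" instead of testing for exact equality with zero.
variance_tol = 1.0e-10

if remove_zerovariance:
    keep = func_sd > variance_tol
    mfunc = mfunc[keep, :]
    func_sd = func_sd[keep]
```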
Will propose a fix soon, but wanted to give people a chance to comment.