
FIX: AFNI TSNR calculation, ADD: BlurToFWHM, ADD: testing outputs against reference #280

Merged (61 commits) on Feb 25, 2021

Conversation

@Shotgunosine (Collaborator) commented Jan 28, 2021

Sorry about the size of this one; the number of changes kind of got away from me.
The first big part is that I've fixed the TSNR calculation in the AFNI estimator.
The second part is that I've added a new option to the AFNI estimator to smooth with 3dBlurToFWHM.
The third big part is that I've created a set of reference outputs for three different run settings (https://gin.g-node.org/shotgunosine/fitlins_tests), which the tests now compare against. I also added estimator and smoothing information to the dataset description, but I can pull that out once @adelavega's PR is merged to master.

@pep8speaks commented Jan 28, 2021

Hello @Shotgunosine, Thank you for updating!

Cheers! There are no style issues detected in this Pull Request. 🍻 To test for issues locally, pip install flake8 and then run flake8 fitlins.

Comment last updated at 2021-02-12 14:40:28 UTC

@adelavega (Collaborator) commented Feb 1, 2021

You can run pytest coverage in "append" mode (--cov-append); that might help. See: https://github.com/PsychoinformaticsLab/pliers/blob/master/.github/workflows/python-package.yml

@Shotgunosine (Collaborator, Author)

I don't think that's it; each of the three tests (afni_smooth, afni_blurto, and nistats_smooth) is run in a separate job on CircleCI, so each is uploaded to Codecov separately. As far as I can tell from the individual XML reports, the lines have been run.

@adelavega (Collaborator)

Hmm, I'm not sure then, sorry.

adelavega self-requested a review on February 1, 2021 at 20:43
@adelavega (Collaborator) left a review:

Overall LGTM.

From a high level, the only quibble I have is that the three test conditions differ in both estimator and smoothing, but the latter seems less important to fitlins overall. Ideally we would test more permutations of fitlins parameters. That said, that's more of a reach goal, and this is already a nice improvement on what we have.

I'm sure @effigies will have more to say than me though.

.circleci/config.yml: review thread resolved (outdated)
fitlins/interfaces/afni.py: review thread resolved (outdated)
@Shotgunosine (Collaborator, Author)

@effigies, I've finally got coverage working. It turns out the issue was that the tests were pointed at one copy of the code (/src/fitlins) while fitlins was being called from another location ($CONDA_PREFIX/bin/fitlins). This resulted in two copies of the source tree appearing in the coverage reports, one with hits and one without. For whatever reason, Codecov interpreted that as meaning the lines had no hits. I changed the test location to point at $CONDA_PREFIX/lib/python3.6/site-packages/fitlins/tests and it's reporting coverage correctly now.

@effigies (Collaborator)

Cool. It might be that we should have used --pyargs fitlins or something, but that can be fiddled with later.

I will try to review this today, but it's a somewhat meeting-heavy day.

vol_labels = parse_afni_ext(rbetas)["BRICK_LABS"].split("~")
mat = pd.read_csv(self.inputs.design_matrix, delimiter="\t", index_col=0)
# find the name of the constant column
const_name = mat.columns[(mat != 1).sum(0) == 0].values[0]
Review comment (Collaborator):

Does this choke if there are non-steady-state volumes? Generally the constant should be zeroed out like everything else if you're censoring a volume.

Reply (Collaborator, Author):

In addition to verifying that zeroing out the constant along with everything else (or not) doesn't impact the estimates of the other coefficients, I also checked whether we should ever expect a design matrix that doesn't have an all-1s constant column. We're getting the design matrix from nilearn's make_first_level_design_matrix function, which adds an all-1s constant column as part of the drift model step, even if the drift model is None:
https://github.com/nilearn/nilearn/blob/d2ad41a9fa0aa7041c7e826518c6c3692a92a2e1/nilearn/glm/first_level/design_matrix.py#L122-L173

Based on that, I think this logic is fine.
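
For reference, here's a minimal sketch of that behavior (frame times and events are made-up values, and it assumes a recent nilearn); it reuses the constant-column lookup from the diff above:

# Illustrative sketch: nilearn still adds an all-1s "constant" column
# even when drift_model=None, so the lookup below always finds a match.
import numpy as np
import pandas as pd
from nilearn.glm.first_level import make_first_level_design_matrix

frame_times = np.arange(20) * 2.0  # 20 volumes, TR = 2 s (made-up values)
events = pd.DataFrame({"onset": [0, 10], "duration": [5, 5],
                       "trial_type": ["a", "b"]})

mat = make_first_level_design_matrix(frame_times, events, drift_model=None)
const_name = mat.columns[(mat != 1).sum(0) == 0].values[0]
print(const_name)  # "constant"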

Comment on lines +137 to +163
ref_nans = np.isnan(ref_data)
out_nans = np.isnan(out_data)

if (ref_nans == out_nans).all():
ref_nonan = ref_data[~ref_nans]
out_nonan = out_data[~out_nans]

diff = np.abs(ref_nonan - out_nonan)
rel_diff = (diff) / ((np.abs(ref_nonan) + np.abs(out_nonan)) / 2)
if max_abs and max_rel:
over = (diff > max_abs)
diff = diff[over]
rel_diff = rel_diff[over]
if (rel_diff > max_rel).any():
res.append(f"Relative difference (max of {rel_diff.max()})"
f" greater than {max_rel} for {ref_nii} and {out_nii}.")
elif max_rel:
if (rel_diff > max_rel).any():
res.append(f"Relative difference (max of {rel_diff.max()})"
f" greater than {max_rel} for {ref_nii} and {out_nii}.")
else:
if ((diff > max_abs).any()):
res.append(f"Absolute difference (max of {diff.max()})"
f" greater than {max_abs} for {ref_nii} and {out_nii}.")

else:
res.append(f"{ref_nii} nans don't match {out_nii} nans.")
@effigies (Collaborator) commented Feb 22, 2021:

Is this just

np.allclose(ref_data, out_data, rtol=max_rel, atol=max_abs, equal_nan=True)

Ref: https://numpy.org/doc/stable/reference/generated/numpy.allclose.html

Reply (Collaborator, Author):

I think it's different from numpy allclose. I was trying to mimic the functionality of nibabel diff, but with added tolerance for NaNs. As far as I can tell, nibabel diff handles the relative and absolute specifications differently from numpy allclose. allclose uses:
absolute(a - b) <= (atol + rtol * absolute(b))

whereas the nibabel-diff-style check here uses the absolute tolerance as a mask and only tests the relative differences on elements whose absolute difference already exceeds the absolute tolerance.

I'm not sure whether those work out to be the same thing or not, though.
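
For what it's worth, here's a small illustrative sketch (made-up numbers, not from the test data) showing the two criteria can disagree on the same pair of values:

# Made-up values for illustration only.
import numpy as np

ref = np.array([1.00])
out = np.array([1.05])
max_abs, max_rel = 0.01, 0.04

# numpy criterion: |a - b| <= atol + rtol * |b|
close_numpy = np.allclose(ref, out, rtol=max_rel, atol=max_abs)  # True

# nibabel-diff-style criterion from the snippet above: flag only where the
# absolute difference exceeds max_abs AND the symmetric relative difference
# exceeds max_rel.
diff = np.abs(ref - out)
rel_diff = diff / ((np.abs(ref) + np.abs(out)) / 2)
flagged = (rel_diff[diff > max_abs] > max_rel).any()  # True (reported as different)

print(close_numpy, flagged)  # allclose says "close", the masked check flags a difference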

@effigies (Collaborator) left a review:

On the whole looks good. Left a couple small comments.

effigies merged commit abcdf6b into poldracklab:master on Feb 25, 2021