
[FIX] fix afni.allineate interface #2502

Merged: 7 commits merged into nipy:master on Apr 24, 2018

Conversation

@vanandrew (Contributor) commented Mar 20, 2018

Fixes #2499, #2216

Changes proposed in this pull request
- See linked issue #2499

@codecov-io commented Mar 20, 2018

Codecov Report

Merging #2502 into master will increase coverage by <.01%.
The diff coverage is 40%.


@@            Coverage Diff             @@
##           master    #2502      +/-   ##
==========================================
+ Coverage   66.87%   66.87%   +<.01%     
==========================================
  Files         327      327              
  Lines       42448    42439       -9     
  Branches     5266     5263       -3     
==========================================
- Hits        28387    28382       -5     
- Misses      13362    13365       +3     
+ Partials      699      692       -7
Flag          Coverage Δ
#smoketests   50.83% <50%> (-0.01%) ⬇️
#unittests    64.18% <40%> (ø) ⬆️

Impacted Files                         Coverage Δ
nipype/interfaces/afni/preprocess.py   81.26% <0%> (+0.23%) ⬆️
nipype/interfaces/base/core.py         88.87% <100%> (+0.03%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 27b33ef...cda56e9.

@effigies effigies added this to the 1.0.3 milestone Mar 26, 2018
@effigies (Member) left a comment

I'm tagging the changes I suggested in vanandrew#1 here.

I also made a couple other changes I saw while looking into this. This interface can be substantially reduced.

@@ -219,6 +219,7 @@ class AllineateInputSpec(AFNICommandInputSpec):
desc='output file from 3dAllineate',
argstr='-prefix %s',
genfile=True,
hash_files=False,
Good catch.

@@ -219,6 +219,7 @@ class AllineateInputSpec(AFNICommandInputSpec):
desc='output file from 3dAllineate',
argstr='-prefix %s',
genfile=True,
We can replace genfile with the newer name_template.

-        genfile=True,
+        name_template='%s_allineate',
+        name_source='in_file',

In the PR I sent you, I've resolved the issue with it not respecting the xor tag.
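The name_template/name_source pair derives a default output name from an input file's base name. A minimal stand-alone sketch of that templating (a hypothetical helper for illustration, not nipype's actual implementation):

```python
import os

def fill_name_template(name_template, source_path):
    """Derive a default output name: fill the template with the
    source file's stem (directory and extension stripped)."""
    stem, _ext = os.path.splitext(os.path.basename(source_path))
    return name_template % stem

# With name_template='%s_allineate' and name_source='in_file':
print(fill_name_template('%s_allineate', 'functional.nii'))
# functional_allineate
```

This is also why the doctest later in this PR shows `-prefix functional_allineate` appearing on the command line without the user setting out_file explicitly.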


if self.inputs.out_weight_file:
if isdefined(self.inputs.out_weight_file):
Undefined evaluates to False, so this isn't necessary here.
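To see why the isdefined() guard is redundant: a minimal stand-in for nipype's falsy Undefined sentinel (illustrative only, not the real traits class):

```python
class _Undefined:
    """Sentinel for an unset input; falsy like nipype's Undefined."""
    def __bool__(self):
        return False
    def __repr__(self):
        return '<undefined>'

Undefined = _Undefined()

out_weight_file = Undefined
# A plain truthiness check already skips the unset input,
# so wrapping it in isdefined() adds nothing here.
if out_weight_file:
    print('would handle weight file')
else:
    print('skipped: input undefined')
```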

@@ -519,7 +525,7 @@ def _list_outputs(self):

def _gen_filename(self, name):
if name == 'out_file':
return self._list_outputs()[name]
return self._gen_outfilename()
return None
We should be able to remove this method entirely, if we're switching to name_template/name_source.

@effigies (Member) commented

Fast merge. 😃

>>> allineate.inputs.reference = 'structural.nii'
>>> allineate.inputs.nwarp_fixmot = ['X', 'Y']
>>> allineate.cmdline
'3dAllineate -source functional.nii -nwarp_fixmotX -nwarp_fixmotY -prefix functional_allineate -base structural.nii'
A question: Is -prefix functional_allineate the correct default, or should it be functional_allineate.nii?
If it should have the extension, we should add keep_extension=True to the out_file trait spec.

@tsalo @salma1601 You might also be good people to chime in here.
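For comparison, a sketch of what keep_extension=True would change about the generated name (a hypothetical helper mirroring the templating behavior, not nipype's real code):

```python
import os

def default_out_name(template, source_path, keep_extension=False):
    """Fill the name template from the source file's stem; optionally
    re-append the source extension, as keep_extension=True would."""
    stem, ext = os.path.splitext(os.path.basename(source_path))
    name = template % stem
    return name + ext if keep_extension else name

print(default_out_name('%s_allineate', 'functional.nii'))
# functional_allineate
print(default_out_name('%s_allineate', 'functional.nii', keep_extension=True))
# functional_allineate.nii
```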

# Do not generate filename when excluded by other inputs
if trait_spec.xor and any(isdefined(getattr(self.inputs, field))
for field in trait_spec.xor):
return retval
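A self-contained sketch of the xor guard above, using illustrative field names and plain dicts rather than nipype's actual input-spec API:

```python
UNDEFINED = object()  # stand-in for nipype's Undefined sentinel

def isdefined(value):
    return value is not UNDEFINED

def gen_default_filename(inputs, trait_xor):
    """Skip generating a default output filename when any mutually
    exclusive (xor) input is already set; otherwise generate one."""
    if trait_xor and any(isdefined(inputs.get(f, UNDEFINED))
                         for f in trait_xor):
        return None  # excluded by another input: do not generate
    return 'out_file_allineate.nii'  # placeholder default name

# out_matrix set -> generation is skipped
print(gen_default_filename({'out_matrix': 'aff.1D'}, ('out_matrix',)))
# None

# nothing in the xor group set -> default name is generated
print(gen_default_filename({}, ('out_matrix',)))
# out_file_allineate.nii
```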
@satra @oesteban Does this constitute a full fix of #2506?

A Member replied:
@effigies - this is a good addition, but i'm not sure this addresses #2506, which does not have any xor. isn't it the case in #2506 that out_file should be populated if in_files is defined and set to some default otherwise?

i think this PR is fine, just not sure if it addresses #2506.

A Member replied:

Ah, right. I had it kind of backwards, but a similar fix to this should resolve #2506. I'll merge and propose a quick PR for that one.

@effigies (Member) commented
Ah, sorry. Forgot to make specs. Pushing it directly.

@effigies (Member) left a comment
Thanks. Will merge in 24 hours unless someone spots something and makes noise.

@effigies effigies merged commit fd49fd0 into nipy:master Apr 24, 2018
shnizzedy pushed a commit to FCP-INDI/C-PAC that referenced this pull request Nov 8, 2021