ENH: Drop missing model inputs #183
Conversation
Also, I'm wondering how to handle …
Codecov Report
@@ Coverage Diff @@
## master #183 +/- ##
=========================================
- Coverage 76.7% 76.4% -0.31%
=========================================
Files 18 18
Lines 996 1017 +21
Branches 171 177 +6
=========================================
+ Hits 764 777 +13
- Misses 146 150 +4
- Partials 86 90 +4
fitlins/interfaces/nistats.py (outdated)
                   if v != 0 and n not in all_regressors]
                  for row in contrast['weights']])
if missing:
    weights = None
If we `continue`, then the contrast list is just shorter, and you don't need to deal with `None`s.

-    weights = None
+    continue
Would this work? We'll need to make sure that there isn't some other place where we're separately generating a list that should match 1-1, but no longer would.
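The suggestion can be sketched in isolation (a minimal, hypothetical version of the loop; the regressor and contrast names are invented, and this is not fitlins' actual code):

```python
# Sketch: skip contrasts that reference regressors absent from the design
# matrix, rather than carrying a `weights = None` sentinel forward.
all_regressors = ["rt", "gain"]  # columns actually present (assumed example)
contrasts = [
    {"name": "rt", "weights": [{"rt": 1}]},
    {"name": "loss", "weights": [{"loss": 1}]},  # 'loss' is missing
]

kept = []
for contrast in contrasts:
    # A contrast is "missing" if any nonzero weight names an absent regressor
    missing = any(
        v != 0 and n not in all_regressors
        for row in contrast["weights"]
        for n, v in row.items()
    )
    if missing:
        continue  # the contrast list is just shorter; no Nones to handle
    kept.append(contrast)

print([c["name"] for c in kept])  # → ['rt']
```

The trade-off raised below is exactly this: any parallel list built elsewhere that assumed a 1-to-1 match with the full contrast list would silently fall out of sync.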
Was this not a sufficient place to block the production of contrasts involving missing regressors?
I have to test more, but this seems to have worked for me. The main related issue is that if only one image per run remains, a second-level model can't be run, and it will crash. This is the related issue: neuroscout/neuroscout#641
If I'm deciphering correctly, as far as fitlins is concerned, we need to add pass-through for random effects models, and implement fixed effects models as well? The rest of the issue is more about the spec and how to specify random vs. fixed effects.
Yes... I need to re-read that, but it's also time to make dinner.
Sounds more appealing, tbh.
Alright, cool, this is ready for review. We can handle the fixed/random stuff in another PR, I think. That may end up being more complicated. Well, at least the spec stuff; we could probably bang out the pass-through fairly quickly (wonder if that should be the default option + a warning, and if it needs a CLI flag).
Some very preliminary comments. I'll get back to this this afternoon.
fitlins/interfaces/utils.py (outdated)
    output_spec = DynamicTraitedSpec

-   def __init__(self, fields=None):
+   def __init__(self, fields=None, **kwargs):
        super(MergeAll, self).__init__()
`drop_missing` is relevant to the purpose of this interface in the overall workflow, but isn't very obvious in terms of functionality when considered on its own. What if we invert it to a `check_lengths` kwarg and just save it as an instance variable?
def __init__(self, fields=None, check_lengths=True):
    super(MergeAll, self).__init__()
    self._check_lengths = check_lengths
    ...
Then you can drop all of the input spec changes.
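The pattern can be sketched in plain Python (illustrative only; the real `MergeAll` is a nipype interface with traited specs, and this merge logic is invented for the example):

```python
# Sketch of the suggested pattern: take `check_lengths` as a constructor
# kwarg and store it on the instance, instead of threading it through the
# input spec.
class MergeAll:
    def __init__(self, fields=None, check_lengths=True):
        self._fields = fields or []
        self._check_lengths = check_lengths

    def merge(self, lists):
        # Optionally enforce that all input lists match 1-to-1 in length
        if self._check_lengths:
            lengths = {len(lst) for lst in lists}
            if len(lengths) > 1:
                raise ValueError(f"Mismatched list lengths: {sorted(lengths)}")
        return [item for lst in lists for item in lst]

# With checking disabled, ragged inputs merge without complaint
merger = MergeAll(fields=["stat_maps"], check_lengths=False)
print(merger.merge([[1, 2], [3]]))  # → [1, 2, 3]
```

Since the flag affects only the error condition, not the outputs, keeping it out of the input spec also avoids confusing nipype's input-hash-based caching.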
Force-pushed 4606e4f to a79f83f.
I reverted the commit about pass-through, since it seems we need to implement …
@effigies, if the tests pass and you approve, this should be ready. I'll tackle adding …
The main conflict is where to put the …
I think it needs to go in both DesignMatrix and FirstLevelModel, right? I'll review this now.
Hmm. Looks like you're not using `drop_missing` in FLM, so I guess it all just falls out of the design matrix?
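The "falls out of the design matrix" idea can be sketched without nistats (a hedged, stand-alone illustration; the column names are invented and this is not the actual DesignMatrix code):

```python
# Sketch: detect "empty" (all-zero) design-matrix columns and drop them,
# so downstream model fitting never sees the missing regressors.
design = {
    "rt":   [0.1, 0.5, 0.3],
    "loss": [0.0, 0.0, 0.0],  # regressor with no events in this run
}

nonempty = {
    name: col for name, col in design.items()
    if any(v != 0 for v in col)
}

print(sorted(nonempty))  # → ['rt']
```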
fitlins/interfaces/nistats.py (outdated)
@@ -37,6 +43,9 @@ def _run_interface(self, runtime):
    import nibabel as nb
    from nistats import design_matrix as dm
    info = self.inputs.session_info
    drop_missing = self.inputs.drop_missing
`bool(Undefined) == False`.
-    drop_missing = self.inputs.drop_missing
+    drop_missing = bool(self.inputs.drop_missing)
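The point can be illustrated with a minimal stand-in for the `Undefined` singleton (the real one comes from nipype's traits machinery; this class only mimics its falsiness):

```python
# Because bool(Undefined) is False, coercing the trait with bool() already
# maps "unset" to False, so no explicit isdefined() branch is needed.
class _UndefinedType:
    def __bool__(self):
        return False

    def __repr__(self):
        return "<undefined>"

Undefined = _UndefinedType()

drop_missing = bool(Undefined)  # trait was never set
print(drop_missing)  # → False
```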
fitlins/interfaces/nistats.py (outdated)
@@ -37,6 +43,9 @@ def _run_interface(self, runtime):
    import nibabel as nb
    from nistats import design_matrix as dm
    info = self.inputs.session_info
    drop_missing = self.inputs.drop_missing
    if not isdefined(drop_missing):
        drop_missing = False
Then remove these lines.
contrast_metadata.append(
    {'contrast': name,
     'stat': contrast_type,
     **out_ents}
)
maps = flm.compute_contrast(
    weights, contrast_type, output_type='all')
Why?
I think it just made sense to me to do that right before looping over it. No functional difference.
I think only in …
I'm getting an error I don't understand: …
Are you reusing a working directory?
No, I'm not. This doesn't seem to happen on …
Ah, I see what it is, …
I set …
From the nipype perspective, the contents of the outputs will not purely depend on the contents of the inputs, so you might get a situation where you re-run, adding or removing a … You can still keep the …
Okay, sure, then what I don't understand is why we don't do the same for …
Yeah, strictly speaking, probably. It doesn't change the outputs, but instead changes the error condition, so it's a little less problematic. Also, by the time things show up there, they will have had to go through DesignMatrix. So... whichever you want.
Co-Authored-By: Chris Markiewicz <effigies@gmail.com>
Okay, your suggestions made sense, and I applied them. Regarding the input spec, I'll leave it as we have it. I'll test this out on NS tomorrow.
fitlins/interfaces/nistats.py (outdated)
variance_maps.append(_variances[0])
zscore_maps.append()
pvalue_maps.append()
stat_maps.append()
I think these three lines will cause problems?
Ah, yes, I thought I reverted that commit.
Cool. Assuming tests pass, I'm good with this. This has been tested for your use case?
Hmm, there seems to be a minor problem with the outputting of plots now; let me investigate.
Force-pushed ec9d42c to b78e34d.
Okay, tested and working. My linter suggested I change …
Fixes #176

- Missing model inputs can be dropped (via `--drop-missing`). By default, it crashes.
- In `FirstLevelModel`, empty columns are detected in design matrix.
- Length checking can be disabled in `MergeAll`.