
BF: SFM prediction with mask #2041

Merged: 6 commits into dipy:master on Jan 26, 2020

Conversation

@arokem (Contributor) commented Jan 16, 2020

This fixes a bug in the way that SFM does signal prediction when fit with a mask. This is something I just stumbled upon earlier today while I was working with @dPys on nipreps/dmriprep#62. This should fix the issues we were seeing there.

@dPys (Contributor) commented Jan 16, 2020

@arokem - this is great! Are there any further runtime optimizations that could be made for the predict method of the isotropic model specifically? (Or maybe a better question: which particular lines take the longest to run at the moment?)

codecov bot commented Jan 16, 2020

Codecov Report

Merging #2041 into master will increase coverage by 0.01%.
The diff coverage is 100%.


@@            Coverage Diff             @@
##           master    #2041      +/-   ##
==========================================
+ Coverage   91.13%   91.15%   +0.01%     
==========================================
  Files         246      246              
  Lines       31546    31611      +65     
  Branches     3307     3314       +7     
==========================================
+ Hits        28750    28815      +65     
  Misses       2081     2081              
  Partials      715      715
| Impacted Files | Coverage Δ |
| --- | --- |
| dipy/reconst/tests/test_sfm.py | 98.71% <100%> (+0.48%) ⬆️ |
| dipy/reconst/sfm.py | 91.97% <100%> (+1.06%) ⬆️ |

@arokem (Contributor, Author) commented Jan 16, 2020

@dPys : without a doubt, the most time is spent in this line: https://github.com/dipy/dipy/blob/master/dipy/reconst/sfm.py#L433, which is where sklearn's ElasticNet fit is being called. Unfortunately, I don't think there's much we can do about it.
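For anyone who wants to confirm this on their own data, here is a minimal profiling sketch; `sf_model` and `data` are placeholder names for an already-constructed SparseFascicleModel and a DWI array, not something from this PR:

```python
import cProfile
import pstats

# Hypothetical check: profile a fit to see where the time goes.
# `sf_model` and `data` are assumed to exist already (placeholder names).
profiler = cProfile.Profile()
profiler.enable()
sf_fit = sf_model.fit(data)
profiler.disable()

# Sort by cumulative time; the sklearn ElasticNet fit is expected to dominate.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```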

@dPys (Contributor) commented Jan 16, 2020

@arokem -

A few more questions:

  1. What aspect(s) of signal prediction would be lost by reducing the L1 penalty ratio (i.e., making it closer to, if not equal to, 0)?

  2. Is the estimator fit at every voxel in the ROI? What other data dimensionality reduction strategies might improve the fit and/or speed up prediction? Currently, prediction with this model is probably much too slow to lend itself to predicting from every vector coordinate in the case of head motion correction.

  3. Maybe related to (2): could a T1w image, registered to the B0 template, be incorporated into this somehow?

And I'm curious to hear from anyone else who might want to chime in.

@arokem (Contributor, Author) commented Jan 21, 2020

> What aspect(s) of signal prediction would be lost by reducing the L1 penalty ratio (i.e., making it closer to, if not equal to, 0)?

That depends on the data. In my experience, signal prediction can be almost equally accurate with various ratios, leading to quite different fODFs.

> Is the estimator fit at every voxel in the ROI? What other data dimensionality reduction strategies might improve the fit and/or speed up prediction? Currently, prediction with this model is probably much too slow to lend itself to predicting from every vector coordinate in the case of head motion correction.

Yes. As suggested by @mattcieslak, one way to make this go much faster is to provide an initialized sklearn Ridge object as the input to the solver keyword argument when initializing the SparseFascicleModel object.
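A minimal sketch of that approach, assuming `gtab`, `data`, and `mask` are already loaded, and with a placeholder `alpha` rather than a recommended value:

```python
from sklearn.linear_model import Ridge
from dipy.reconst import sfm

# The alpha below is only a placeholder; `gtab`, `data`, and `mask` are
# assumed to have been loaded elsewhere.
ridge = Ridge(alpha=0.001, fit_intercept=False)

# Passing the pre-configured Ridge object as the solver replaces the default
# per-voxel ElasticNet fit.
sf_model = sfm.SparseFascicleModel(gtab, solver=ridge)
sf_fit = sf_model.fit(data, mask=mask)
prediction = sf_fit.predict(gtab)
```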

> Maybe related to (2): could a T1w image, registered to the B0 template, be incorporated into this somehow?

I am not sure. There might be different levels of regularization that work better in different tissue types, but it would require quite a bit of empirical work to sort this out. I would be inclined to first try one ridge regularization parameter everywhere and see how that works.

@skoudoro added this to the 1.2 milestone Jan 21, 2020

@skoudoro (Member) left a comment:

Hi @arokem, overall it looks good, but there is one thing I do not get. See below.

```python
if mask is None:
    mask = np.ones(data.shape[:-1], dtype=bool)
# Turn it into a 2D thing:
if len(mask.shape) > 0:
```
@skoudoro (Member):

I do not understand this part. Can you tell us more about it?

@arokem (Contributor, Author):

Yes: this is just to deal with the (rather unusual) case that your data is a one-dimensional vector representing one voxel's worth of data. In that case, `data.shape[:-1]` is `()` and you might run into trouble here.
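A quick illustration of the shape behavior being guarded against (just a sketch with dummy arrays):

```python
import numpy as np

# A single voxel's signal is 1-D, so data.shape[:-1] is the empty tuple:
single_voxel_data = np.zeros(64)
print(single_voxel_data.shape[:-1])   # ()

# np.ones(()) is a 0-d array: len(mask.shape) == 0, so the 2-D reshaping
# branch shown above is skipped for it.
mask = np.ones(single_voxel_data.shape[:-1], dtype=bool)
print(mask.shape, mask.ndim)          # () 0
```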

@skoudoro (Member):

OK, I see, thanks. I wonder if there is a better way to do that, but it is fine like this too.

@skoudoro (Member) commented:

Anyone available to review this PR? Do you want to have a last look, @dPys?

I will wait until Sunday to merge it.

@skoudoro (Member) commented:

OK, thanks @arokem, merging.

@skoudoro merged commit c81a192 into dipy:master on Jan 26, 2020
@dPys (Contributor) commented Jan 27, 2020

Hi @skoudoro -- sorry to chime back into this conversation late, but thanks for merging. I took another look at all of @arokem's edits and they indeed looked good to go.

Cheers,
@dPys
