DTI `min_signal` #498

Merged 14 commits on Dec 12, 2014

Conversation

3 participants
@arokem (Member) commented Dec 11, 2014

This should deal with issues raised here:

#496

without reverting. It still needs a bit of a refactor to remove spurious checks and references to min_signal.

MrBago and others added some commits Dec 8, 2014

RF: Deal with very small signal, and with all-zero signal.
These two cases should be separated: if s0 is non-zero, and all of the d_sig is
zero, the eigenvalues should be high (3.0 by default, the diffusivity of water
in water at 37 deg C). If all of the signals are 0, we set the eigenvalues and
eigenvectors to 0.
BF: Set the zero-d_sig case to have np.eye(3) evecs.
Also, deal with setting of min_signal. This still needs a little bit more
cleanup.
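The case handling these commit messages describe can be sketched in a few lines. This is only an illustration: the function name `special_case_evals` and the unit convention for the 3.0 default are assumptions, not dipy's actual code.

```python
import numpy as np

# The "high" default eigenvalue named in the commit message: the
# diffusivity of water in water at 37 deg C (3.0, presumably in units
# of 10^-3 mm^2/s -- the units are an assumption here).
WATER_DIFFUSIVITY = 3.0

def special_case_evals(s0, d_sig):
    """Eigenvalues for the two degenerate cases described above.

    Returns None when the voxel is not a special case and should go
    through the regular tensor fit.
    """
    d_sig = np.asarray(d_sig)
    if np.all(d_sig == 0):
        if s0 == 0:
            # No signal at all: zero eigenvalues (identity eigenvectors).
            return np.zeros(3)
        # Non-zero b0 but fully attenuated diffusion signal:
        # isotropic, very high diffusivity.
        return np.full(3, WATER_DIFFUSIVITY)
    return None
```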

@arokem arokem changed the title from WIP: dti min to Obviating DTI `min_signal` Dec 11, 2014

@arokem (Member) commented Dec 11, 2014

I believe this is done now. I have removed the min_signal kwarg altogether, but would be happy to hear arguments for leaving it around. We should be able to handle that with a couple of lines of code in the fit method of TensorModel.

@arokem (Member) commented Dec 11, 2014

This supersedes #496 and #492.

@Garyfallidis (Member) commented Dec 11, 2014

Hi @arokem,

The change in the min_signal affects the cross_validation code and the test stalls in Travis.

Test k-fold cross-validation ... /home/travis/build/nipy/dipy/venv/lib/python2.7/site-packages/dipy/reconst/dti.py:1139: RuntimeWarning: divide by zero encountered in log
log_s = np.log(sig)
No output has been received in the last 10 minutes, this potentially indicates a stalled build or something wrong with the build itself.

Apart from that I would like to understand why the min_signal parameter should be part of the fit from now on? The previous idea was to calculate that automatically. Is this idea now looking impossible?

@arokem (Member) commented Dec 11, 2014

Yep - I am dealing with the cv code that throws that error. Looks like we
can't get rid of min_signal after all. Will check this in in 10 minutes.

The previous idea was repeated in each one of the fit_method functions
separately. May as well do it once up-front.


@arokem (Member) commented Dec 11, 2014

There are some more issues. Might take me a little bit longer...


arokem added some commits Dec 12, 2014

RF + TST: Added back min_signal, testing with DTI data (not DSI data!).
DTI requires a b0 measurement. This did not exist in the data that was
previously used for testing. For that reason, the eigenvalues in these tests
were always exceedingly low (set by min_diffusivity, I believe). I believe it
makes more sense to test the model fitting with DTI data: a single b-value,
including a b0 measurement.

@arokem arokem changed the title from Obviating DTI `min_signal` to DTI `min_signal` Dec 12, 2014

@arokem (Member) commented Dec 12, 2014

OK - the new refactor adds back min_signal. I believe that we still need it for situations in which one of the d_sig measurements is 0 and the others aren't. In this case, this measurement needs to be set to something other than 0, because otherwise it enters the fit procedure as log(0), and solving the linear equation becomes impossible (it results in a LinAlgError from the SVD).

At any rate - I hope that Travis agrees to test this for us. This should be it, but take a look and let me know what you think.
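The failure mode described here is easy to reproduce; the signal values and the min_signal floor below are made up for illustration:

```python
import numpy as np

sig = np.array([120.0, 80.0, 0.0, 95.0])  # one dropped measurement

# Taking the log directly turns the zero into -inf, which then breaks
# the linear least-squares step downstream.
with np.errstate(divide='ignore'):
    bad_log = np.log(sig)
print(np.isinf(bad_log).any())  # True

# Clipping at min_signal first keeps everything finite:
min_signal = 1.0
log_s = np.log(np.maximum(sig, min_signal))
print(np.isfinite(log_s).all())  # True
```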

@arokem (Member) commented Dec 12, 2014

Update: I have also run the tensor-related examples in the docs. They run and seem to produce reasonable results.

  dm = dti.TensorModel(gtab, 'LS')
  dtifit = dm.fit(data[0, 0, 0])
- assert_equal(dtifit.fa < 0.5, True)
+ assert_equal(dtifit.fa < 0.9, True)

@Garyfallidis (Member) commented Dec 12, 2014

Why did you change the value here from 0.5 to 0.9?

@arokem (Member) commented Dec 12, 2014

Because I changed the data used. This was previously using the b0-less DSI voxels.


@MrBago (Contributor) commented Dec 12, 2014

Should we change this back? I'd like to test with at least one b0-less data set.

@arokem (Member) commented Dec 12, 2014

How about testing both?

#501


# These are the actually interesting (non-zero d_sig and non-zero b0):
idx_to_fit = so.intersect1d(nz_s0, nz_d_sig)
to_fit = data_in_mask[idx_to_fit]
to_fit = np.maximum(to_fit, min_signal)
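The selection in this hunk can be traced with plain NumPy (assuming `so.intersect1d` behaves like `np.intersect1d`; the voxel values below are fabricated):

```python
import numpy as np

# Four "voxels"; the first column is the b0 measurement.
data_in_mask = np.array([
    [100.0, 90.0, 80.0],   # ordinary voxel
    [  0.0,  0.0,  0.0],   # no signal at all
    [100.0,  0.0,  0.0],   # non-zero b0, all-zero d_sig
    [ 90.0, 70.0,  0.0],   # one dropped diffusion measurement
])
b0s_mask = np.array([True, False, False])
min_signal = 1.0

nz_s0 = np.nonzero(data_in_mask[..., b0s_mask])[0]
nz_d_sig = np.nonzero(data_in_mask[..., ~b0s_mask])[0]

# Only voxels 0 and 3 have both a non-zero b0 and some non-zero d_sig:
idx_to_fit = np.intersect1d(nz_s0, nz_d_sig)
print(idx_to_fit)  # [0 3]

# Clipping then repairs the single dropped measurement in voxel 3:
to_fit = np.maximum(data_in_mask[idx_to_fit], min_signal)
print(to_fit.min())  # 1.0
```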

@Garyfallidis (Member) commented Dec 12, 2014

Okay, so now your strategy is to make sure that no signal in the data is less than min_signal, and if it is, you set it equal to min_signal.

@arokem (Member) commented Dec 12, 2014

Yes - I came to the conclusion that would be the best way about it. In
particular, otherwise some voxels might have a single zero somewhere in
their signal (one of our tests had that), and these would raise an error in
the SVD because log(signal) would evaluate to -inf.

BTW - could you hold off on merging this for another hour or so? I need to
check in with Bago about something, before we go through with this. I'll
let you know when you have the all-clear on this. Sorry about this -
refactoring here has uncovered some unexpected bumps.


@Garyfallidis (Member) commented Dec 12, 2014

Sure, no problem. See also my latest post.

@Garyfallidis (Member) commented Dec 12, 2014

Hi @arokem, this looks correct. But this change raises an important concern: if you are calling peaks_from_model with a TensorModel, you cannot change the min_signal value, so you are stuck with the default.

@Garyfallidis (Member) commented Dec 12, 2014

For this purpose, and until we get a replacement for peaks_from_model (probably something like peaks_from_fits), you may consider adding back the min_signal option in the TensorModel and transferring it to the fit method through a self.min_signal variable. But that's the only thing you need to change. Let me know what you think.
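The suggestion could look roughly like this. It is a sketch only: the default floor, the validation, and the trivial fit body are assumptions, not the actual dipy code.

```python
import numpy as np

class TensorModel:
    """Sketch: min_signal is accepted at construction time, so callers
    such as peaks_from_model, which only see the model object, can
    still control it."""

    def __init__(self, gtab, fit_method='WLS', min_signal=None):
        if min_signal is not None and min_signal <= 0:
            raise ValueError("min_signal must be positive")
        self.gtab = gtab
        self.fit_method = fit_method
        self.min_signal = min_signal

    def fit(self, data):
        # Fall back to a hypothetical default floor when none was given.
        floor = self.min_signal if self.min_signal is not None else 1.0
        data = np.maximum(np.asarray(data, dtype=float), floor)
        # ... the real method would run the tensor fit on `data` here ...
        return data

model = TensorModel(gtab=None, min_signal=0.5)
print(model.fit([0.0, 2.0]))  # [0.5 2. ]
```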

@arokem (Member) commented Dec 12, 2014

OK - talked to Bago - I think we have a solution to all the concerns (and a much simplified one...), including the issue that you raise - we both agreed with what you suggest.


@Garyfallidis (Member) commented Dec 12, 2014

Cool to hear. Show me the code! 👯

arokem added some commits Dec 12, 2014

RF: simplify this, move setting of `min_signal` to __init__ function.
Almost there - still need to deal with that hard-coded minimum for cases where
all the signal is 0. Need to think about that.

@arokem (Member) commented Dec 12, 2014

It's there! Unless I am missing something, I think this is very close.

The one remaining thing is what to set min_signal to when the data is
all-zeros. A rather theoretical possibility (what's the point of doing
that?), but still. @MrBago - I think that we missed that possibility in our
conversation. What do you think? Maybe raise an informative error when that
happens? Also, not that I am not copying the data here, because I am
looking at data_in_mask, which I believe should already be a copy, so I
thought we could avoid making one more...


@arokem (Member) commented Dec 12, 2014

"... note that I am not copying... "


@Garyfallidis (Member) commented Dec 12, 2014

Okay I am waiting for Travis and then will merge this. Agreed @arokem or should I wait a bit more? You can always add more checks in another PR.

@arokem (Member) commented Dec 12, 2014

Yeah - seems OK to me. That hard-coded constant is so theoretical that we can deal with it in a separate PR, if needed.


@Garyfallidis (Member) commented Dec 12, 2014

Cool, thx!

Garyfallidis added a commit that referenced this pull request Dec 12, 2014

@Garyfallidis Garyfallidis merged commit 78b0b1c into nipy:master Dec 12, 2014

1 check passed

continuous-integration/travis-ci The Travis CI build passed

@arokem arokem referenced this pull request Jan 4, 2015

Closed

ValueError in RESTORE #439

@arokem arokem referenced this pull request Jun 30, 2015

Merged

DKI fitting (DKI PR2) #664
