IVIM #1058

Merged (96 commits, Sep 27, 2016)

Changes from 2 commits
Commits
11b2700
Initial ivim simulations and tests
May 25, 2016
951c19f
fixes for tests
May 25, 2016
5e19275
testing with Federau values
May 26, 2016
b8570fb
initial guess added
May 26, 2016
49eaad3
Removed redundant imports
May 30, 2016
449f119
Added guess parameters as keyword to fit method
May 30, 2016
1ce3c3d
Added tests for multivoxel fit and guess
May 30, 2016
c955cab
pass S0 to multi_tensor
May 30, 2016
834c2da
Replaced nlls with leastsq, en route to two stage fit
Jun 2, 2016
00a485d
fixed import error
Jun 2, 2016
0cae14b
minimize as the fit function
Jun 2, 2016
3f23734
Added bounds and other params for fit
Jun 4, 2016
f7e7823
leastsq is back
Jun 5, 2016
f51b297
Added tolerances for minimize
Jun 9, 2016
02c3f2c
Implemented two stage fitting
Jun 9, 2016
d65b1f8
Tests for two stage, optimize omitted
Jun 9, 2016
41820ac
Generate b vectors using disperse_charges
Jun 2, 2016
4bac6e9
Fetcher for ivim data, needs md5
Jun 12, 2016
ae90bd1
x0 should be flattened
quantshah Jun 14, 2016
927e981
Added read_ivim to __init__
quantshah Jun 15, 2016
762c34e
Example for the ivim module
quantshah Jun 15, 2016
bd59df4
Use dipy.core.optimize for optimization
quantshah Jun 15, 2016
fc44889
added f guess
quantshah Jun 15, 2016
27bb444
Some edits to test parameters
quantshah Jun 15, 2016
f8a89a4
f_guess = 1 - S/S_
quantshah Jun 15, 2016
c74ad47
Resolved conflict in generate_bvecs
Jun 20, 2016
78ba6ff
One stage fitting works with optimize
Jun 20, 2016
84d302d
Error function returns only residual, sq in minimize
Jun 20, 2016
38f4bf0
D* guess is 10 times D, added S0 estimation from dti
Jun 20, 2016
230ce3e
Added predict and test for predict
Jun 20, 2016
5437825
Guess f using DTI
Jun 21, 2016
de22db5
Tests for two stage
Jun 21, 2016
a431db3
f guess update
Jun 22, 2016
f6a9574
pep-8 for predict
quantshah Jun 23, 2016
fe0c406
Merge branch 'master' of https://github.com/nipy/dipy into ivim_dev
quantshah Jun 23, 2016
93ba045
Upper bound for S0 is not 1
quantshah Jun 24, 2016
61e4ac5
Use generate_bvecs from gradients
quantshah Jun 24, 2016
4826103
Flatten x0 and add guess for S0 as S(b=0)
quantshah Jun 24, 2016
7951e6b
S0 values according to real data
quantshah Jun 24, 2016
7885c7f
Separate func for D_guess
quantshah Jun 24, 2016
7bae1a4
Changed test parameters
quantshah Jun 24, 2016
d1c933b
Added maxiter for scipy version less than 0.12
quantshah Jun 27, 2016
e08d62f
Updated fetcher init for better coverage
quantshah Jun 27, 2016
4a9c1a5
jac keyword argument should be None
quantshah Jun 28, 2016
0407072
Added Jacobian
quantshah Jun 28, 2016
8118407
Corrected the -ve sign in derv
quantshah Jun 28, 2016
f581f3e
Added the test with Jacobian
quantshah Jun 28, 2016
2c8f244
Updated examples, using two_stage and gray colormap
quantshah Jun 29, 2016
d56e525
Updating documentation
quantshah Jul 1, 2016
200162e
Added options keyword
quantshah Jul 6, 2016
ebea630
Pre allocation of result array
quantshah Jul 6, 2016
a352e9a
Added fit statistics
quantshah Jul 7, 2016
a9f88e7
Changed fitting algorithm
quantshah Jul 7, 2016
40e5974
Removed minimize, improved Jacobian and two-stage guess
quantshah Jul 10, 2016
24f6240
Relative import to absolute
quantshah Jul 12, 2016
93fd782
S0 normalization added
quantshah Jul 12, 2016
655b3d4
Updated doc for test and removed unnecessary func in test
quantshah Jul 12, 2016
88a8464
One parameter for all and doc updates
quantshah Jul 12, 2016
7f9ddea
Updated example
quantshah Jul 12, 2016
a2be743
Added basic Jacobian test and fixed pep8
quantshah Jul 12, 2016
295dd88
least_squares for scipy==0.17
quantshah Jul 13, 2016
3d56fe6
Updated the Jacobian
quantshah Jul 13, 2016
ebba0f2
All initializations in IvimModel, updated examples
quantshah Jul 19, 2016
89efc60
Optimized the Jacobian function
quantshah Jul 19, 2016
af11496
Bounds for Scipy 0.17, test and example updated
quantshah Jul 21, 2016
6202f99
Refactoring of checks in init and more tests
quantshah Jul 21, 2016
8af6ef7
Pre-initialize Jacobian array, lower thresholds
quantshah Jul 21, 2016
4d8b760
Documentation updates
quantshah Jul 21, 2016
3babc2c
Removed Jac, one stage, updated docs and example
quantshah Jul 21, 2016
89c39ae
bounds on f
quantshah Jul 21, 2016
00c0201
Major refactoring, used multi_voxel decorator, updated tests
quantshah Jul 22, 2016
4fe12ef
Moved all fitting in one class under fit
quantshah Jul 22, 2016
283d166
Tests for S0=1000
quantshah Jul 22, 2016
f28c609
Polishing final documentation
quantshah Jul 22, 2016
98af7b7
docstring of IvimModel init updated
quantshah Jul 27, 2016
ac34b78
Removed x0. Estimated by 'estimate_x0'
quantshah Jul 28, 2016
3e34b1c
Check unfeasible x0 values
quantshah Jul 28, 2016
fbff13a
Updated example and documentation
quantshah Aug 1, 2016
0d2fb35
Removed execution of fetcher in docstring for test_bounds
quantshah Aug 1, 2016
a78e9d9
Added sphinx code directive
quantshah Aug 1, 2016
31a784b
pep8 fixes
quantshah Aug 1, 2016
625b6a8
Added test statement for mask and bounds
quantshah Aug 1, 2016
80efaab
Added tests for predict, check_bounds and get_item
quantshah Aug 2, 2016
7d014d6
Test for get_item and shape
quantshah Aug 2, 2016
3f99e3f
Changed ivim function to predict with gtab as parameter.
quantshah Aug 2, 2016
5da4913
Interpolation set to None
quantshah Aug 2, 2016
2f9134d
replaced data[0] with mean of data[b0_mask]
quantshah Aug 2, 2016
295f987
Added functions for getting individual parameters
quantshah Aug 2, 2016
d38feca
Updated examples
quantshah Aug 2, 2016
ca13dee
removed unnecessary import
quantshah Aug 2, 2016
d21d90f
Updated example
quantshah Aug 2, 2016
3f2bdb3
Faster tests: Defined global fits
quantshah Aug 2, 2016
a6441b2
Estimate x0 using linear log fit
quantshah Aug 3, 2016
84cce70
Added test for multiple b0 signals
quantshah Aug 3, 2016
6913b8e
Added check and test for no b0 case
quantshah Aug 3, 2016
e07a989
Set interpolation to nearest for better visualization
quantshah Aug 3, 2016
157 changes: 157 additions & 0 deletions dipy/reconst/ivim.py
@@ -0,0 +1,157 @@
#!/usr/bin/python
Contributor:
I don't think we need the shebang here, because this never gets run as a module.

""" Classes and functions for fitting ivim model """
from __future__ import division, print_function, absolute_import
import warnings
Contributor:
Imported but not used

import functools
Contributor:
Imported but not used

import numpy as np
from scipy.optimize import curve_fit

from dipy.core.gradients import gradient_table
from dipy.reconst.base import ReconstModel
from dipy.reconst.dti import _min_positive_signal


def ivim_function(bvals, S0, f, D_star, D):
Contributor:
Documentation?

S = np.vectorize(lambda b, S0, f, D_star, D: S0 *
Contributor:
Just curious from an implementation standpoint why you are using vectorize and lambda rather than straight up evaluation here.

Contributor Author:
Ah, I was thinking of vectorizing the code and tried this initially. @arokem pointed out that right now I don't need to worry about that.

(f * np.exp(-b * D_star) + (1 - f) * np.exp(-b * D)))
return S(bvals, S0, f, D_star, D)
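
For comparison, a minimal sketch of the straight-up evaluation the reviewer has in mind (the name ivim_function_direct is only illustrative); NumPy broadcasting already handles a whole array of b-values, so no np.vectorize or lambda is needed:

import numpy as np

def ivim_function_direct(bvals, S0, f, D_star, D):
    # Evaluate the bi-exponential IVIM model over the full bvals array at once.
    bvals = np.asarray(bvals)
    return S0 * (f * np.exp(-bvals * D_star) + (1 - f) * np.exp(-bvals * D))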


class IvimModel(ReconstModel):
"""Ivim model
Contributor:
Somewhere in the docs, we will want to spell out what "IVIM" stands for. This might be a good place to do that.

"""

def __init__(self, gtab, fit_method="NLLS", *args, **kwargs):
Contributor:
This seems to "inherit" a lot of its structure from the DTI model. I don't think that we need the fit_method or the *args and **kwargs inputs here. In particular, you can move the code that is currently in nlls_fit into the fit method of this class.

Contributor Author:
Yes, I have followed the DTI model; I thought this was a good way to keep the fitting routines separate if we decide to have multiple fitting routines. If we decide to use only one routine [curve_fit or leastsq (which seems to be at the center of all fitting routines)], then I can insert the nlls_fit code here directly. I'm now working on leastsq as a separate function similar to nlls_fit. Once I complete the implementation and write the tests, I can move the code into the fit method if we decide to use just one fitting routine.
What do you suggest?

Contributor:
Add a key-word argument for the split_b_val that we use to split the data for the two-stage fitting.
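
A hedged sketch of what such a split_b_val two-stage fit could look like (this is not the code in this PR; the two-stage implementation arrived in later commits, and the helper name two_stage_fit, the default split of 200 s/mm^2 and the D* starting value are only illustrative; ivim_function is the model function defined above):

import numpy as np
from scipy.optimize import curve_fit

def two_stage_fit(bvals, signal, split_b_val=200.0):
    # Stage 1: above split_b_val the perfusion term is negligible, so a
    # log-linear fit of the high-b data gives D and an extrapolated intercept.
    high_b = bvals > split_b_val
    slope, log_intercept = np.polyfit(bvals[high_b], np.log(signal[high_b]), 1)
    D = -slope
    S0 = signal[bvals == 0].mean()
    # The perfusion fraction is the part of the b=0 signal that the
    # extrapolated diffusion-only component does not explain.
    f_guess = 1 - np.exp(log_intercept) / S0
    # Stage 2: refine all four parameters, seeded with the stage-1 estimates.
    p0 = [S0, f_guess, 0.01, D]
    popt, pcov = curve_fit(ivim_function, bvals, signal, p0=p0)
    return popt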

"""
An IVIM model
Contributor:
Or here.


Parameters
----------
gtab : GradientTable class instance
fit_method : Non linear least squares

References
----------
.. [1] Le Bihan, Denis, et al. "Separation of diffusion
and perfusion in intravoxel incoherent motion MR
imaging." Radiology 168.2 (1988): 497-505.
.. [2] Federau, Christian, et al. "Quantitative measurement
of brain perfusion with intravoxel incoherent motion
MR imaging." Radiology 265.3 (2012): 874-881.
"""
ReconstModel.__init__(self, gtab)

if not callable(fit_method):
try:
fit_method = common_fit_methods[fit_method]
except KeyError:
e_s = '"' + str(fit_method) + '" is not a known method '
raise ValueError(e_s)
self.fit_method = fit_method # Add more fit methods (check dti)
self.args = args
self.kwargs = kwargs
self.min_signal = self.kwargs.pop('min_signal', None)
if self.min_signal is not None and self.min_signal <= 0:
e_s = "The `min_signal` key-word argument needs to be strictly"
e_s += " positive."
raise ValueError(e_s)

def fit(self, data, mask=None):
""" Fit method of the Ivim model class

Parameters
----------
data : array
The measured signal from one voxel.
Contributor:
This can be signal measured in more than one voxel, right?

Contributor Author:
@arokem The multi_voxel decorator takes care of it. While writing the documentation, I am changing this to "The measured signal." and explaining that the multi_voxel decorator handles the rest.


mask : array
A boolean array used to mark the coordinates in the data that
should be analyzed that has the shape data.shape[:-1]

"""
if mask is None:
# Flatten it to 2D either way:
data_in_mask = np.reshape(data, (-1, data.shape[-1]))
else:
# Check for valid shape of the mask
if mask.shape != data.shape[:-1]:
raise ValueError("Mask is not the same shape as data.")
mask = np.array(mask, dtype=bool, copy=False)
data_in_mask = np.reshape(data[mask], (-1, data.shape[-1]))

if self.min_signal is None:
min_signal = _min_positive_signal(data)
else:
min_signal = self.min_signal

data_in_mask = np.maximum(data_in_mask, min_signal)
# Lookup design matrix for vectorization ?
params_in_mask = self.fit_method(data_in_mask, self.gtab)
Contributor:
Insert the nlls_fit code here.


if mask is None:
out_shape = data.shape[:-1] + (-1, )
ivim_params = params_in_mask.reshape(out_shape)
else:
ivim_params = np.zeros(data.shape[:-1] + (4,))
ivim_params[mask, :] = params_in_mask

return IvimFit(self, ivim_params)


class IvimFit(object):

def __init__(self, model, model_params):
""" Initialize a IvimFit class instance.
Parameters
----------
The model parameters are S0, f, D, D_star
"""
self.model = model
self.model_params = model_params

def __getitem__(self, index):
model_params = self.model_params
Contributor:
This method is not currently being used in the tests. Please write a test that uses it. Be sure to test the corner cases (e.g., that an IndexError gets raised for invalid indices). Be sure to test this both in the single and multi voxel case.

N = model_params.ndim
if type(index) is not tuple:
index = (index,)
elif len(index) >= model_params.ndim:
raise IndexError("IndexError: invalid index")
index = index + (slice(None),) * (N - len(index))
return type(self)(self.model, model_params[index])

@property
def shape(self):
return self.model_params.shape[:-1]
Contributor:
This is not currently being tested. Please write a test that checks the shape attribute.
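
A test covering both __getitem__ and the shape attribute might look like the sketch below (not part of this PR; it builds an IvimFit directly from a synthetic parameter array instead of running a fit, which is enough to exercise the indexing and shape logic):

import numpy as np
from numpy.testing import assert_raises
from dipy.reconst import ivim

def test_ivim_fit_indexing_and_shape():
    # A 2x2x1 volume with the four IVIM parameters on the last axis.
    params = np.zeros((2, 2, 1, 4))
    fit = ivim.IvimFit(None, params)
    assert fit.shape == (2, 2, 1)
    # Indexing drops voxel axes but keeps the parameter axis.
    assert fit[0].shape == (2, 1)
    assert fit[0, 0, 0].model_params.shape == (4,)
    # An index with as many entries as params.ndim should raise.
    assert_raises(IndexError, fit.__getitem__, (0, 0, 0, 0))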



def nlls_fit(data, gtab, jac=False):
"""
Fit the ivim params using non-linear least-squares.

Parameters
----------
data : array ([X, Y, Z, ...], g)
Data or response variables holding the data. Note that the last
Contributor:
I am not sure what "response variables holding the data" means

dimension should contain the data. It makes no copies of data.

jac : bool
Contributor:
We should probably drop the jac key-word argument. I don't think that curve_fit allows passing a Jacobian matrix anyway

Contributor:
If we replace curve_fit with another function (e.g., scipy.optimize.leastsq), it might be worth figuring out what the Jacobian function is. It might speed up the calculations quite a bit.

Contributor:
Agreed. I think it would speed it up quite a bit. I'm not sure which optimization routine is best. I think scipy.optimize.leastsq makes it easy to try a few though.

Contributor Author:
Should I replace the current fitting routine with leastsq, or have a different fitting function and let the user choose the fit_method (for example "nlls" or "least_sq")?

Contributor:
I think leastsq is the most flexible one. I believe one of the inputs is the algorithm so maybe the user should just input that (and have a reasonable default)?

Contributor Author (@quantshah, May 30, 2016):
@etpeterson
From https://python4mpia.github.io/fitting_data/least-squares-fitting.html

curve_fit is part of scipy.optimize and a wrapper for scipy.optimize.leastsq that overcomes its poor usability. Like leastsq, curve_fit internally uses a Levenburg-Marquardt gradient method (greedy algorithm) to minimise the objective function.

Similarly from https://lmfit.github.io/lmfit-py/model.html

With scipy, such problems are commonly solved with scipy.optimize.curve_fit, which is a wrapper around scipy.optimize.leastsq.

Curve_fit uses leastsq and allows for more flexibility. So having two fitting routines (curve_fit and leastsq) might be redundant.

Also, the type of noise in IVIM data is Gaussian I presume.

Contributor:
I'd welcome input from others on this, but looking at the documentation here http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html I'd say that minimize looks to be the most mature and flexible routine. I think the big question is which algorithm (lm, dogbox, BFGS, etc.) is best (fast and robust) for fitting these data. Something with bounds and the possibility of a Jacobian are the major requirements, so for now pick one that allows for that and make it work. Once it's working we may want to explore other solvers to see whether they're faster or better.

Actually, IVIM data (and all MRI data) are Rician. Just see the add_noise function; it defaults to Rician for exactly that reason: https://github.com/nipy/dipy/blob/master/dipy/sims/voxel.py#L76. But don't worry about that: assuming the noise is Gaussian is fairly safe in most cases and definitely a good place to start. If you're curious, try running your tests with Gaussian vs. Rician noise and see how it changes the fit.
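
A quick, library-agnostic way to run that comparison in a test is to add the noise by hand (a sketch only; sigma is the noise standard deviation):

import numpy as np

def add_gaussian_noise(signal, sigma):
    return signal + np.random.normal(0, sigma, signal.shape)

def add_rician_noise(signal, sigma):
    # Rician noise is the magnitude of a complex signal whose real and
    # imaginary channels each receive independent Gaussian noise.
    noise_real = np.random.normal(0, sigma, signal.shape)
    noise_imag = np.random.normal(0, sigma, signal.shape)
    return np.sqrt((signal + noise_real) ** 2 + noise_imag ** 2)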

Use the Jacobian? Default: False

Contributor:
Need to document all the parameters of this function

Returns
-------
nlls_params: the S0, f, D_star, D value for each voxel.

"""
flat_data = data.reshape((-1, data.shape[-1]))
# Flatten for the iteration over voxels:
bvals = gtab.bvals
ivim_params = np.empty((flat_data.shape[0], 4))
Contributor:
It looks like you're letting curve_fit use ones for the initial guess. The initial guess should be reasonable values, like approximate values from a paper, for example. It would also be nice for the user to be able to send these in if they choose. This could also be why you're getting overflows.

for vox in range(flat_data.shape[0]):
popt, pcov = curve_fit(ivim_function, bvals, flat_data[vox])
ivim_params[vox, :4] = popt

ivim_params.shape = data.shape[:-1] + (4,)
return ivim_params
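
One way to act on the initial-guess comment above, sketched against this nlls_fit (the name nlls_fit_with_guess, the x0 keyword and the specific starting values are illustrative, not the PR's eventual implementation):

import numpy as np
from scipy.optimize import curve_fit

def nlls_fit_with_guess(data, gtab, x0=None):
    # Same voxel loop as nlls_fit above, but curve_fit is seeded with a
    # plausible starting point instead of its default vector of ones, and the
    # caller can pass their own guess through x0.
    flat_data = data.reshape((-1, data.shape[-1]))
    bvals = gtab.bvals
    ivim_params = np.empty((flat_data.shape[0], 4))
    for vox in range(flat_data.shape[0]):
        if x0 is None:
            # S0 from the mean b=0 signal; f, D* and D roughly in the range
            # reported for brain tissue (illustrative values only).
            guess = [flat_data[vox][bvals == 0].mean(), 0.1, 0.01, 0.001]
        else:
            guess = x0
        popt, pcov = curve_fit(ivim_function, bvals, flat_data[vox], p0=guess)
        ivim_params[vox] = popt
    return ivim_params.reshape(data.shape[:-1] + (4,))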


common_fit_methods = {'NLLS': nlls_fit}
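
For the leastsq route discussed in the comments above, the analytic Jacobian of the bi-exponential model is easy to write down. A hedged sketch (function names and starting values are illustrative):

import numpy as np
from scipy.optimize import leastsq

def _ivim_residuals(params, bvals, signal):
    S0, f, D_star, D = params
    model = S0 * (f * np.exp(-bvals * D_star) + (1 - f) * np.exp(-bvals * D))
    return signal - model

def _ivim_jacobian(params, bvals, signal):
    # Jacobian of the residuals: one row per b-value, one column per parameter
    # (S0, f, D_star, D). The leading minus sign is there because the residual
    # is signal - model.
    S0, f, D_star, D = params
    e_star = np.exp(-bvals * D_star)
    e_diff = np.exp(-bvals * D)
    d_S0 = f * e_star + (1 - f) * e_diff
    d_f = S0 * (e_star - e_diff)
    d_D_star = -S0 * f * bvals * e_star
    d_D = -S0 * (1 - f) * bvals * e_diff
    return -np.column_stack([d_S0, d_f, d_D_star, d_D])

# With the default col_deriv=0, leastsq expects exactly this row-wise layout:
# fitted, ier = leastsq(_ivim_residuals, x0=[1.0, 0.1, 0.01, 0.001],
#                       args=(bvals, signal), Dfun=_ivim_jacobian)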
56 changes: 56 additions & 0 deletions dipy/reconst/tests/test_ivim.py
@@ -0,0 +1,56 @@
""" Testing DTI
Contributor:
Whoops. Please change the docstring.


"""
from __future__ import division, print_function, absolute_import

import numpy as np
from nose.tools import (assert_true, assert_equal,
assert_almost_equal, assert_raises)
import numpy.testing as npt
from numpy.testing import (assert_array_equal, assert_array_almost_equal,
assert_)
import nibabel as nib

import scipy.optimize as opt

from dipy.reconst import ivim as ivim
from dipy.reconst.ivim import ivim_function
from dipy.data import get_data, dsi_voxels, get_sphere
import dipy.core.gradients as grad
from dipy.sims.voxel import single_tensor
from dipy.io.gradients import read_bvals_bvecs
from dipy.core.gradients import (gradient_table, GradientTable,
gradient_table_from_bvals_bvecs,
reorient_bvecs)
from dipy.sims.voxel import multi_tensor


def test_nlls_fit():
"""
Test the implementation of NLLS
"""
fimg, fbvals, fbvecs = get_data('small_101D')
bvals, bvecs = read_bvals_bvecs(fbvals, fbvecs)
gtab = gradient_table(bvals, bvecs)

f, D_star, D = 90, 0.0015, 0.0008

mevals = np.array(([D_star, D_star, D_star], [D, D, D]))
# This gives an isotropic signal

signal = multi_tensor(gtab, mevals, snr=None, fractions=[f, 100 - f])
Contributor:
I think you want f to be between 0 and 1, at least that's how you defined it in ivim_function. That may be why you're getting overflow errors in the fitting. Not that 0 to 100 is wrong, it just needs to be consistent.

Also, f (perfusion fraction) should be around 10%, not 90%. It should fit it fine but it's just not the typical brain composition to have 90% of the signal flowing and 10% stationary.

Contributor Author:
Ah, yes. I am changing it to stay consistent. In multi_tensor it is taken as 0 - 100. Should I follow that, or use 0 - 1 for "f" and, when passing to multi_tensor, do fractions = [100*f, 100*(1-f)]? @arokem, what is dipy's take on this?

@etpeterson Thanks for pointing it out. But even after changing it, the overflow errors persist. Could it be because the data and b-values are not what they are supposed to be for this model? (I am using 'small_101D'; I'll take a look at the data Eric posted.)

I think we might need tests to determine the range of f, D_star and D for which results are valid with this model. Incorrect parameters give a very bad fit.
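
A minimal sketch of that option, keeping f in [0, 1] inside the IVIM code and converting to percent only at the multi_tensor boundary (gtab, D_star and D are the same as in the test above; f = 0.1 follows the reviewer's suggested perfusion fraction):

import numpy as np
from dipy.sims.voxel import multi_tensor

f, D_star, D = 0.1, 0.0015, 0.0008
mevals = np.array(([D_star, D_star, D_star], [D, D, D]))
# multi_tensor expects volume fractions in percent, so convert here only.
signal, sticks = multi_tensor(gtab, mevals, snr=None,
                              fractions=[100 * f, 100 * (1 - f)])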

Contributor:
Looks like dipy may be a little conflicted also. Single_tensor uses S0=1 and multi_tensor uses S0=100, though others seem to prefer the 100 area.

Contributor:
Fixes to make things more consistent would be most welcome!

(In reply to Eric Peterson's comment above.)

data = signal[0]

ivim_model = ivim.IvimModel(gtab)
ivim_fit = ivim_model.fit(data)

S0_est, f_est, D_star_est, D_est = ivim_fit.model_params
est_signal = ivim.ivim_function(bvals,
S0_est,
f_est,
D_star_est,
D_est)

assert_equal(est_signal.shape, data.shape)
assert_array_almost_equal(est_signal, data)
assert_array_almost_equal(ivim_fit.model_params, [S0, f, D_star, D])