
ENH: Nipype-less implementation (refactor) #16

Merged
merged 21 commits into nipreps:main on Apr 8, 2021

Conversation

@oesteban (Member) commented Mar 19, 2021

Setting some foundations for the code:

  • A DWI object to encapsulate inputs (please let me know whether it encloses everything we could possibly need).
  • A stub to start building the core of the implementation, defining some sort of API.

UPDATE:

This is how it looks right now:

>>> from emc import dmri
>>> from emc.estimator import EddyMotionEstimator
>>> data = dmri.load(
...     "ds001771/sub-36/dwi/sub-36_acq-AP_dwi.nii.gz",
...     gradients_file="rasb.tsv",
...     b0_file="sub-36_acq-PA_dwi_b0_mcf_ref.nii.gz",
...     brainmask_file="mask.nii.gz"
...     )
>>> EddyMotionEstimator.fit(data, model="b0")
Processing b-index <19> in </var/folders/l9/0lkn3g4s27bgkk75n6jj778r0000gp/T/tmpedeh8iiv>
Processing b-index <35> in </var/folders/l9/0lkn3g4s27bgkk75n6jj778r0000gp/T/tmppb9tb2cz>
Processing b-index <54> in </var/folders/l9/0lkn3g4s27bgkk75n6jj778r0000gp/T/tmpe9yk0dez>
[... similar lines for the remaining volumes elided ...]
31%|█████████████████                              | 32/102 [11:24<32:56, 28.24s/it]

@oesteban linked an issue Mar 19, 2021 that may be closed by this pull request
@dPys added the enhancement (New feature or request) label Mar 19, 2021
@dPys requested review from dPys and arokem March 19, 2021 17:55
@arokem (Collaborator) left a comment


Looks good! One general point: I think this probably ends up looking more like sklearn's unsupervised estimators. For example, note that the predict method takes no input, which means it is probably more akin to the transform methods of sklearn's unsupervised estimators. We can then add a fit_transform that does both. Other small comments are spread throughout.
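
A minimal sketch (not code from this PR; every signature here is hypothetical) of the unsupervised-estimator shape this would converge to:

```Python
class EddyMotionEstimator:
    """Hypothetical sklearn-like surface; nothing here is final."""

    def fit(self, dwdata, **kwargs):
        """Estimate per-volume affines and store them on the estimator."""
        self.em_affines_ = ...  # one affine per DWI orientation
        return self

    def transform(self, dwdata):
        """Resample the input volumes with the estimated affines."""
        ...  # returns the head-motion-corrected 4D array

    def fit_transform(self, dwdata, **kwargs):
        """Convenience shortcut: fit, then transform."""
        return self.fit(dwdata, **kwargs).transform(dwdata)
```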

emc/dmri.py Outdated
class DWI:
    """Data representation structure for dMRI data."""

    dataobj = attr.ib(default=None)
@arokem (Collaborator)

Does this name follow a specific convention? Maybe better to call it dwdata, to be more explicit that these are the diffusion-weighted images?

@oesteban (Member Author)

nibabel gives access to the numpy array through dataobj, but honestly, it is not required.
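
For context (an illustration, not part of the PR), nibabel exposes the image data lazily through the dataobj proxy:

```Python
import nibabel as nib
import numpy as np

img = nib.load("sub-36_acq-AP_dwi.nii.gz")

# ``dataobj`` is a lazy array proxy: voxels are only read from disk when
# it is sliced or coerced into an array.
frame = np.asanyarray(img.dataobj[..., 0])  # load a single volume
data = np.asanyarray(img.dataobj)           # or load the whole series
```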

emc/estimator.py Outdated
gradients = attr.ib(default=None)
"""A 2D numpy array of the gradient table in RAS+B format."""
bzero = attr.ib(default=None)
"""A *b=0* reference map, preferably obtained by some smart averaging."""
@arokem (Collaborator)

Is this a path or an array? I would advocate for settling explicitly on one or the other.

@oesteban (Member Author)

(numpy array)

@oesteban (Member Author)

I hadn't assigned review because it felt premature, but this feedback is super welcome. Thanks for your time; I will move to the better API tomorrow.

@oesteban (Member Author)

@arokem I'm getting this:

>>> EddyMotionEstimator.fit(data)
Optimizing level 0 [max iter: 100]
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-35-e34da2fc4915> in <module>
----> 1 EddyMotionEstimator.fit(data)

~/workspace/EddyMotionCorrection/emc/estimator.py in fit(dwdata, n_iter, align_kwargs, model, seed, **kwargs)
     99
    100                 init_affine = dwdata.em_affines[i] if dwdata.em_affines else np.eye(4)
--> 101                 xform = registration.optimize(
    102                     predicted,  # fixed
    103                     data_test[0],  # moving

~/.anaconda/lib/python3.8/site-packages/dipy/align/imaffine.py in optimize(***failed resolving arguments***)
   1099                 self.options['maxiter'] = max_iter
   1100
-> 1101             opt = Optimizer(self.metric.distance_and_gradient,
   1102                             self.params0,
   1103                             method=self.method, jac=True,

~/.anaconda/lib/python3.8/site-packages/dipy/core/optimize.py in __init__(self, fun, x0, args, method, jac, hess, hessp, bounds, constraints, tol, callback, options, evolution)
    127         else:
    128
--> 129             res = minimize(fun, x0, args, method, jac, hess, hessp, bounds,
    130                            constraints, tol, callback, options)
    131

~/.anaconda/lib/python3.8/site-packages/scipy/optimize/_minimize.py in minimize(fun, x0, args, method, jac, hess, hessp, bounds, constraints, tol, callback, options)
    615                                   **options)
    616     elif meth == 'l-bfgs-b':
--> 617         return _minimize_lbfgsb(fun, x0, args, jac, bounds,
    618                                 callback=callback, **options)
    619     elif meth == 'tnc':

~/.anaconda/lib/python3.8/site-packages/scipy/optimize/lbfgsb.py in _minimize_lbfgsb(fun, x0, args, jac, bounds, disp, maxcor, ftol, gtol, eps, maxfun, maxiter, iprint, callback, maxls, finite_diff_rel_step, **unknown_options)
    304             iprint = disp
    305
--> 306     sf = _prepare_scalar_function(fun, x0, jac=jac, args=args, epsilon=eps,
    307                                   bounds=new_bounds,
    308                                   finite_diff_rel_step=finite_diff_rel_step)

~/.anaconda/lib/python3.8/site-packages/scipy/optimize/optimize.py in _prepare_scalar_function(fun, x0, jac, args, bounds, epsilon, finite_diff_rel_step, hess)
    259     # ScalarFunction caches. Reuse of fun(x) during grad
    260     # calculation reduces overall function evaluations.
--> 261     sf = ScalarFunction(fun, x0, args, grad, hess,
    262                         finite_diff_rel_step, bounds, epsilon=epsilon)
    263

~/.anaconda/lib/python3.8/site-packages/scipy/optimize/_differentiable_functions.py in __init__(self, fun, x0, args, grad, hess, finite_diff_rel_step, finite_diff_bounds, epsilon)
     74
     75         self._update_fun_impl = update_fun
---> 76         self._update_fun()
     77
     78         # Gradient evaluation

~/.anaconda/lib/python3.8/site-packages/scipy/optimize/_differentiable_functions.py in _update_fun(self)
    164     def _update_fun(self):
    165         if not self.f_updated:
--> 166             self._update_fun_impl()
    167             self.f_updated = True
    168

~/.anaconda/lib/python3.8/site-packages/scipy/optimize/_differentiable_functions.py in update_fun()
     71
     72         def update_fun():
---> 73             self.f = fun_wrapped(self.x)
     74
     75         self._update_fun_impl = update_fun

~/.anaconda/lib/python3.8/site-packages/scipy/optimize/_differentiable_functions.py in fun_wrapped(x)
     68         def fun_wrapped(x):
     69             self.nfev += 1
---> 70             return fun(x, *args)
     71
     72         def update_fun():

~/.anaconda/lib/python3.8/site-packages/scipy/optimize/optimize.py in __call__(self, x, *args)
     72     def __call__(self, x, *args):
     73         """ returns the the function value """
---> 74         self._compute_if_needed(x, *args)
     75         return self._value
     76

~/.anaconda/lib/python3.8/site-packages/scipy/optimize/optimize.py in _compute_if_needed(self, x, *args)
     66         if not np.all(x == self.x) or self._value is None or self.jac is None:
     67             self.x = np.asarray(x).copy()
---> 68             fg = self.fun(x, *args)
     69             self.jac = fg[1]
     70             self._value = fg[0]

~/.anaconda/lib/python3.8/site-packages/dipy/align/imaffine.py in distance_and_gradient(self, params)
    783         """
    784         try:
--> 785             self._update_mutual_information(params, True)
    786         except (AffineInversionError, AffineInvalidValuesError):
    787             return np.inf, 0 * self.metric_grad

~/.anaconda/lib/python3.8/site-packages/dipy/align/imaffine.py in _update_mutual_information(self, params, update_gradient)
    670
    671         # Update the histogram with the current joint intensities
--> 672         static_values, moving_values = self._update_histogram()
    673
    674         H = self.histogram  # Shortcut to `self.histogram`

~/.anaconda/lib/python3.8/site-packages/dipy/align/imaffine.py in _update_histogram(self)
    631             pts = sp_to_moving.dot(self.samples.T).T  # Points on moving grid
    632             pts = pts[..., :self.dim]
--> 633             self.moving_vals, inside = self.interp_method(self.moving, pts)
    634             self.moving_vals = np.array(self.moving_vals)
    635             static_values = self.static_vals

dipy/core/interpolation.pyx in dipy.core.interpolation.__pyx_fused_cpdef()

TypeError: No matching signature found

@oesteban (Member Author)

Okay, never mind for now, @arokem.

I have moved on to antsRegistration for the following reasons:

  • Easier to read for newbies (thinking of the tutorial).
  • Easier to understand for newbies who have already used ANTs, or at least know about it (not a dismissible population).
  • Easier for me (I use it quite often).
  • Easier for debugging (we can focus less on the registration problem).
  • Dividing responsibilities (we do not depend fully on DIPY, and maybe we make the move later with a more stable interface).
  • ANTs' finesse: there are a few interesting points in ANTs 2.3 that I don't think DIPY implements (e.g., you can play with several masks, one per level); see the sketch after this list.
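
To make this concrete, a sketch (not the PR's actual code) of the kind of antsRegistration call this implies, assembled from Python. File names are placeholders, and passing --masks per stage is one reading of the ANTs 2.3 feature alluded to above:

```Python
import subprocess

# Illustrative only; the parameters the PR actually uses may differ.
cmd = [
    "antsRegistration",
    "--dimensionality", "3",
    "--output", "transform_",
    "--transform", "Rigid[0.01]",
    "--metric", "Mattes[predicted.nii.gz,moving.nii.gz,1,32]",
    "--convergence", "[100x50,1e-6,10]",
    "--shrink-factors", "2x1",
    "--smoothing-sigmas", "1x0vox",
    "--masks", "[brainmask.nii.gz,NULL]",  # fixed mask only at this stage
]
subprocess.run(cmd, check=True)
```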

@oesteban marked this pull request as ready for review March 24, 2021 16:05
@oesteban (Member Author) commented Mar 24, 2021

Okay, I think we can get this merged (once reviews show a thumbs-up) and then see where we go from here (basically, using a non-dummy prediction model).

@arokem (Collaborator) commented Mar 24, 2021

This does significantly change the objectives of the software, which was meant to be a pure-Python implementation of these ideas.

Regarding API/usage: I think we can sort that out. Making it easier for newbies should be a matter of writing clearer APIs. That should also help you as a developer, I believe.

Regarding finesse: if these features are important for performance, we can implement this in Python.

Regarding division of responsibility: I am not sure I follow. What's the issue with relying on DIPY?

@arokem (Collaborator) commented Mar 24, 2021

I guess one way we could go about this is to merge this now, and then it would be up to me to refactor this out later on. I'd be OK with that.

@arokem (Collaborator) commented Mar 24, 2021

One issue for the tutorial (and in general): how do you expect users to install the software? A pip install won't work anymore, because you can't pip install ANTs. Can it be conda-installed? Or does it have to be Docker? Are you assuming ISBI tutorial attendees have Docker installed, and the permissions to run it on the machine they are using for the tutorial?

@oesteban (Member Author)

Yes, I agree this raises a whole lot of questions, and I also want to see this become Python-only.

Conda could be a good trade-off for installation until that happens.

For the tutorial, a Docker image should do.

@oesteban (Member Author)

Re: division of responsibilities

If we feel there are pieces of code that it would make sense to upstream into DIPY, that will be easier to do if we focus all the attention (regarding DIPY) on the models. When that bit is rock-solid, we can proceed with the registration part. Also, getting good results with ANTs may be the best way to set a reference quality for DIPY's implementation.

@oesteban (Member Author)

I would like to merge this ASAP; may I ask for a final round of reviews? @arokem @dPys @sebastientourbier

@dPys (Contributor) left a comment


Such incredible progress on this! I think users are really going to love the sklearn-style fit, transform, and predict methods. I left a few comments and a suggested enhancement; otherwise, I'm just very excited to see this tool maturing, especially with the "Oscar touch" :-)


with h5py.File(filename, "w") as out_file:
    out_file.attrs["Format"] = "EMC/DWI"
    out_file.attrs["Version"] = np.uint16(1)
@dPys (Contributor)

Can we add more metadata to this? Now that we're moving towards HDF5 and away from MapNodes, it would be insanely helpful to tag each image file that gets written out with its provenance: is this all volumes? DWI with or without its b=0 volume(s)? Masked or unmasked? Raw, transformed, or predicted? If it's transformed, is it an eddy-corrected transformation or an eddy-to-predicted transformation? If it was predicted, what base model was used to predict it? Coupled with the RAS+B gradient information as well, it might be very helpful to have this class automatically track the indices of the individual volumes; see the sketch below. WDYT?
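
Building on the snippet above, a hedged sketch (the attribute names are hypothetical, not from the PR) of what such provenance tags might look like:

```Python
import h5py
import numpy as np

filename = "dwi.h5"  # placeholder path

with h5py.File(filename, "w") as out_file:
    out_file.attrs["Format"] = "EMC/DWI"
    out_file.attrs["Version"] = np.uint16(1)
    # Hypothetical provenance tags along the lines suggested above
    out_file.attrs["HasB0"] = True              # b=0 volume(s) included?
    out_file.attrs["Masked"] = False            # brain-masked or not
    out_file.attrs["Stage"] = "raw"             # raw | transformed | predicted
    out_file.attrs["PredictionModel"] = "none"  # base model, if predicted
```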

@oesteban (Member Author)

Actually, this file contains the original dataset. I'm still unsure about the I/O, because the current implementation allows some funky stuff (e.g., if you change the _filepath attribute, the behavior becomes easily problematic: you could potentially store a realigned dataset, which is not what the estimator expects).

But yes, happy to grow from here.


param = {
    "isotropic": ExponentialIsotropicModel,
}
@dPys (Contributor)

Can we go ahead and add alpha + lambda to this list?

@oesteban (Member Author)

Let's get this merged and, sure, on top of this PR we can definitely do that.

index_order = np.arange(len(dwdata))
np.random.shuffle(index_order)
with tqdm(total=len(index_order), unit="dwi") as pbar:
    for i in index_order:
@dPys (Contributor)

What about starting out with just joblib?

@oesteban (Member Author)

That would parallelize whole orientations. @sebastientourbier is preparing the parallelization of blocks within any given orientation. The idea here is to let later alignments benefit from the previous ones; a rough joblib sketch follows below.
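
For reference, the joblib variant would look roughly like this (align_volume is a hypothetical helper, not in this PR), at the cost of losing exactly that sequential warm-start:

```Python
import numpy as np
from joblib import Parallel, delayed

def align_volume(i):
    """Hypothetical helper: register orientation ``i``, return its affine."""
    ...

index_order = np.random.permutation(102)  # e.g., the 102 orientations above

# Embarrassingly parallel over orientations; each volume is aligned
# independently, so no run can reuse the previous alignments.
affines = Parallel(n_jobs=-1)(delayed(align_volume)(i) for i in index_order)
```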

# read original DWI data & b-vector
with h5py.File(self._filepath, "r") as in_file:
    root = in_file["/0"]
    dwframe = np.asanyarray(root["dataobj"][..., index])
@dPys (Contributor)

Curious -- why dwframe and bframe vs. just test_data and test_gradients?

@oesteban (Member Author)

test_data and test_gradients imply there is some cross-validation going on. Because this is just a data-representation object, it seemed more appropriate not to refer to the framework it is being used within.


train_data = self.dataobj[..., ~mask]
train_gradients = self.gradients[..., ~mask]

@dPys (Contributor) commented Mar 29, 2021

So I think it would still be good to have some mechanism for further filtering which gradient samples are included for model training and testing at each iteration. On the one hand, there are samples that are too close along the sphere to those we are trying to predict, and that therefore risk unfavorably biasing the predictions. But even if that's still a matter of debate, there may simply be outlier volumes or gradients that we wish to exclude manually. For that reason alone, I think it would be beneficial to assume some additional level of control over which samples get used, no?

On that note, I do think that a threshold for filtering out training samples based on q-space proximity should itself be adaptive, since I'd assume that the extent of this bias varies as a function of model choice and sampling scheme (e.g., we should also think about what happens in the half-sphere case and how best to accommodate it...).

But for now, adding some functionality like the following might be a good starting point:

import numpy as np


def train_gradient_filter(test_gradients, train_gradients, cutoff=2.0):
    """Mask out training samples too close in q-space to the test sample."""
    train_bvec = np.asanyarray(train_gradients[:, 0:3])
    train_bval = np.asanyarray(train_gradients[:, 3])
    test_bvec = np.asanyarray(test_gradients[:, 0:3])
    test_bval = float(np.asanyarray(test_gradients[:, 3]))

    min_bval = min(train_bval.min(), test_bval)
    all_qvals = np.sqrt(train_bval - min_bval)
    prediction_qval = np.sqrt(test_bval - min_bval)

    # Convert q-values to percent of the maximum q-value
    max_qval = max(all_qvals.max(), prediction_qval)
    scaled_qvecs = train_bvec * (all_qvals / max_qval * 100)[:, np.newaxis]
    scaled_prediction_qvec = test_bvec * (prediction_qval / max_qval * 100)

    # Keep training q-vectors farther than ``cutoff`` from the test q-vector,
    # checking both hemispheres (b-vectors are antipodally symmetric)
    gradient_mask = (
        np.linalg.norm(scaled_qvecs - scaled_prediction_qvec, axis=1) > cutoff
    ) & (
        np.linalg.norm(scaled_qvecs + scaled_prediction_qvec, axis=1) > cutoff
    )
    return gradient_mask


# ``gradient_mask`` flags the samples to *keep* for training
gradient_mask = train_gradient_filter(test_gradients, train_gradients)
train_data = self.dataobj[..., gradient_mask]
train_gradients = self.gradients[..., gradient_mask]

And then maybe also adding an exclusion_indices argument for manually dropping training samples if needed.
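
As a hypothetical illustration (not code from this PR; the helper name is made up), such an exclusion_indices argument could fold into the leave-one-out mask from the earlier snippet along these lines:

```Python
import numpy as np

def logo_split_mask(n_volumes, index, exclusion_indices=None):
    """Boolean mask of volumes to *drop* from training (hypothetical helper)."""
    drop = np.zeros(n_volumes, dtype=bool)
    drop[index] = True  # the left-out test volume
    if exclusion_indices is not None:
        drop[list(exclusion_indices)] = True  # manually flagged volumes
    return drop

drop = logo_split_mask(102, index=10, exclusion_indices=[3, 57])
# train_data = self.dataobj[..., ~drop]
# train_gradients = self.gradients[..., ~drop]
```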

WDYT @oesteban ?

@oesteban (Member Author)

> therefore risk unfavorably biasing the predictions.

It is still unclear to me why the bias would be unfavorable in this case.

> there may simply be outlier volumes or gradients that we wish to manually exclude.

This is a chicken-and-egg problem: how do you decide which volumes should be excluded? Perhaps they would be acceptable after head-motion correction? I prefer to make the implementation as agnostic as possible of the rest of the pipeline. This is not to say we should not look into this issue, but my opinion is that we can only investigate this outlier filtering after we have successfully tried the code on a lot of data. We need to measure how much it improves w.r.t. a baseline, and this PR tries to set such a baseline.

Instead of thinking up solutions to hypothetical problems, let's first hypothesize about the most relevant problems we expect, then plan a test that would evaluate solutions to each problem, and finally, once we have both hypothesis and test, make decisions to improve performance.

Finally, and in particular about the code you propose:

  • I would not implement this in the DWI object, because it is not partitioning the dataset; it is partitioning the partition with some additional criteria. IMHO, this responsibility falls more on the estimator object's side.
  • Let's keep this code around and test it once we have a functional prototype (i.e., a baseline, as I called it before). This first implementation only aims to set up the foundations.

@oesteban (Member Author) commented Apr 7, 2021

For instance, one problem I do see coming at us:

Say you have 12 volumes at b=1000, 12 at b=2000, and 12 at b=3000. Eddy currents in the two outer shells will be much larger than those of the b=1000 shell. Because the dataset is already pretty biased towards the outer shells, when aligning one orientation of the b=1000 shell we might actually "overcorrect" for eddy currents. So in this case, you might want to completely dismiss the largest b (instead of just the closest orientations) when "predicting" the lowest b; a toy sketch follows below. WDYT @arokem ?
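
A toy illustration (not from the PR; the rule and numbers are purely illustrative) of dismissing the outermost shell when predicting a lowest-shell volume:

```Python
import numpy as np

# Toy example: 12 volumes each at b=1000, 2000, and 3000, as above
bvals = np.repeat([1000, 2000, 3000], 12)
test_bval = 1000

train_mask = np.ones_like(bvals, dtype=bool)
if test_bval == bvals.min():
    # When predicting the lowest shell, drop the outermost shell entirely
    # rather than only masking nearby orientations.
    train_mask &= bvals != bvals.max()
```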

@oesteban requested a review from dPys April 7, 2021 15:33
oesteban and others added 9 commits April 7, 2021 18:11
Co-authored-by: Ariel Rokem <arokem@gmail.com>
I'm making progress on the integration with DIPY's registration. However,
I'm hitting the following issue:

```Python
>>> EddyMotionEstimator.fit(data)
Optimizing level 0 [max iter: 100]
[...]
TypeError: No matching signature found
```

(full traceback identical to the one posted earlier in this thread)
@oesteban (Member Author) commented Apr 8, 2021

Okay, we will need to push this faster. Happy to discuss all the lines above in future PRs building on top of this one. Thanks for the feedback!

@oesteban merged commit 57c5189 into nipreps:main Apr 8, 2021
arokem pushed a commit that referenced this pull request Jun 24, 2022
ENH: Nipype-less implementation (refactor)
Former-commit-id: 57c5189
Former-commit-id: f4a2dee
arokem pushed a commit that referenced this pull request Jun 24, 2022
ENH: Nipype-less implementation (refactor)
Former-commit-id: 3622185 [formerly 57c5189]
Former-commit-id: f4a2dee
Former-commit-id: 6b1dc99
mathdugre pushed a commit to mathdugre/eddymotion that referenced this pull request Jun 24, 2022
ENH: Nipype-less implementation (refactor)
mathdugre pushed a commit to mathdugre/eddymotion that referenced this pull request Jun 24, 2022
ENH: Nipype-less implementation (refactor)
oesteban added a commit that referenced this pull request Jun 30, 2022
ENH: Nipype-less implementation (refactor)
Former-commit-id: 57c5189
Labels
enhancement (New feature or request)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Nipype should be a wrapper, not a scaffold
4 participants