Xval #335

Merged

merged 47 commits into nipy:master Jul 2, 2014

4 participants
@arokem
Member

arokem commented Mar 11, 2014

This revives #235. Now that all the other elements are in place, I think that this is ready for review and in principle ready to be merged upstream (once it's been reviewed and fixed up, of course!). I am excited about this PR - evaluation of DWI models is something that I have been thinking about a lot over the past couple of years, since I started working on DWI.

This PR also demonstrates the power and elegance of the uniform Model/Fit API we have in place. GGT!

For now, this PR only implements cross-validation for CSD and for the tensor model, but future work could extend this to other models by implementing their predict methods.

Also: a lot of PEP8 fixes in the CSD module and elsewhere :-)
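The mechanics of this kind of model cross-validation can be sketched with a toy linear model standing in for a diffusion model (all names here are illustrative, not dipy's API): split the measurements into k folds, fit on k-1 of them, and predict the held-out measurements from that fit.

```python
import numpy as np

def kfold_xval_predict(design, signal, k=3):
    """Predict each measurement from a model fit that never saw it.

    `design` is an (n_measurements, n_params) matrix and the toy model is
    signal = design @ beta, fit by ordinary least squares.
    """
    n = signal.shape[-1]
    folds = np.array_split(np.arange(n), k)
    pred = np.empty_like(signal, dtype=float)
    for hold_out in folds:
        keep = np.setdiff1d(np.arange(n), hold_out)
        # Fit using only the retained measurements:
        beta, *_ = np.linalg.lstsq(design[keep], signal[keep], rcond=None)
        # Predict the held-out measurements from that fit:
        pred[hold_out] = design[hold_out] @ beta
    return pred
```

Comparing `pred` against `signal` (e.g. with a coefficient of determination) then measures how well the model generalizes, rather than how well it merely fits.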

@Garyfallidis


Member

Garyfallidis commented Mar 17, 2014

Wow! This is cool. Have you tested what happens when a model does not provide a predict method?
And was it really necessary to use the CSDFit rather than SHFit for prediction in csdeconv? Would it be easy to have the same prediction for all SH-like models? Just saying... :)

@arokem


Member

arokem commented Mar 17, 2014

I think that if it doesn't have a predict method, it should throw an error. I will implement that. Concerning the SHFit: I initially tried that, but couldn't make it work. I agree that it would be better, though, so I will give it another try.

@arokem


Member

arokem commented Mar 17, 2014

OK - these commits should answer @Garyfallidis' questions. The issue with other SHM-based models is that they don't have a response function defined, so we need to fall back on a default response function (here it's a tensor with evals [0.0015, 0.0003, 0.0003]). Does this look OK to you?

@MrBago


Contributor

MrBago commented Mar 18, 2014

@arokem, this stuff looks awesome. I don't think generalizing the idea of a response function to all odf models is conceptually sound, but I think we should try to implement predict methods for as many models/fits as possible.

I think for all our predict methods we should include a closed loop test, i.e. fit(predict(P)) == P, where P is some set of parameters of the model. I'd like to hear if you guys think there is a good reason some models cannot pass this closed loop test.

I think, both from a design point of view and a conceptual point of view, predict should defer to the model. Somehow it makes sense that the models know the best way to get from a fit or a set of parameters back to the signal. Also, this will allow us to use the same fit type for multiple models, like we're currently doing with the SphHarmFit. That's not to say we can't have a fit.predict method; I really like the syntax:

model = CsaOdfModel(...)
fit = model.fit(data)
pred_signal = fit.predict()
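A minimal sketch of that closed-loop test, using an unregularized linear toy model (illustrative names, not dipy's API); a regularized model would not round-trip exactly:

```python
import numpy as np

def predict(design, params):
    # Forward model: map parameters to a signal.
    return design @ params

def fit(design, signal):
    # Inverse: ordinary least squares, with no regularization.
    params, *_ = np.linalg.lstsq(design, signal, rcond=None)
    return params

rng = np.random.default_rng(42)
design = rng.standard_normal((20, 4))
P = rng.standard_normal(4)

# The closed-loop test: fitting the model's own prediction recovers P.
P_round_trip = fit(design, predict(design, P))
```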
@arokem


Member

arokem commented Mar 18, 2014

Thanks for the feedback @MrBago!

On Mon, Mar 17, 2014 at 7:29 PM, MrBago notifications@github.com wrote:

> @arokem, this stuff looks awesome. I don't think generalizing the idea of
> a response function to all odf models is conceptually sound, but I think
> we should try to implement predict methods for as many models/fits as
> possible.

Agreed! I think that this is only the first step.

Does it make sense to have the response function in the CSA model? Right
now, the implementation of the "ODF convolved with response function" idea
is at the level of the base SHM model Fit object. Does that make sense?

> I think for all our predict methods we should include a closed loop test,
> i.e. fit(predict(P)) == P, where P is some set of parameters of the
> model. I'd like to hear if you guys think there is a good reason some
> models cannot pass this closed loop test.

One reason is regularization. Some models (CSD is one example) would not do
that. As you can see in the test I implemented, CSD doesn't predict
perfectly even in a simulation when the exact response function is known
and when there is no noise. I believe this is because of the regularization
implemented in this model.

> I think, both from a design point of view and a conceptual point of view,
> predict should defer to the model. Somehow it makes sense that the models
> know the best way to get from a fit or a set of parameters back to the
> signal. Also this will allow us to use the same fit type for multiple
> models like we're currently doing with the SphHarmFit. That's not to say
> we can't have a fit.predict method; I really like the syntax:
>
> model = CsaOdfModel(...)
> fit = model.fit(data)
> pred_signal = fit.predict()

Yeah - that's the uniform API that makes the cross-validation part of this
PR so general, that it could (in principle) be applied to any model!

But you think that this method should call out to a method of the Model
object? I am not sure that makes sense. A prediction can only really be
made from a fit, because it requires model parameters. So - a model by
itself would know something about the conceptual structure (which is why
you might need to query self.model attributes within fit.predict), but
it doesn't know how to predict, unless you provide it with parameters.
Alternatively, we could implement an alternative model.predict that takes
parameters as input and does the prediction. Is that what you were
thinking? In a way, it's similar to what we do in the sims module.

@@ -20,6 +22,9 @@ def fit(self, data):
class OdfFit(ReconstFit):
    def __init__(self, model, data):
        ReconstFit.__init__(self, model, data)


@MrBago

MrBago Mar 20, 2014

Contributor

Shouldn't OdfFit inherit `__init__` from ReconstFit automatically? Is there a reason you're doing this kind of explicit inheritance?

data = np.copy(model)
# If the model predicts the data perfectly, the COD is all 100s:
cod = xval.coeff_of_determination(data, model)
npt.assert_array_equal(100 * np.ones(data.shape[:3]), cod)


@MrBago

MrBago Mar 20, 2014

Contributor

assert_array_equal will broadcast, I believe.
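For reference, a coefficient of determination with the behavior the test above relies on (100 everywhere for a perfect prediction, computed per voxel along the measurement axis) could be sketched as follows; this is an assumption about the xval helper, not necessarily its exact implementation:

```python
import numpy as np

def coeff_of_determination(data, model_pred, axis=-1):
    """R^2, in percent, per voxel along the measurement axis."""
    ss_err = np.sum((data - model_pred) ** 2, axis=axis)
    demeaned = data - data.mean(axis=axis, keepdims=True)
    ss_tot = np.sum(demeaned ** 2, axis=axis)
    return 100 * (1 - ss_err / ss_tot)

data = np.random.default_rng(1).standard_normal((2, 3, 4, 10))
# If the model predicts the data perfectly, the COD is all 100s:
cod = coeff_of_determination(data, data.copy())
```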

    response_evals=response_evals)
if np.iterable(S0):
    S0 = S0[..., None]


@MrBago

MrBago Mar 20, 2014

Contributor

You don't do anything with S0 after this; I suspect you meant to multiply the return value by S0 to scale the signal.
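The fix being suggested, sketched in plain numpy (the helper name is hypothetical): adding a trailing axis to a per-voxel S0 lets it broadcast across the direction axis when scaling the prediction.

```python
import numpy as np

def scale_by_s0(pred_attenuation, S0):
    """Scale a predicted relative signal by S0 (hypothetical helper).

    pred_attenuation : (..., n_directions) array
    S0 : scalar, or an array with one value per voxel
    """
    if np.iterable(S0):
        # Trailing axis so S0 broadcasts across the direction axis:
        S0 = np.asarray(S0)[..., None]
    return S0 * pred_attenuation

att = np.full((2, 2, 5), 0.5)                      # 5 directions per voxel
S0 = np.array([[100.0, 200.0], [300.0, 400.0]])    # one S0 per voxel
sig = scale_by_s0(att, S0)
```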

@MrBago


Contributor

MrBago commented Mar 21, 2014

> Does it make sense to have the response function in the CSA model? Right
> now, the implementation of the "ODF convolved with response function"
> idea is at the level of the base SHM model Fit object. Does that make
> sense?

We're abusing the ODF concept a little: in principle the csd model produces
an FOD, not an ODF, but for API consistency we're calling it an ODF. It
doesn't make any theoretical sense to convolve an ODF with a response
function, but I don't know if anyone has tried it. Maybe it does something
nice.

> One reason is regularization. Some models (CSD is one example) would not
> do that. As you can see in the test I implemented, CSD doesn't predict
> perfectly even in a simulation when the exact response function is known
> and when there is no noise. I believe this is because of the
> regularization implemented in this model.

The CSD model should pass the closed loop test for well behaved signals,
meaning fit(predict(P)) == P if all(fod(P) > 0) (does that make any
sense?). I looked over your code a little and you're doing something a bit
odd with the prediction matrix. The prediction for the CSD model should
be done from the model coefficients and not from an ODF. I'll comment more
inline to help with this issue.

> Alternatively, we could implement an alternative model.predict that takes
> parameters as input and does the prediction. Is that what you were
> thinking? In a way, it's similar to what we do in the sims module.

Yep, that's exactly what I mean. Ideally I'd like to be able to do
something like this:

model = SomeModel(gtab, ...)
sim = model.predict(params)

fit = model.fit(data)
fit.predict() == model.predict(fit.params)
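That dual API can be sketched with a toy linear model (classes and names are illustrative, not dipy's):

```python
import numpy as np

class ToyModel:
    def __init__(self, design):
        self.design = design            # (n_measurements, n_params)

    def predict(self, params):
        # Simulation-style prediction straight from parameters:
        return self.design @ params

    def fit(self, data):
        params, *_ = np.linalg.lstsq(self.design, data, rcond=None)
        return ToyFit(self, params)

class ToyFit:
    def __init__(self, model, params):
        self.model = model
        self.params = params

    def predict(self):
        # The Fit defers to the Model, passing its own parameters:
        return self.model.predict(self.params)

rng = np.random.default_rng(3)
model = ToyModel(rng.standard_normal((20, 4)))
params = rng.standard_normal(4)
sim = model.predict(params)             # model-level: predict from params
fit = model.fit(sim)                    # fit the simulated data
# fit.predict() and model.predict(fit.params) give the same signal.
```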

self.model.cache_set("prediction_matrix",
sphere,
prediction_matrix)
return prediction_matrix.T


@MrBago

MrBago Mar 21, 2014

Contributor

This function has a few issues. First, you want to predict the signal from the coefficients of the csd fit. You can do this by simply taking dot(Pmatrix, fit.shm_coeff). In this case Pmatrix == dot(SH_basis, model.R). You can take a look at the fit method to see how we get the SH-basis, and model.R is the spherical harmonic representation of the response function, meaning that coef-SIG = dot(R, coef-FOD). You can re-compute R if you want, but it's already computed for every CSD model.

Keep in mind that the response function needs to be axially symmetric; this is taken care of by the sh_to_rh function when R is created. We've actually done something very weird with the response function: we represent it as a tensor, and that's not very good. What actually happens is that an axially symmetric SH function is fit to the tensor, and that's the true response function.
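The linear algebra described here can be checked at the shape level with made-up matrices (all sizes and values are illustrative; a real SH basis and R would come from the model):

```python
import numpy as np

rng = np.random.default_rng(7)
n_dirs, n_coef = 60, 15        # e.g. 60 gradient directions, sh_order=4

sh_basis = rng.standard_normal((n_dirs, n_coef))  # SH basis at the bvecs
R = np.diag(rng.uniform(0.1, 1.0, n_coef))        # SH form of the response
fod_coef = rng.standard_normal(n_coef)            # fit.shm_coeff

# Convolution with the response is a matrix multiply in SH space:
sig_coef = R @ fod_coef                # coef-SIG = dot(R, coef-FOD)

# Folding the convolution into the basis gives the prediction matrix:
pred_matrix = sh_basis @ R             # Pmatrix = dot(sh_basis, R)
pred_sig = pred_matrix @ fod_coef      # same as sh_basis @ sig_coef
```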

@@ -353,6 +357,46 @@ def fit(self, data, mask=None):
        return SphHarmFit(self, coef, mask)

def _shm_predict(fit, gtab, S0=1, response_evals=None):


@MrBago

MrBago Mar 21, 2014

Contributor

How do you feel about making this a method on the CSD model? It can take a gradient table and FOD coefficients as arguments, and it'll get the response function (R matrix) from the model.

all voxels
"""
return _shm_predict(self, gtab, S0, response_evals=None)


@MrBago

MrBago Mar 21, 2014

Contributor

If we make _shm_predict a method of the CSD model, this function can just pass its coefficients to that method.


@arokem

arokem Mar 23, 2014

Member

I guess that I will wait for #340 to resolve before moving ahead with this one.

On Thu, Mar 20, 2014 at 6:24 PM, MrBago notifications@github.com wrote:

> In dipy/reconst/shm.py:
>
>     def predict(self, gtab, S0=1, response_evals=None):
>         """
>         Predict the diffusion signal from the model coefficients.
>
>         Parameters
>         ----------
>         gtab : a GradientTable class instance
>             The directions and bvalues on which prediction is desired
>         S0 : float array
>             The mean non-diffusion-weighted signal in each voxel.
>             Default: 1 in all voxels
>         """
>         return _shm_predict(self, gtab, S0, response_evals=None)
>
> If we make the _shm_predict a method of the CSD model this function can
> just pass its coefficients to that method.

@arokem


Member

arokem commented Jun 18, 2014

Alright. This is ready for a re-review. In particular, I have implemented @MrBago's recent suggestions. Please take a look when you get a chance.

    return pred_sig

class ConstrainedSphericalDeconvModel(OdfModel, Cache):


@Garyfallidis

Garyfallidis Jun 18, 2014

Member

Hmm... why have you moved the classes to the end of the file? Please put them back at the top!


@arokem

arokem Jun 18, 2014

Member

This follows the general rule that things are used after they are defined.


@Garyfallidis

Garyfallidis Jun 19, 2014

Member

Well... @stefanv may have a different opinion on that. Also, this rule makes sense for C code but not for Python.

I personally think that when classes are defined at the top it is easier to get to the most important parts of the code faster. The CSDModel is a good example. Because people reviewing my code have asked me in the past to put class definitions at the top, I started using this style; I saw the benefits myself and wanted to share this experience with you too.


@arokem

arokem Jun 19, 2014

Member

I've found that it's helpful to know where to look for something. If you
see something (a function, variable, etc.) being used in code, you know you
will always find it defined, allocated, or imported somewhere earlier in
the file. So, you always know which way to search.

On Wed, Jun 18, 2014 at 5:05 PM, Eleftherios Garyfallidis <notifications@github.com> wrote:

> In dipy/reconst/csdeconv.py:
>
>     if np.iterable(S0):
>         # If it's an array, we need to give it one more dimension:
>         S0 = S0[..., None]
>
>     # This is the key operation: convolve and multiply by S0:
>     pre_pred_sig = S0 * np.dot(predict_matrix, sh_coeff)
>
>     # Now put everything in its right place:
>     pred_sig = np.zeros(pre_pred_sig.shape[:-1] + (gtab.bvals.shape[0],))
>     pred_sig[..., ~gtab.b0s_mask] = pre_pred_sig
>     pred_sig[..., gtab.b0s_mask] = S0
>     return pred_sig
>
> +class ConstrainedSphericalDeconvModel(OdfModel, Cache):
>
> Well... @stefanv may have a different opinion on that. Also this rule
> makes sense for C code but not for Python.
>
> I personally think that when classes are defined at the top it is easier
> to get to the most important parts of the code faster. The CSDModel is a
> good example. But because people reviewing my code have asked me in the
> past to put class definitions at the top, I started using this style; I
> saw the benefits myself and wanted to share this experience with you too.


@arokem

arokem Jun 19, 2014

Member

Either way, we need to decide on a standard and stick to it everywhere. As
it is, parts of the code (e.g. reconst.dti) adhere to one convention, and
other parts adhere to another. That's certainly more confusing than any one
of these conventions by itself.



@stefanv

stefanv Jun 25, 2014

Contributor

I don't know if I shared an opinion about this in the past, but I don't see any problem with the way @arokem implemented it.


@arokem

arokem Jun 27, 2014

Member

Yes - you made this comment about the dti module, when I was working on some of the metrics (that are now implemented at the top of the file). And I've followed it since. It's worked very nicely for me so far.


@stefanv

stefanv Jun 27, 2014

Contributor

Ah, yes, so that makes sense to me: to define things so that any name seen must already have been defined (I somehow thought @Garyfallidis suggested that I said something different in the past).


@Garyfallidis

Garyfallidis Jul 2, 2014

Member

Yes, you asked me to change the CSD class to put it on the top of the file. Don't you remember?


@arokem

arokem Jul 2, 2014

Member

OK - I moved the classes back to the top of the file. We can have the
discussion about the general policy some other time.


@@ -16,6 +16,11 @@ class ReconstModel(object):
"""
def __init__(self, gtab):
"""


@Garyfallidis

Garyfallidis Jun 18, 2014

Member

""" Initialization ...
save some space ;)

Parameters
----------
dti_params : ndarray


@Garyfallidis

Garyfallidis Jun 18, 2014

Member

Where do you get the dti_params from? Can that come from the TensorFit and if yes how?


@arokem

arokem Jun 18, 2014

Member

They can come from the TensorFit. In fact, the tensor fit has a predict method, which simply calls this function (see below):

return tensor_prediction(self.model_params, gtab)

The reason I implemented this as a module-wide function is that Bago suggested these different syntax options would be nice to have:

FitObject.predict(gtab) == ModelObject.predict(FitObject.model_params)

And I agree fully.
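The tensor prediction under discussion computes, for each direction theta with b value b, S = S0 * exp(-b * theta.T @ Q @ theta); a self-contained sketch (hypothetical helper, gradient-table handling and b=0 volumes omitted):

```python
import numpy as np

def tensor_predict(qform, bvecs, bvals, S0=1.0):
    """Predict the DWI signal from a tensor's quadratic form Q."""
    # ADC along each unit direction theta is theta.T @ Q @ theta:
    adc = np.einsum('ij,jk,ik->i', bvecs, qform, bvecs)
    return S0 * np.exp(-bvals * adc)

# An isotropic tensor with diffusivity d predicts S0 * exp(-b * d):
d = 0.0007
Q = np.eye(3) * d
bvecs = np.eye(3)                      # three orthogonal unit directions
bvals = np.array([1000.0, 1000.0, 2000.0])
sig = tensor_predict(Q, bvecs, bvals, S0=100.0)
```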


@stefanv

stefanv Jun 25, 2014

Contributor

An example highlighting exactly these two usage mechanisms would be good; otherwise, I'd make this method private, as mentioned for the previous one (internal use only).

@@ -0,0 +1,152 @@
"""
Cross-validation analysis of diffusion models


@Garyfallidis

Garyfallidis Jun 18, 2014

Member

Many spaces..

@arokem


Member

arokem commented Jun 18, 2014

Thanks for the comments @Garyfallidis. Still working on an example. I should have that up later today.

assert_array_almost_equal(dmfit.predict(gtab, S0=100), S)
assert_array_almost_equal(dm.predict(dmfit.model_params, S0=100), S)


@Garyfallidis

Garyfallidis Jun 19, 2014

Member

Why did you choose to have the predict method on the Model class and not on the Fit class? I would think that because you already have the params after the model.fit call, it would make sense to call predict from the Fit. Can you elaborate a bit on this?


@arokem

arokem Jun 19, 2014

Member

It's now implemented in both the Model and the Fit, following @MrBago's
suggestion. I think that's pretty nice. You can use the API you prefer,
depending on your use-case. Indeed, in the kfold cross-validation, I use
the Fit object's predict method.


@arokem


Member

arokem commented Jun 20, 2014

The example is now ready for review. Thanks!

@arokem arokem referenced this pull request Jun 20, 2014

Merged

CSD fit issue #340

@arokem


Member

arokem commented Jun 25, 2014

Has anyone had a chance to take a look at the example? Other than that - any more comments? @MrBago - were all your previous comments properly addressed?

"""
Predict a signal from sh coefficients for this model
"""
return csd_predict(sh_coeff, self.gtab, response=self.response, S0=S0,


@stefanv

stefanv Jun 25, 2014

Contributor

Is csd_predict meant to be exposed to the public? Otherwise rename _csd_predict.


@arokem

arokem Jun 27, 2014

Member

I don't see any reason that this should not be publicly exposed. I have now also documented the return value.

S(\theta, b) = S_0 * e^{-b ADC}
Where:


@stefanv

stefanv Jun 25, 2014

Contributor

I guess this can be inline (dollar) math?


@arokem

arokem Jun 27, 2014

Member

It can also be as it is.


@arokem

arokem Jun 27, 2014

Member

But I've changed it just the same.

$\theta$ is a unit vector pointing at any direction on the sphere for
which a signal is to be predicted, $b$ is the b value provided in
the GradientTable input for that direction, $Q$ is the quadratic form of
the tensor determined by the input parameters


@stefanv

stefanv Jun 25, 2014

Contributor

full stop


@arokem
sphere = Sphere(xyz=gtab.bvecs[~gtab.b0s_mask])
adc = apparent_diffusion_coef(qform, sphere)
if np.iterable(S0):


@stefanv

stefanv Jun 25, 2014

Contributor

Is this check used to figure out if it is an array?


@arokem

arokem Jun 27, 2014

Member

Yep. Good catch.

Parameters
----------
r_rh : ndarray
ndarray of rotational harmonics coefficients for the single fiber


@stefanv

stefanv Jun 25, 2014

Contributor

Scratch "ndarray of". Specify dimensionality of ndarray in parameter type description.


@MrBago

MrBago Jun 25, 2014

Contributor

See @arokem this is the problem with moving blocks of code: you become responsible for what's in them even if it's just a copy-paste :). I agree with you that we should have a standard that we can all follow going forward, but fixing this issue as part of a big pull request like this makes the review more difficult. We end up reviewing things that really have nothing to do with the specific pull request. In this case, reverting it is probably going to cause even more trouble. I propose that in the future, moving code around, pep8 and those kinds of changes get their own PR so we can merge them quickly and move on.


@arokem

arokem Jun 25, 2014

Member

Whoops. Oh well, may as well fix these things to Stefan's satisfaction. The
problem with putting off these kinds of changes is that then they never get
taken care of. I see what you mean about making the review more difficult,
though. Sorry about that.



@stefanv

stefanv Jun 25, 2014

Contributor

Yeah, sorry for the over-zealous review--couldn't have known until I got to
the end :)


@arokem

arokem Jun 27, 2014

Member

I don't know what the shape of this thing is, but I've scratched out ndarray of...

ndarray of rotational harmonics coefficients for the single fiber
response function. Each element `rh[i]` is associated with spherical
harmonics of degree `2*i`.
n : ndarray


@stefanv

stefanv Jun 25, 2014

Contributor

Shape


@arokem

arokem Jun 27, 2014

Member

Yeah - who knows.

""" Spherical harmonics (SH) to rotational harmonics (RH)
Calculate the rotational harmonic decomposition up to
harmonic sh_order for an axially and antipodally


@stefanv

stefanv Jun 25, 2014

Contributor

Where does "sh_order" come into play?


@arokem

arokem Jun 27, 2014

Member

Good luck with that


@arokem

arokem Jun 27, 2014

Member

OK - on further thought I think that I figured that one out.

Returns
-------
dirac : ndarray


@stefanv

stefanv Jun 25, 2014

Contributor

dimensionality


@arokem

arokem Jun 27, 2014

Member

I think I got this one.

@arokem


Member

arokem commented Jun 27, 2014

Thanks @stefanv for that over-zealous review ;-)

I hope that I have addressed most of your comments. In particular, I would like to hear more about what you were thinking about when you wrote that comment about scikit-learn. I'd be happy to know whether they have tools that we could just somehow apply to this problem and would do this for us. I doubt that's the case. Consider the handling of gradient-tables, S0, etc.

@arokem


Member

arokem commented Jun 27, 2014

OK - I think that I have addressed all of @stefanv's comments. At the very least, I have responded to all of them :-)

Anything more? Are we ready to merge this thing?

@stefanv


Contributor

stefanv commented Jun 27, 2014

@arokem Any reason in particular you think the review was over-zealous? Along with the smiley, I'm not sure whether I should tone down any specific aspect to make my comments more acceptable. Usually when I do reviews the comments are meant as recommendations, so I leave it up to the author to decide whether to respond or not (and you did, to all of them).

@arokem


Member

arokem commented Jun 27, 2014

Sorry - didn't mean to be too facetious. I was just amused by how detailed
it was. A good thing, probably. Didn't mean to offend. Well, just poke a
bit of fun at you really. Apologize if I did. I hope that I addressed most
of the comments.


@arokem


Member

arokem commented Jun 27, 2014

Oh sorry - still need to do round-tripping. Don't merge yet!

@stefanv


Contributor

stefanv commented Jun 27, 2014

Ah, no worries--I like people poking fun at me (as much as I dislike offending!). I am a total OCD programming nazi :)

@arokem


Member

arokem commented Jun 27, 2014

OK - now it's really ready (I think)!

@stefanv


Contributor

stefanv commented Jun 27, 2014

Thanks, the doc update clarified things for me!

@arokem


Member

arokem commented Jun 27, 2014

Cool. Thanks again for the careful read.


@stefanv


stefanv commented on dipy/reconst/tests/test_csdeconv.py in a05799f Jun 27, 2014

Want some more OCD? I can mention PEP8 ;)

No, seriously, looks good--this is exactly what I had in mind.


Owner

arokem replied Jun 27, 2014

@arokem


Member

arokem commented Jul 2, 2014

Is this ready to be merged? Any more comments?

@Garyfallidis


Member

Garyfallidis commented Jul 2, 2014

Give me this afternoon to have a last check and if all is good will merge it.

@arokem


Member

arokem commented Jul 2, 2014

Sweet. Thanks!


@Garyfallidis


Member

Garyfallidis commented Jul 2, 2014

Nice work. Now @maurozucchelli and @mpaquette can write their predict functions for the SHORE etc and test them with your framework.

Garyfallidis added a commit that referenced this pull request Jul 2, 2014

@Garyfallidis Garyfallidis merged commit 8019449 into nipy:master Jul 2, 2014

1 check passed

continuous-integration/travis-ci The Travis CI build passed
@arokem


Member

arokem commented Jul 2, 2014

Yip! GGT!

