
Symmetric diffeomorphic non-linear registration #408

Merged
merged 67 commits into nipy:master from omarocegueda/syn on Oct 8, 2014

@omarocegueda
Contributor

omarocegueda commented Aug 18, 2014

This pull request addresses the major pending issues from #346, especially the Cython test coverage and the DiffeomorphicMap refactor to make the implementation clearer (the 'forward - backward' naming issue).

omarocegueda added some commits Aug 18, 2014

RF: adding discretization information for DiffeomorphicMap and renaming discretization information for domain and codomain. Now it is much simpler to handle default input/output discretization of the map, and the naming is much clearer.
TEST: added tests for diffeomorphic map simplification and affine warping in 2D and 3D. Removed deprecated Cython code.
RF: removed the default value '-1' from warping functions, using None instead and the string 'identity' to disambiguate.
RF: renamed 'disc_shape' and 'disc_affine' ('discretization shape, affine') to 'disp_shape' and 'disp_affine' ('displacement shape, affine').
RF: renamed the partial diffeomorphisms from 'forward_model' and 'backward_model' to 'static_to_ref' and 'moving_to_ref'.
@omarocegueda

Contributor

omarocegueda commented Aug 18, 2014

As Ariel suggested, I am writing an extension to the 2D registration tutorial. It may be helpful for discussing this PR. You can find the current draft here: http://www.cimat.mx/~omar/syn/tutorial.pdf.

Thanks! =)

@Garyfallidis

Member

Garyfallidis commented Aug 25, 2014

Hi guys,

@omarocegueda, to help the others get a better feeling for what you have changed here, can you copy here the information that you sent me in the e-mail showing a summary of the files that you changed and removed?

@matthew-brett, can you have a look at this PR and merge it if you agree? It seems it was impossible for both of us to find time to work on automating the Cython coverage. For me it is also difficult for the next weeks. Sorry about that.

@omarocegueda

Contributor

omarocegueda commented Aug 26, 2014

Sure! At the end of this message is the list of the defs that Elef pointed out were not being tested. You can also find a list of all defs in the Cython modules, and the names of the functions where they're tested, here: www.cimat.mx/~omar/syn/cython_tests.pdf. Also, regarding the tutorial extension suggested by Ariel, here are some videos I would like to include; what do you think?: http://www.cimat.mx/~omar/syn/syn_videos.html

Thanks! =)

defs called from tests:

In test_vector_fields.py
vfu.consolidate_2d <<< replaced with simplify_warp_function_2d (and added test for it)
vfu.consolidate_3d <<< replaced with simplify_warp_function_3d (and added test for it)

defs not called directly from tests but called from imwarp at SymmetricDiffeomorphicRegistration creation:

prepend_affine_to_displacement_field_2d <<< removed: no longer needed since we now have more general simplification functions
prepend_affine_to_displacement_field_3d <<< removed: no longer needed since we now have more general simplification functions
append_affine_to_displacement_field_2d <<< removed: no longer needed since we now have more general simplification functions
append_affine_to_displacement_field_3d <<< removed: no longer needed since we now have more general simplification functions

defs not called directly from tests but called from imwarp at SymmetricDiffeomorphicRegistration optimize:

expand_displacement_field_3d <<< added test: test_resample_vector_field_3d() (renamed "expand" to "resample" because it can be used for expanding and contracting)
expand_displacement_field_2d <<< added test: test_resample_vector_field_2d() (renamed "expand" to "resample" because it can be used for expanding and contracting)

defs not currently tested at all (dangerous!):

reorient_vector_field_2d <<< added test: test_reorient_vector_field_2d()
reorient_vector_field_3d <<< added test: test_reorient_vector_field_3d()
upsample_displacement_field <<< removed: no longer needed since we have more general resampling functions
upsample_displacement_field_3d <<< removed: no longer needed since we have more general resampling functions
accumulate_upsample_displacement_field3D <<< removed: this was to save memory, but it's no longer needed (we now free unused memory during optimization)
downsample_scalar_field3D <<< added test: test_downsample_scalar_field_3d
downsample_displacement_field3D <<< added test: test_downsample_displacement_field_3d
downsample_scalar_field2D <<< added test: test_downsample_scalar_field_2d
downsample_displacement_field2D <<< added test: test_downsample_displacement_field_2d
get_displacement_range <<< removed: no longer needed since we now operate in physical space
warp_volume_affine <<< added test: test_affine_warping_3d()
warp_volume_affine_nn <<< added test: test_affine_warping_3d()
warp_image_affine <<< added test: test_affine_warping_2d()
warp_image_affine_nn <<< added test: test_affine_warping_2d()
create_linear_displacement_field_2d <<< removed: no longer needed since harmonic fields provide stronger test cases


@matthew-brett

Member

matthew-brett commented Aug 26, 2014

Thanks for this.

Yes, we do seem to have timed out on the Cython automated coverage.
Sorry, I'm in Cuba now, so I can't work on that very easily.

I will have a look at the PR, and get back to you in a couple of days.

@Garyfallidis

Member

Garyfallidis commented Aug 26, 2014

Cool, thx! Say hi to Valia.

@omarocegueda

Contributor

omarocegueda commented Aug 27, 2014

Sounds good, Matthew! Thank you very much! =)


@matthew-brett

Member

matthew-brett commented Sep 1, 2014

Hi Omar - sorry to be slow.

Also - sorry - this is going to be all in one lump - I wrote this review on my
laptop offline, now posting to a slow internet connection.

Thanks for adding the tests that you have, that is very helpful.

My hope for the tests is that if someone else edits your code so it returns the wrong result, the tests will catch that incorrect result. This is important because it means the tests have to cover the case of different input arguments - otherwise the code may only be confirmed to work for a single set of input arguments.

I first looked through the cdefs. I found these:

solve2DSymmetricPositiveDefiniteSystem
solve_3d_semi_positive_definite

I have a vague memory of worrying about these names being inconsistent; I am sorry if I forgot your response. Maybe:

solve_2d_symmetric_positive_definite

instead. I know these functions are small, but they should have tests, I
suggest making them 'cpdef' and adding some small tests. The other cdef
functions come up below.
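
For instance, a pure-Python oracle for the 2x2 case could be checked against numpy (a sketch; the helper name and the three-entry storage convention for the symmetric matrix are assumptions, not dipy's actual API):

import numpy as np
from numpy.testing import assert_array_almost_equal

def solve_2d_spd_reference(g, y):
    # Closed-form solve of [[g0, g1], [g1, g2]] x = y.
    # Pure-Python oracle to compare a cpdef'ed Cython solver against.
    g0, g1, g2 = g
    den = g0 * g2 - g1 * g1  # determinant, positive for SPD input
    x0 = (g2 * y[0] - g1 * y[1]) / den
    x1 = (g0 * y[1] - g1 * y[0]) / den
    return np.array([x0, x1])

g = (2.0, 0.3, 1.5)
y = np.array([0.7, -1.1])
A = np.array([[g[0], g[1]], [g[1], g[2]]])
assert_array_almost_equal(solve_2d_spd_reference(g, y), np.linalg.solve(A, y))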

I then looked at the def and cpdef functions in the pyx files. I used the
script appended at the end to analyze the files, it's very quick and dirty.

I think these functions still don't have any tests - is that right?

compute_residual_displacement_field_ssd_2d
iterate_residual_displacement_field_ssd_2d
compute_residual_displacement_field_ssd_3d
iterate_residual_displacement_field_ssd_3d
simplify_warp_function_2d
simplify_warp_function_3d

Then, for most of the other functions, there is only one call in the tests, as
far as I can see.

These guys have only 1 call in the tests:

crosscorr.compute_cc_backward_step_2d
crosscorr.compute_cc_backward_step_3d
crosscorr.compute_cc_forward_step_2d
crosscorr.compute_cc_forward_step_3d

Having a look at these now, I see the comment in the docstrings:

Currently, the gradient of the static image is not being used, but some
authors suggest that symmetrizing the gradient by including both, the moving
and static gradients may improve the registration quality. We are leaving 
this parameters as a placeholder for future investigation

First - I don't think it's a good idea to write a function that intentionally
ignores one of its input arguments. I suggest removing the relevant input
argument and putting in a note that the input could be included in another
version of the function. If we do get round to writing something based on the
relevant gradient, then we can write a new function, and then maybe pass the
relevant function into the registration as needed. Otherwise, if we do write
a new version, then we'll have the same function name, and argument list,
performing two different algorithms depending on the code version. If passing
the gradient helps the API somehow, then you could make a 'strategy' object
that encapsulates the CC call : http://en.wikipedia.org/wiki/Strategy_pattern
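
To make the idea concrete, here is a minimal sketch of such a strategy object (all names are hypothetical, just to illustrate the pattern):

import numpy as np

class MovingGradientStep:
    # Current behavior: the CC step uses only the moving gradient.
    def gradient(self, grad_static, grad_moving):
        return grad_moving

class SymmetrizedStep:
    # Hypothetical future variant: average both gradients.
    def gradient(self, grad_static, grad_moving):
        return 0.5 * (grad_static + grad_moving)

def cc_step(grad_static, grad_moving, strategy):
    g = strategy.gradient(grad_static, grad_moving)
    return g  # ...the rest of the CC step would be computed from g

gs = np.ones((4, 4, 2))
gm = 2 * np.ones((4, 4, 2))
assert np.all(cc_step(gs, gm, MovingGradientStep()) == 2.0)
assert np.all(cc_step(gs, gm, SymmetrizedStep()) == 1.5)

This way a new gradient-handling variant gets its own class rather than silently changing the behavior of an existing function with the same name and argument list.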

Second - I think the docstrings are not quite right - I think you're ignoring
the grad_moving for the forward functions, and grad_static for the
backward functions, but the note refers to grad_static for both types of
functions.

These two also have only one test call each:

crosscorr.precompute_cc_factors_2d
crosscorr.precompute_cc_factors_3d

They are - nicely - being tested against their respective _test functions,
but only for one set of arguments. Suggest iterating? Something like:

import numpy as np
from numpy.testing import assert_array_almost_equal
import dipy.align.crosscorr as cc
from dipy.align import floating  # assuming floating (float32) is exposed here

def test_precompute_cc_factors_2d():
    a = np.array(range(20*20), dtype=floating).reshape(20, 20)
    b = np.array(range(20*20)[::-1], dtype=floating).reshape(20, 20)
    for radius in (1, 3, 6):
        factors = cc.precompute_cc_factors_2d(a, b, radius)
        expected = cc.precompute_cc_factors_2d_test(a, b, radius)
        assert_array_almost_equal(factors, expected)

Here are some more with only one call each in the tests:

expectmax.compute_em_demons_step_2d
expectmax.compute_em_demons_step_3d

These tests look pretty substantial though, to my untrained eye. I think you
haven't tested the effect of different sigma_sq_x - is that the case? Have
you tested all of the isinf(sigma_sq_i), if(sigma_sq_i == 0) and nrm2 == 0 branches in the code?

By the way - I see you usually pass output arrays into Cython to be filled or
modified in-place. I see that you've elsewhere used the numpy pattern of
having an optional input parameter out=, which is used for the output if
given, otherwise the output array gets created in the function. Maybe this is
worth considering in other functions that might be used directly?

expectmax.compute_masked_class_stats_2d
expectmax.compute_masked_class_stats_3d

Again single calls in tests, but call looks good. Are you always checking
both branches in if(mask[k, i, j] != 0) and if(counts[i] > 1):?

expectmax.quantize_positive_2d
expectmax.quantize_positive_3d

Tests look pretty good for these guys.

For num_levels - is it right that the result is an image that has
num_levels discrete values if there is a 0 in the image, otherwise it has
num_levels-1 levels? It makes more sense to me to have num_levels be the
number of levels created in the routine, and therefore not including zero.
You might want to check the None, None, None return case for num_levels in (1, 0). Should the routine return None, None, None if the image is all
zeros?

sumsqdiff.compute_energy_ssd_2d
sumsqdiff.compute_energy_ssd_3d

I noticed this comment in test_sumsqdiff.py:

# Note: this should include the energy corresponding to the 
# regularization term, but it is discarded in ANTS (they just
# consider the data term, which is not the objective function
# being optimized). This test case should be updated after
# further investigation

In the code, I saw this docstring note:

Currently, this function only computes the SSD, but it is a special case
of the EM formulation. We are leaving the extra parameters as placeholders
for future generalization to the EM metric energy computation

and sure enough, these functions ignore the last 4 of the 5 input parameters.
Same comment as above - if these really are just computing on one input
parameter, then it seems better to me to name the functions accordingly, to
make that clear to someone coming after thinking of implementing a more
complete algorithm. Just as a matter of interest - is this a bug in ANTS do
you think?

sumsqdiff.compute_ssd_demons_step_2d
sumsqdiff.compute_ssd_demons_step_3d

One call as test, but the test looks reasonable - how about other
sigma_sq_x parameters, perhaps in a for loop? Have you tested the if den <1e-9: branch in the code?

vector_fields.compose_vector_fields_2d
vector_fields.compose_vector_fields_3d

Looking at vector_fields.compose_vector_fields_2d - this calls
_compose_vector_fields_2d which is a reasonable sized function - maybe 32
lines or so - with 5 input arguments. For example, I think you are only
testing time_scaling == 1; have you tested the non-overlapping case? Same
situation for 3D and _compose_vector_fields_3d, I believe.

These also have only one call:

vector_fields.downsample_displacement_field_2d
vector_fields.downsample_displacement_field_3d
vector_fields.downsample_scalar_field_2d
vector_fields.downsample_scalar_field_3d

You might want to also check non-even length image dimensions. For lines like
cdef int ns = field.shape[0] - don't these have to be npy_intp?

vector_fields.invert_vector_field_fixed_point_2d
vector_fields.invert_vector_field_fixed_point_3d

You might want to check affines that aren't identity for these functions.
Aren't the spacing values implied by the inverse of the w_to_img
affine via np.sqrt(np.sum(np.linalg.inv(w_to_img)[:3, :3] ** 2, axis=0))?
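
A minimal numpy check of that relation, using a diagonal affine with known spacings:

import numpy as np

spacing = np.array([1.5, 2.0, 2.5])
img_to_w = np.diag(list(spacing) + [1.0])   # image-to-world affine
w_to_img = np.linalg.inv(img_to_w)          # world-to-image affine
implied = np.sqrt(np.sum(np.linalg.inv(w_to_img)[:3, :3] ** 2, axis=0))
assert np.allclose(implied, spacing)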

vector_fields.resample_displacement_field_2d
vector_fields.resample_displacement_field_3d

These last two are fairly small functions, but they call these otherwise
untested cdefs:

cdef inline int interpolate_vector_bilinear
cdef inline int interpolate_scalar_bilinear
cdef inline int interpolate_scalar_nn_2d
cdef inline int interpolate_scalar_nn_3d
cdef inline int interpolate_scalar_trilinear
cdef inline int interpolate_vector_trilinear

which do have a fair amount of code. I think it would be worth expanding the
tests for the resample_displacement_field functions, and / or making the
cdef into cpdef and testing the underlying routines directly for
correctness. Am I right in thinking that you can do this by using say
scipy.ndimage to resample each x, y, z field to test against? I can imagine
other people using these functions too, another argument for making them
cpdef.
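
For instance, a per-component reference resampler built on scipy.ndimage could serve as the oracle (a sketch; I'm assuming output index times factor gives the input sampling coordinate, matching the factor vectors in the current tests):

import numpy as np
from scipy.ndimage import map_coordinates

def resample_field_2d_reference(field, factors, out_shape):
    # Resample each component of a 2D displacement field separately.
    out = np.zeros(tuple(out_shape) + (2,), dtype=np.float64)
    grid = np.indices(out_shape).astype(np.float64)
    coords = [grid[0] * factors[0], grid[1] * factors[1]]
    for c in range(2):
        out[..., c] = map_coordinates(field[..., c].astype(np.float64),
                                      coords, order=1)
    return out

field = np.random.normal(size=(10, 12, 2))
resampled = resample_field_2d_reference(field, (0.5, 0.7), (19, 17))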

The cdef names are a bit confusing, maybe interpolate_vector_2d etc
would be more obvious?

In the tests, I notice the values in your factor vector are all the same
(e.g. [0.5, 0.5, 0.5]). Consider using other factor values that are not
all equal?

These also have only one test call:

vector_fields.warp_2d_affine
vector_fields.warp_2d_affine_nn
vector_fields.warp_3d_affine
vector_fields.warp_3d_affine_nn

Can you compare these to the result of calling
scipy.ndimage.map_coordinates? Sorry if that's a dumb question. Consider
renaming refShape to ref_shape to conform to standard PEP8 / dipy naming.
Test affine=None case?
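
Something along these lines might work as the scipy comparison (a sketch; it assumes the affine maps output-grid indices to input-image indices, which should be checked against the dipy convention):

import numpy as np
from scipy.ndimage import map_coordinates

def warp_2d_affine_reference(image, ref_shape, affine):
    # Map each output grid index through the affine, then interpolate.
    grid = np.indices(ref_shape).astype(np.float64)        # (2, h, w)
    hom = np.concatenate([grid, np.ones((1,) + ref_shape)], axis=0)
    coords = np.tensordot(affine[:2], hom, axes=(1, 0))    # (2, h, w)
    return map_coordinates(image, coords, order=1)         # linear warp

image = np.random.normal(size=(15, 17))
affine = np.array([[0.9, 0.1, 1.0],
                   [-0.1, 1.1, -2.0],
                   [0.0, 0.0, 1.0]])
warped = warp_2d_affine_reference(image, (15, 17), affine)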

These have two calls in tests each:

vector_fields.warp_2d
vector_fields.warp_2d_nn
vector_fields.warp_3d
vector_fields.warp_3d_nn

For these - can you test against scipy again? It looks as though your linear
and NN resamplings give the same answer - maybe you want to test something
where they are expected to give different answers. Test the affine_idx_in != None case? Test the sampling_shape != None case?

These functions also have a couple of calls each:

create_circle
create_sphere

For create_circle and create_sphere, I think you are calling these to test
other functions, so you are not directly testing them but assuming their output.
Maybe test the output.

vector_fields.reorient_vector_field_2d
vector_fields.reorient_vector_field_3d

I realized when looking at these functions that they are ignoring the
translation part of the affine, maybe you could make that more explicit in the
docstring for someone being slow like me?

Here are some extra tests I wrote for those functions:

import numpy as np

from dipy.align.vector_fields import (reorient_vector_field_2d,
                                      reorient_vector_field_3d)

from nibabel.affines import apply_affine, from_matvec

from numpy.testing import assert_almost_equal


def test_reorient_vector_field():
    # Test reorienting vector field
    for n_dims, func in ((2, reorient_vector_field_2d),
                         (3, reorient_vector_field_3d)):
        size = [20, 30, 40][:n_dims] + [n_dims]
        arr = np.random.normal(size=size)
        arr_32 = arr.astype(np.float32)
        affine = from_matvec(np.random.normal(size=(n_dims, n_dims)),
                             np.zeros(n_dims))
        func(arr_32, affine)
        assert_almost_equal(arr_32, apply_affine(affine, arr), 6)
        # Reorient reorients without translation
        trans = np.arange(n_dims) + 2
        affine[:-1, -1] = trans
        arr_32 = arr.astype(np.float32)
        func(arr_32, affine)
        assert_almost_equal(arr_32, apply_affine(affine, arr) - trans, 6)

For these guys:

vector_fields.create_harmonic_fields_2d
vector_fields.create_harmonic_fields_3d

You use these routines a few times, but I don't think you directly check the
returned result; you only use the result as further input to the tests. How
about some tests that they return the expected result, even if it's just the
same algorithm in a slow version? Do these functions need to be in Cython, in
fact - wouldn't they be better in Python? Some more explanation of the input
arguments in the docstrings would also help.

Same idea for:

vector_fields.create_random_displacement_2d
vector_fields.create_random_displacement_3d

(I think you use these for testing, but don't test their output directly)
(might be wrong though).
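
Even keeping them in Cython, the outputs are easy to sanity-check directly with a couple of lines of numpy (a sketch with illustrative bounds; the actual draw range would need to be read off from the helpers' code):

import numpy as np

shape = (12, 14)
# A pure-numpy stand-in for a random 2D displacement field:
field = np.random.uniform(-1.0, 1.0, size=shape + (2,)).astype(np.float32)
assert field.shape == shape + (2,)
assert np.all(np.abs(field) <= 1.0)  # displacements stay in the drawn range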

I noticed some stray tabs in test_imwarp.py docstrings - best to replace
these with spaces, they can be very confusing.

There's a typo on line 219 of test_sumsqdiff.py :
test_compute_energy_ssd_2s() should be test_compute_energy_ssd_2d().

Do you have pyflakes installed in your editor? It's a big help catching
these kinds of errors.

Here's the hackish script I used to analyze the function calls:

#!/usr/bin/env python
""" Print list of all defs, cpdefs, cdefs in pyx files """
from __future__ import division, print_function

DESCRIP = " Print list of all defs, cpdefs, cdefs in pyx files "
EPILOG = """ """

import os
import re
from os.path import join as pjoin, basename, splitext
from argparse import ArgumentParser, RawDescriptionHelpFormatter


def find_pyxes(root):
    pyxes = []
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d != 'build']
        for fbase in filenames:
            if not fbase.endswith('.pyx'):
                continue
            pyxes.append(pjoin(dirpath, fbase))
    return pyxes


def find_tests(root):
    test_files = []
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d != 'build']
        for fbase in filenames:
            if not 'test' in fbase:
                continue
            test_files.append(pjoin(dirpath, fbase))
    return test_files


DEF_RE = re.compile(r'\s*(def|cpdef)\s+([a-zA-Z0-9]+\s+)*([a-zA-Z0-9_]*)\(')


def find_defs(pyx_fname):
    pyx_defs = {}
    with open(pyx_fname, 'rt') as fobj:
        lines = fobj.readlines()
    for no, line in enumerate(lines):
        def_match = DEF_RE.match(line)
        if def_match is None:
            continue
        def_type, ret_type, def_name = def_match.groups()
        assert def_name not in pyx_defs # probably will be so
        pyx_defs[def_name] = dict(
            type = def_type,
            line = line.strip(),
            line_no = no)
    return pyx_defs


def fuse_pyx_defs(pyx_defs, pyx_fnames):
    fused_defs = {}
    for pyx_fname, pyx_def in zip(pyx_fnames, pyx_defs):
        for name in pyx_def:
            assert not name in fused_defs # not usually the case
            fused_defs[name] = pyx_def[name]
            fused_defs[name]['pyxfile'] = pyx_fname
    return fused_defs


def find_defs_used(test_fname, pyx_defs):
    defs_used = {}
    with open(test_fname, 'rt') as fobj:
        lines = fobj.readlines()
    for name in pyx_defs:
        for no, line in enumerate(lines):
            if name in line and not ('test_' + name) in line:
                if name not in defs_used:
                    defs_used[name] = []
                defs_used[name].append((no, line.strip()))
    return defs_used


def fuse_defs_used(tests, tests_defs):
    """ Rearrange test defs by name """
    by_name = {}
    for test, test_def in zip(tests, tests_defs):
        for name in test_def:
            if name not in by_name:
                by_name[name] = []
            for tested_def in test_def[name]:
                line_no, line = tested_def
                by_name[name].append(
                    dict(test = test,
                         line_no = line_no,
                         line = line))
    return by_name


def print_fused_tests_defs(fused_tests_defs, fused_pyx_defs):
    for name in sorted(fused_tests_defs):
        fname = fused_pyx_defs[name]['pyxfile']
        print('{0}.{1}'.format(splitext(basename(fname))[0], name))
        for tester in fused_tests_defs[name]:
            print('    {}, {}, {}'.format(
                tester['test'],
                tester['line_no'],
                tester['line']))
    for name in fused_pyx_defs:
        if name not in fused_tests_defs:
            print(name, 'might be untested')


def main():
    parser = ArgumentParser(description=DESCRIP,
                            epilog=EPILOG,
                            formatter_class=RawDescriptionHelpFormatter)
    parser.add_argument('code_path',  type=str,
                        help='Code path for pyx files / tests')
    # parse the command line
    args = parser.parse_args()
    pyxes = find_pyxes(args.code_path)
    tests = find_tests(args.code_path)
    pyx_defs = [find_defs(pyx_fname) for pyx_fname in pyxes]
    fused_defs = fuse_pyx_defs(pyx_defs, pyxes)
    tests_defs = [find_defs_used(test, fused_defs) for test in tests]
    fused_tests_defs = fuse_defs_used(tests, tests_defs)
    print_fused_tests_defs(fused_tests_defs, fused_defs)


if __name__ == '__main__':
    main()
@omarocegueda

Contributor

omarocegueda commented Sep 1, 2014

Wow! Great review as always, Matthew! =D
Let me go through all your comments and make the appropriate
corrections/clarifications/extensions.
Thank you very much! =)


@matthew-brett

Member

matthew-brett commented Sep 2, 2014

Thanks - I'll try to be quick to re-review.

Here's a gist for the script to check instances of pyx function calls in tests : https://gist.github.com/matthew-brett/ea2c713be6edc2ab29ca

BF: accelerate registration and vector field inversion algorithms (this subtle detail in ANTS was responsible for the 1% Jaccard index difference between Dipy's module and ANTS); now we are able to closely replicate even ANTS' energy profile. I validated this change by running 306 registrations and verified that the results are closer to ANTS both in anatomical region overlap and CSF/WM/GM overlap. Updated regression tests accordingly.
@arokem

Member

arokem commented Sep 30, 2014

I just ran this on my mac as well (with python 3.4) and I don't see these
test failures either.

Can we run the build-bots on a specified branch or commit?

On Tue, Sep 30, 2014 at 2:05 PM, Omar Ocegueda notifications@github.com
wrote:

For the residual displacement field function tests - can you maybe put a TODO note or similar in a visible place like the module docstring to remind us that you're planning on doing this and what you thought needed doing?

I just added the missing tests. Unfortunately, the iteration result of the Gauss-Seidel solver for the large linear systems depends on the order in which the voxels are visited, so I had to replicate the current Cython implementation in Python, but using the numpy linear solver for the per-voxel small systems (instead of the fast Cython implementations). I hope these tests are acceptable for you guys.

Alright! I think I finished all the tests; the only remaining thing is to see what's going on with the Mac failures. I don't know how to replicate the failures; everything is working correctly on my friend's Mac. Any ideas?

Thanks! =)
-Omar.



@matthew-brett

Member

matthew-brett commented Oct 1, 2014

I wonder if you are all using MKL via Anaconda?

Could someone try:

virtualenv test-warp
. test-warp/bin/activate
pip install numpy scipy nibabel

And then run the tests in that virtualenv?

@arokem

Member

arokem commented Oct 1, 2014

That's not going to be that easy:

WARNING: using virtualenv with Anaconda is untested and not recommended.

We suggest using the conda command to create environments instead.

For more information about creating conda environments, please see:

     http://docs.continuum.io/conda/examples/create.html


@Garyfallidis

Member

Garyfallidis commented Oct 3, 2014

Any progress in the Mac world, @arokem, @omarocegueda, @matthew-brett? Apart from that, is there anything else left to be done for this PR?

@arokem

Member

arokem commented Oct 3, 2014

It seems that things work fine on an Anaconda install, and fail in a very
minor way otherwise. I'm inclined to recommend Anaconda to our Mac users
anyway (and that is what we do in our docs).


@Garyfallidis

Member

Garyfallidis commented Oct 3, 2014

Okay @arokem, we will also have a better idea from the OSX buildbots when this is merged.

@matthew-brett what do you think? Is there something else that needs to be done in this PR?

@arokem

Member

arokem commented Oct 3, 2014

The build-bots will fail in the same way that Matthew's mac has, unless we
set them up to grab anaconda. Which doesn't seem like a bad idea to me. I
have been experimenting with that on the travis bot as well (so far with
little success). Is there any way to run the build-bots on a particular
commit/branch?


@Garyfallidis

Member

Garyfallidis commented Oct 3, 2014

I am not sure but I think you can. Look here http://nipy.bic.berkeley.edu/builders/dipy-py2.7-osx-10.6
You can set the repository name and branch.

You will need to login first.

@omarocegueda

Contributor

omarocegueda commented Oct 3, 2014

I had the same problem as @arokem (the warning about using virtualenv with
Anaconda), so I removed Anaconda and am installing the dependencies
individually on the Mac. I am having some difficulties, though, because I
don't have root privileges. I will let you know as soon as I have some
progress.


@matthew-brett

Member

matthew-brett commented Oct 3, 2014

Surely you can just install Python from Python.org or Homebrew, then set
the path to that and do the install, while leaving Anaconda to its own merry
and lonely world?

@jchoude

Contributor

jchoude commented Oct 3, 2014

As an aside, concerning the virtualenv warning while using Anaconda, you
can use Anaconda's own version of this via the "conda create" command.
It's almost the same as a virtualenv, but it uses symlinks / hardlinks for
libs that are already fetched, instead of installing a copy in each virtual
env.

JC


@matthew-brett

Member

matthew-brett commented Oct 3, 2014

It's not very difficult to allow work-in-progress builds on the buildbots, but it will be hard to do from here in Cuba. I will have a look, but maybe we can track it down without...

@omarocegueda

Contributor

omarocegueda commented Oct 3, 2014

Thank you, JC! I tried that, but the tests still passed. So, I guess the
"special thing" that makes the tests pass in Anaconda and fail otherwise is
something done by default in the Anaconda distribution. I don't think MKL
is being used because, according to this:
http://docs.continuum.io/mkl-optimizations/index.html, it has to be
manually installed, and I didn't do that.


@jchoude

Contributor

jchoude commented Oct 3, 2014

Oh, yeah, I didn't mean to say this would overcome your limitation for the
MKL testing. I just wanted to remark on the conda create command.


@omarocegueda

Contributor

omarocegueda commented Oct 5, 2014

UPD: I just reproduced the test failures in Ubuntu by NOT installing Anaconda (I just wanted to confirm that it is not an OSX-specific failure).

UPD: the tests are now passing with and without Anaconda on Ubuntu and Windows; unfortunately, I no longer have access to a Mac to verify they pass on OSX without Anaconda (I guess they will still pass on OSX with Anaconda, since they were previously working). @matthew-brett, if you have a chance, could you please try?

In summary, what I did was to modify the test volumes so that the boundary slices were "empty" (zeros). The reason is that I observed that small numerical differences (less than 1e-9) caused relatively large differences in the energy profile: when we attempt to interpolate "outside" the volume, we return zero. Say the x interpolation coordinate was computed as -1e-10 on one platform and 1e-10 on another; if there are "white" (1.0) voxels at the boundary (as was the case), the interpolation result will be zero on the first platform (-1e-10 is outside the volume) and one on the other (1e-10 is still inside the volume and "surrounded" by white values). Therefore, the energy profile difference will be very large (1.0), and it will cause even larger differences later on (the gradient of one warped image will be very different from the other). As far as I know, this "threshold" behavior is standard in interpolation, but we may try a more stable procedure in the future (e.g., don't return zero abruptly, but make the value decrease to zero linearly).
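
Here is a tiny pure-Python illustration of the effect (the helper below is just a stand-in for the real interpolation code):

import numpy as np

def interp1d_zero_outside(image, x):
    # Linear interpolation that returns 0 outside [0, n-1],
    # mimicking the thresholding behavior described above.
    n = len(image)
    if x < 0 or x > n - 1:
        return 0.0
    i = int(np.floor(x))
    i1 = min(i + 1, n - 1)
    a = x - i
    return (1 - a) * image[i] + a * image[i1]

image = np.ones(5)  # "white" (1.0) voxels all the way to the boundary
# The same coordinate computed with a tiny platform-dependent difference:
print(interp1d_zero_outside(image, -1e-10))  # 0.0 -- treated as outside
print(interp1d_zero_outside(image, +1e-10))  # ~1.0 -- still inside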

omarocegueda added some commits Oct 6, 2014

TEST: using more stable input images for SSD and EM metrics in 3D (both steps). Since the invertible displacement field is in 3D, having non-zero values near the first and last slices makes the test unstable across platforms (small numerical differences near the boundary may make the energy profile very different). I simply increased the volume size and set zeros at the boundary slices.
TEST: forgot to remove the last level of the scale space in test_em_3d_demons. Added a couple more empty slices in test_ssd_3d_gauss_newton (the difference was about 1e-5, but with the extra slices we don't need to decrease the precision of the comparison).
@matthew-brett

Member

matthew-brett commented Oct 7, 2014

Great - thanks for the update...

@arokem

Member

arokem commented Oct 7, 2014

@matthew-brett - does this settle all the issues on this PR?

@matthew-brett

Member

matthew-brett commented Oct 7, 2014

Omar - can you open an issue for automating the cluster test in some standard place?

Otherwise - yes - I think it's ready to merge. Great work Omar - thanks very much for your patience.

@omarocegueda

Contributor

omarocegueda commented Oct 8, 2014

Awesome! Thank you Matthew! I just opened the issue =)

arokem added a commit that referenced this pull request Oct 8, 2014

Merge pull request #408 from omarocegueda/syn
Symmetric diffeomorphic non-linear registration

@arokem arokem merged commit 140e58f into nipy:master Oct 8, 2014

1 check passed

continuous-integration/travis-ci The Travis CI build passed
@arokem

Member

arokem commented Oct 8, 2014

Hooray!

@Garyfallidis

Member

Garyfallidis commented Oct 8, 2014

@omarocegueda you should celebrate today! The drinks are on me! Thank you so much brother! Let's start using this diffeomorphic beast! Many thx also to the reviewers for the thorough reviews.

Let's try smaller PRs from now on! But let's not forget "To infinity and beyond!!!"

@omarocegueda

Contributor

omarocegueda commented Oct 8, 2014

Thank you everyone! This means a lot to me =) Definitely, smaller PRs from
now on =D


@maurozucchelli

Contributor

maurozucchelli commented Oct 9, 2014

Congratulations to all the registration team! :D


@matthew-brett

Member

matthew-brett commented Oct 9, 2014

Great to see this merged, thanks again - no test failures on OSX on my machine at least.

While I was offline, I had some thoughts ...

I guess these boundary issues will come up often in practice.

For scipy.ndimage we can set the default to be something like 'reflect' for
interpolation at the boundary - that seems a reasonable choice for the
inferior slices of the brain image, which are almost always non-zero at the
boundary.
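
A quick sketch of the difference with scipy.ndimage (expected values are approximate; exact edge handling varies a little between scipy versions):

import numpy as np
from scipy.ndimage import map_coordinates

image = np.ones(5)
near_edge = np.array([[-1e-10]])  # a sample point just outside the volume

# Zero fill is discontinuous at the edge of an all-ones image:
print(map_coordinates(image, near_edge, order=1, mode='constant'))  # ~0.0
# 'reflect' keeps the interpolated value continuous across the edge:
print(map_coordinates(image, near_edge, order=1, mode='reflect'))   # ~1.0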

Is it worth adding an issue for the explicit mask array?

@omarocegueda

Contributor

omarocegueda commented Oct 10, 2014

Agreed! We should avoid that kind of unstable behavior across platforms as
much as possible. It is quite hard, though; even printing a double can
change its value:
http://apps.topcoder.com/forums/?module=Thread&threadID=514379&start=0&mc=3#569630.
So, whenever we use thresholds on floating point variables (like in "if
x<0:"), we are potentially introducing some unstable behavior across
platforms because of hardware limitations. Indeed, from the latest run on
the buildbots, the tests are failing on a 32-bit platform; I am
investigating. Regarding the boundaries, reflecting at the boundary sounds
good, I'll try that =).


@matthew-brett

Member

matthew-brett commented Oct 10, 2014

Sure - we certainly have to be careful of precision problems - but I often find that investigating reveals deeper problems that I haven't thought of - sadly.

What does ITK / ANTS do about the interpolation?
