Enhance SLR with applying series of transformations and fix optimize bug for parameter missing in old scipy versions #470

Closed
Garyfallidis wants to merge 13 commits into master from Garyfallidis:enhance_slr

Conversation

@Garyfallidis
Member

Garyfallidis commented Nov 20, 2014

So, now you can start with rigid, then do similarity, and then affine, like you would do in ANTs. You just give each optimization the matrix from the previous one. @omarocegueda this implements what we discussed before. It may give us some ideas for the linear image-based registration. GGT!
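
For reference, here is a minimal sketch of such a cascade on a toy bundle. It assumes the StreamlineLinearRegistration API with the new mat keyword added in this PR; the transform names passed as x0 and the toy data are illustrative only, not taken from the PR itself.

import numpy as np
from dipy.align.streamlinear import StreamlineLinearRegistration

# Toy bundle: the moving streamlines are just a translated copy of the static ones.
static = [np.random.rand(20, 3) * 10 for _ in range(5)]
moving = [s + np.array([2.0, 3.0, 4.0]) for s in static]

# Rigid -> similarity -> affine, each stage starting from the previous matrix.
rigid_map = StreamlineLinearRegistration(x0='rigid').optimize(static, moving)
sim_map = StreamlineLinearRegistration(x0='similarity').optimize(static, moving, mat=rigid_map.matrix)
aff_map = StreamlineLinearRegistration(x0='affine').optimize(static, moving, mat=sim_map.matrix)

moved = aff_map.transform(moving)

Each optimize() call refines the alignment found by the stage before it, which is the ANTs-style behaviour described above.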

@Garyfallidis Garyfallidis changed the title from Enhance SLR with applying series of transformations to WIP: Enhance SLR with applying series of transformations Nov 20, 2014

@arokem

Member

arokem commented Nov 20, 2014

Great addition to these methods. I think that it would be useful to have
some examples of a pipeline mimicking the ANTS pipeline. What do you think?


@Garyfallidis

Member

Garyfallidis commented Nov 20, 2014

I agree. Even more than that, I think that right after the release we need to have a discussion about how to make these tools easy to use for people who are not experts in Python. I have some ideas, but I want your opinion. I will start a discussion on the list about it. GGT!!!

@arokem

Member

arokem commented Nov 20, 2014

OK - ping me when you have a short example here, and I will give the whole
thing a review at the same time.


@Garyfallidis

Member

Garyfallidis commented Nov 20, 2014

Look at the cascade test in test_optimize. The idea is clear.

@Garyfallidis

Member

Garyfallidis commented Nov 20, 2014

I am not planning to write an example right now. First I need @jchoude to upload some new datasets. Then I will have two detailed examples, for both bundle-based and whole-brain registration. But once this PR starts working we have to merge it before the release, because it resolves a bug caused by the different L-BFGS-B versions in scipy 0.11, 0.12, and 0.13.
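
For context, here is a minimal sketch of the kind of version guard involved; the specific keyword shown (maxiter for L-BFGS-B) is an assumption for illustration, not a quote of the dipy fix.

import scipy
from distutils.version import LooseVersion
from scipy.optimize import fmin_l_bfgs_b

def rosenbrock(x):
    # Simple smooth test objective.
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

opts = {'maxfun': 1000}
if LooseVersion(scipy.__version__) >= LooseVersion('0.12'):
    # Only pass the newer keyword when the installed scipy actually supports it.
    opts['maxiter'] = 100

xopt, fval, info = fmin_l_bfgs_b(rosenbrock, [0.0, 0.0], approx_grad=True, **opts)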

@arokem

Member

arokem commented Nov 20, 2014

Gotcha - will take a look, but probably not before late Friday.


@Garyfallidis

Member

Garyfallidis commented Nov 20, 2014

Travis is ill. The build is stalled. Will try to push an empty commit and rebuild.

@Garyfallidis Garyfallidis changed the title from WIP: Enhance SLR with applying series of transformations to Enhance SLR with applying series of transformations Nov 20, 2014

mat : array
Transformation (4, 4) matrix to start the registration. ``mat``
is applied to moving. Default value None which means that initial
transformation will be generated by shiftin the centers of moving

@arokem

arokem Nov 20, 2014

Member

typo: shifting

@arokem

arokem Nov 20, 2014

Member

This seems like a strange default. Isn't it better to have the default be np.eye(4)? That is, the default is not to pre-apply any transformation?

@Garyfallidis

Garyfallidis Nov 20, 2014

Member

No, eye(4) would be a very bad strategy. The standard idea in registration is that you first find the centers of mass of the two objects that you want to register and then move them to the origin. That way you make sure that the rotations during the registration make sense. Is that clear?
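
A short sketch of that default initialization (just to illustrate the idea, not the code in this PR): compute each bundle's center of mass and build 4x4 translations that bring the bundles together, so that the rotations explored by the optimizer act about a meaningful center.

import numpy as np

def com_shift(streamlines):
    # Stack all points of the bundle and translate its center of mass to the origin.
    points = np.concatenate([np.asarray(s) for s in streamlines], axis=0)
    mat = np.eye(4)
    mat[:3, 3] = -points.mean(axis=0)
    return mat

# Hypothetical composition: map moving's center of mass onto static's before optimizing.
# initial_mat = np.dot(np.linalg.inv(com_shift(static)), com_shift(moving))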

@Garyfallidis

Member

Garyfallidis commented Nov 20, 2014

@arokem do you know why Travis is misbehaving? Any ideas?

@arokem

Member

arokem commented Nov 20, 2014

I don't know why, but I have noticed that it has slowed down recently. Our test suite also keeps getting longer, so the probability of failure due to time-out increases.


@arokem

arokem commented on 0a05eab Nov 21, 2014

This commit should probably be a separate PR

@arokem

Member

arokem commented Nov 21, 2014

As I just commented, the fixes for different versions of L-BFGS-B seem like an issue unrelated to the title of this PR. Please rebase to submit this as a separate PR. It shouldn't be too hard, considering that the action is in different files altogether.


@stefanv

Contributor

stefanv commented Nov 21, 2014

Note that you can retrigger the Travis build by being logged in and
clicking the rebuild icon.

@Garyfallidis Garyfallidis changed the title from Enhance SLR with applying series of transformations to Enhance SLR with applying series of transformations and fix optimize bug for parameter missing in old scipy versions Nov 21, 2014

@Garyfallidis

Member

Garyfallidis commented Nov 21, 2014

@arokem this PR has only 4 files. I see no point in creating a new PR. But I take the point that the title was not giving the information about the fix in L-BFGS-B, so I updated the title.

@stefanv yeah I played a bit yesterday with rebuilding Travis. Thx for the tip.

@arokem

Member

arokem commented Nov 21, 2014

The main point of creating another PR is to lighten the review burden by dividing up the work. One of our main bottlenecks in development currently is how fast we can get our PRs reviewed and merged. If we can split the review into small, manageable, bite-sized chunks, we have a chance of getting more people to participate.

It's just how we should do things, so I didn't think I should let it slide.


@Garyfallidis

Member

Garyfallidis commented Nov 21, 2014

Okay @arokem. Will make a separate PR. But first I need some help to figure out why Travis is stalling.
Travis is giving me the message that it hasn't received any stdout/stderr for the last 10 minutes, which sounds a bit strange. Looking at Stefan's latest PR, Travis took only 15-16 minutes to run, but this PR takes more than 20. @stefanv any ideas? Is there a way to increase how long Travis waits for nosetests to finish?

@arokem

Member

arokem commented Nov 21, 2014

Cool. I'm also worried about the Travis thing - as our test-suite gets
larger, we are likely to hit this more and more (says the person who in his
last PR has a test that takes a few seconds to run...).

I just retriggered Travis on this as well, to see how things are going.


@arokem

Member

arokem commented Nov 21, 2014

It seems likely that it's something about this PR. While this was running
and stalling, #472 ran through with no particular issues.


@arokem

Member

arokem commented Nov 21, 2014

Some more verbose test output might be helpful here:
#473


@Garyfallidis

Member

Garyfallidis commented Nov 21, 2014

Yes, Travis stalls in test_cascade... Hmm, I cannot replicate this problem with virtualenv. But I am definitely not giving up yet.

@arokem

Member

arokem commented Nov 21, 2014

This test takes a while on my machine and then fails with:


======================================================================
FAIL: dipy.align.tests.test_streamlinear.test_cascade_of_optimizations
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/arokem/anaconda3/envs/py2/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/Users/arokem/source/dipy/dipy/align/tests/test_streamlinear.py", line 479, in test_cascade_of_optimizations
    assert_(slm3.iterations <= slm4.iterations)
  File "/Users/arokem/anaconda3/envs/py2/lib/python2.7/site-packages/numpy/testing/utils.py", line 53, in assert_
    raise AssertionError(smsg)
AssertionError
----------------------------------------------------------------------
Ran 1 test in 36.705s

Upon debugging:

(Pdb) slm3.iterations
401
(Pdb) slm4.iterations
317


@arokem

Member

arokem commented Nov 21, 2014

Here's my 'pip list' from this environment, in case that's helpful in diagnosing:

backports.ssl-match-hostname (3.4.0.2)
cov-core (1.14.0)
coverage (3.7.1)
Cython (0.18)
ipython (2.3.1)
Jinja2 (2.7.3)
jsonschema (2.4.0)
MarkupSafe (0.23)
matplotlib (1.4.2)
mistune (0.4.1)
nibabel (1.3.0)
nose (1.3.4)
nose-cov (1.6)
numpy (1.9.1)
pip (1.5.6)
Pygments (2.0.1)
pyparsing (2.0.1)
python-dateutil (1.5)
pytz (2014.9)
pyzmq (14.4.1)
scikit-learn (0.15.2)
scipy (0.14.0)
setuptools (7.0)
six (1.8.0)
tornado (4.0.2)
VTK (5.10.1)
wsgiref (0.1.2)

@Garyfallidis

Member

Garyfallidis commented Nov 21, 2014

Cool, this is very helpful.

@Garyfallidis

Member

Garyfallidis commented Nov 21, 2014

@arokem now I don't get any output from Travis. Can you press Details and see if it works for you? Maybe Travis is down this time?

@Garyfallidis

Member

Garyfallidis commented Nov 21, 2014

Closing this PR. Will open two separate ones as Ariel suggested.

@arokem

Member

arokem commented Nov 21, 2014

OK - I will wait and take a look. It would be a good idea to merge #473 and rebase these PRs on top of it.
