
Adjust number of threads for SLR in Recobundles #1745

Merged
merged 5 commits into dipy:master on Mar 1, 2019

Conversation

frheault
Contributor

@frheault frheault commented Feb 21, 2019

Pass down the num_threads parameter and fix verbose.
No change in behaviour; the default is still to use all available CPUs.

Without this option, using RecoBundles in a pipeline is sub-optimal.
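The change described above can be sketched as follows. This is a hypothetical illustration, not dipy's actual code: the function names (`recognize`, `slr_align`) and the `None`-means-all-CPUs convention are assumptions made for the example, matching the PR's description of forwarding `num_threads` through the call layers.

```python
import os

# Hypothetical sketch: plumb a num_threads parameter down through call
# layers, defaulting to all available CPUs when the caller passes None.
def slr_align(bundle, num_threads=None):
    # None means "use every available CPU", matching the PR's default.
    effective = num_threads if num_threads is not None else os.cpu_count()
    return effective  # stand-in for the actual registration work

def recognize(bundle, num_threads=None):
    # The fix: forward the caller's choice instead of hard-coding it.
    return slr_align(bundle, num_threads=num_threads)

print(recognize("CST_left"))                 # all CPUs by default
print(recognize("CST_left", num_threads=1))  # pipeline-friendly single thread
```

In a pipeline that already parallelizes across subjects, pinning each RecoBundles call to one thread avoids oversubscribing the machine, which is the sub-optimality the description refers to.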

@pep8speaks

pep8speaks commented Feb 21, 2019

Hello @frheault, thank you for updating!

Line 98:47: E128 continuation line under-indented for visual indent
Line 99:47: E128 continuation line under-indented for visual indent
Line 100:47: E128 continuation line under-indented for visual indent
Line 101:47: E128 continuation line under-indented for visual indent

Comment last updated on February 23, 2019 at 20:58 UTC

@arokem
Contributor

arokem commented Feb 21, 2019

Looks great. Would you mind adding a test that exercises this functionality?

Thanks!

@codecov-io

codecov-io commented Feb 22, 2019

Codecov Report

❗ No coverage uploaded for pull request base (master@a115980).
The diff coverage is 100%.


@@            Coverage Diff            @@
##             master    #1745   +/-   ##
=========================================
  Coverage          ?   84.27%           
=========================================
  Files             ?      115           
  Lines             ?    13683           
  Branches          ?     2158           
=========================================
  Hits              ?    11531           
  Misses            ?     1649           
  Partials          ?      503
Impacted Files Coverage Δ
dipy/segment/bundles.py 91.47% <100%> (ø)

@Garyfallidis Garyfallidis changed the title Adjust number of thread for SLR in Recobundles Adjust number of threads for SLR in Recobundles Feb 22, 2019
@frheault
Contributor Author

frheault commented Feb 22, 2019

Looks great. Would you mind adding a test that exercises this functionality?

Thanks!

I think I made one that makes sense: it runs RB twice (once single-threaded and once multi-threaded), and the outputs are expected to be almost equal. The optimizer does not give identical results under multi-threading, but the streamlines are almost equal (to 4 decimals).
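The comparison style described here can be illustrated with plain numpy. This is not the actual test from the PR; the arrays and the jitter magnitude are made up to show how a "almost equal to 4 decimals" check behaves.

```python
import numpy as np
from numpy.testing import assert_array_almost_equal

# Illustrative only: comparing two runs that differ by tiny numerical
# noise, the way the new test compares single- vs multi-threaded output.
single = np.array([[1.0, 2.0, 3.0]])
multi = single + 1e-6  # jitter well below 1e-4, as multi-threading can add

# Passes because the arrays agree to 4 decimal places.
assert_array_almost_equal(single, multi, decimal=4)
```

`assert_array_almost_equal(..., decimal=4)` tolerates per-element differences up to about 1.5e-4, so optimizer-level numerical noise passes while a genuinely different result would fail.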


# check if the bundle is recognized correctly
# multi-threading prevents an exact match
for row in D:
Contributor

What type is D at this point? It's not an array?

Do you understand why multi-threading prevents an exact match? Is there any additional randomness introduced within the algorithm, beyond the control you have via rng?

Contributor

Hey @frheault : I'd love to be able to merge this in. Any thoughts on these questions?

Contributor Author

@frheault frheault Feb 28, 2019

(Sorry I misclicked the last time and the comment was pending, my mistake)

D is an array, I used the same test "format" as every other recobundle test. I didn't want to diverge too much from the existing code. That's why I used this verification style.

The randomness is introduced by the optimizer ('L_BFGS_B' or 'Powell') from dipy.core.optimize import Optimizer, which ultimately calls scipy.optimize's fmin_l_bfgs_b or fmin_powell; neither accepts a fixed initialization or seed.

rng only controls the shuffling and clustering in QBx, so the input to the optimizer is identical. The optimizers reach the same local minimum; the slight differences come from numerical error due to different initialization in each thread.
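A minimal sketch of the point above: `scipy.optimize.fmin_l_bfgs_b` exposes no seed or rng argument in its signature, so reproducibility can only come from the inputs themselves. The quadratic cost function below is an arbitrary example, not anything from dipy.

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

# Toy cost with a single minimum at x = [3, 3].
def cost(x):
    return np.sum((x - 3.0) ** 2)

# Note the call takes only the function, starting point, and gradient
# options; there is no seed parameter to fix the internals.
x_opt, f_opt, info = fmin_l_bfgs_b(cost, x0=np.zeros(2), approx_grad=True)
print(np.round(x_opt, 4))  # converges near [3. 3.]
```

Since the optimizer cannot be seeded, comparing runs to a few decimal places (as the test does) is about as strict as the check can reasonably be.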

Contributor

Gotcha. That all makes sense to me. The fact that scipy.optimize has randomization in it is very unfortunate, but out of our control here.

This all looks fine to me. I'm +1 for the merge here.

Does anyone else want to take a look? If so, please do so in the next couple of days. I'll merge this mid next week, if no one complains before that.

Contributor

LGTM. I can merge if you want, or you can go ahead.

Thanks @frheault

@arokem
Contributor

arokem commented Mar 1, 2019 via email

@jchoude jchoude merged commit 73260b6 into dipy:master Mar 1, 2019
@jchoude
Contributor

jchoude commented Mar 1, 2019

It's a go. Thanks @frheault and @arokem

5 participants