Estimate S0 from data (DTI) #1036
Conversation
Needs more thinking/testing.
This is now ready for a review. I have also profiled this with HCP data. The memory usage and time it takes with these changes is essentially identical to the performance of the code we have on master.
I stand corrected: I reran the benchmark and it (not too surprisingly) takes longer with the changes. Fitting the model for the entire WM of a single HCP brain takes 1.5 minutes without this change, and about 4.5 minutes with it. So it's a 3-fold slowdown, but it's not like it becomes prohibitively slow. [EDIT]: Still no change in memory consumption.
One more thought: if people are worried about the extra time, we can make this an option, rather than the default. Anyone have thoughts on that?
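For illustration, the opt-in version might look something like the sketch below. Everything here except `model.fit` is hypothetical: the `fit_dti` wrapper, the `estimate_s0` flag, and the `compute_s0_hat` helper are made-up names standing in for whatever this PR actually computes.

```python
# Hypothetical sketch of making S0 estimation opt-in rather than the
# default. `fit_dti`, `estimate_s0`, and `compute_s0_hat` are illustrative
# names, not dipy's actual API.
def fit_dti(model, data, mask=None, estimate_s0=False):
    """Fit the diffusion tensor; compute S0_hat only when asked."""
    fit = model.fit(data, mask=mask)
    if estimate_s0:
        # Only pay the extra computation when the caller requests it,
        # so the default code path keeps its current timing.
        fit.S0_hat = compute_s0_hat(fit, data)  # hypothetical helper
    return fit
```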
Hi @arokem. Can you explain what creates the change in execution time? And why is that expected?
OK - turns out that my timing might have been slightly off. I ran a line profiler on this to see where the added time comes from. Here are the results on master:

[line profiler output not shown]

And with this branch:

[line profiler output not shown]
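For reference, the usual way to get such per-line timings with the `line_profiler` package is to decorate the function under study with `@profile` and run the script through `kernprof`. The script below is a self-contained toy stand-in, not the actual dipy code that was profiled here.

```python
# profile_toy.py -- toy stand-in for line-by-line profiling with kernprof.
# Run with:  kernprof -l -v profile_toy.py
# (`profile` is injected into builtins by kernprof, so it is not imported.)
import numpy as np

@profile
def fit_voxels(signals, bvals):
    # Per-voxel log-linear fit, standing in for the real model fit.
    A = np.column_stack([-bvals, np.ones_like(bvals)])
    log_s = np.log(signals)
    coeffs, *_ = np.linalg.lstsq(A, log_s.T, rcond=None)
    return coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bvals = np.array([0.0, 1000.0, 1000.0, 2000.0])
    signals = rng.uniform(100.0, 1000.0, size=(5000, bvals.size))
    fit_voxels(signals, bvals)
```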
So it looks like it's adding about 25% to the computation time (this is a whole-brain data-set, by the way -- the Stanford HARDI set), and this time is spent mostly in computing the ADC and in the computation of S0_hat, on lines 817-819.
Any ideas? I don't see how these calculations could be sped up, but I might be missing something.
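For context on what such an S0 estimate involves, here is a minimal single-voxel sketch of the underlying calculation, under the usual monoexponential assumption S(b) = S0 * exp(-b * ADC): taking logs turns it into a linear least-squares problem whose intercept gives log(S0). The numbers are made up, and this is a sketch of the general technique, not the dipy implementation.

```python
import numpy as np

# Single-voxel sketch: S(b) = S0 * exp(-b * ADC), so
# log S(b) = log(S0) - b * ADC.  A linear least-squares fit of the
# log-signal against b yields ADC (slope) and S0 (exp of intercept).
bvals = np.array([0.0, 1000.0, 1000.0, 2000.0, 2000.0])  # s/mm^2, made up
signal = np.array([1000.0, 420.0, 410.0, 180.0, 175.0])  # made-up data

A = np.column_stack([-bvals, np.ones_like(bvals)])  # columns: [-b, 1]
coeffs, *_ = np.linalg.lstsq(A, np.log(signal), rcond=None)
adc, log_s0 = coeffs
S0_hat = np.exp(log_s0)
print(f"ADC = {adc:.2e} mm^2/s, S0_hat = {S0_hat:.1f}")
```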
@Garyfallidis: Any further thoughts about this?
There is something that is not clear. You said that your timing was slightly off, but earlier you said there was a 3-fold time increase, and now that it is a 25% increase. That is a large difference. Can you please explain? Are we now down from a 3-fold increase to a 25% one?
The two things that were different: [...]
I can set up a program to run timing on these things with the standard lib.
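A minimal standard-library timing harness could look like the following. The `fit_model` function here is a cheap numpy stand-in so the snippet runs anywhere; a real benchmark would substitute the actual dipy model fit.

```python
import timeit
import numpy as np

def fit_model(data):
    # Cheap numpy stand-in for the real model fit (e.g. a dipy
    # TensorModel fit); keeps this snippet runnable anywhere.
    return np.linalg.svd(data, compute_uv=False)

data = np.random.default_rng(0).random((200, 200))
n_runs = 5
total = timeit.timeit(lambda: fit_model(data), number=n_runs)
print(f"mean time over {n_runs} runs: {total / n_runs:.4f} s")
```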
Hi, this is interesting. What we need to understand is what makes the HCP datasets slower. So, we need to run the same profiling strategy for both datasets.
Yep. I will get back to this, and do something more comprehensive, but not [...]
Sure, np.
Hi @arokem, I would like to merge this. But first I need the analysis of what creates the slowdown, and hopefully a fix that resolves the problem. Are you on it? Or is this PR replaced by another one?
Hey Eleftherios - the problem is that, even in the best case, there is no [...] So - on further thought, and after some conversations with @RafaelNH about [...] I propose that - unless someone objects - we close this for now.
Sorry -- I should have closed this one a while back. Closing now...
See conversation: https://mail.python.org/pipermail/neuroimaging/2016-March/000853.html