LiFE #419
Conversation
Let me know when this will be ready to look at. Thx!
Almost ready, I think. I also need to make a compelling example. For this, do we have the data from which track300.trk was made? Or should I import one of the tracking examples and go from there?
Nope, you will need to upload new data and bundles/streamlines for your tutorial.
Maybe create some streamlines from the Stanford data and then select a bundle that you like? Can you put both the streamlines and any created bundles online (with the fetchers etc.)? That could be useful for other projects too.
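For reference, generating such streamlines could look roughly like the sketch below, following the pattern of the tracking_eudx_odf example with dipy's ~0.8-era API. The parameter values are illustrative assumptions, not the tutorial's actual choices.

```python
# Hedged sketch: streamlines from the Stanford dataset, in the style of
# the tracking_eudx_odf example (dipy ~0.8-era API; parameter values are
# illustrative).
from dipy.data import read_stanford_hardi, get_sphere
from dipy.reconst.shm import CsaOdfModel
from dipy.reconst.peaks import peaks_from_model
from dipy.tracking.eudx import EuDX

img, gtab = read_stanford_hardi()  # downloads the data on first call
data = img.get_data()

sphere = get_sphere('symmetric724')
csa_model = CsaOdfModel(gtab, sh_order=6)
csa_peaks = peaks_from_model(csa_model, data, sphere,
                             relative_peak_threshold=.8,
                             min_separation_angle=45)

# Deterministic tracking on the ODF peaks, seeded throughout the volume:
eu = EuDX(csa_peaks.gfa, csa_peaks.peak_indices[..., 0],
          seeds=10000, odf_vertices=sphere.vertices, a_low=0.1)
streamlines = list(eu)
```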
Got it. Working on it. I am thinking of importing from one of the other tracking examples.
It all depends on how fast you want your tutorial to run. If you want it to run quickly, it will be easier if you can fetch the streamlines that you need.
tracking_eudx_odf runs real fast, especially if you've already run the DTI …
ok
Alright - open season! This PR is now open for comments!
Force-pushed from 43d5524 to 3ab746a
Force-pushed from 92cf409 to 10dcf5f
Hi @arokem, when I run the life tutorial I am getting this error:

```
ValueError                                Traceback (most recent call last)
/home/eleftherios/Devel/dipy/doc/examples/life.py in <module>()
/usr/local/lib/python2.7/dist-packages/scipy/sparse/compressed.pyc in sum(self, axis)
/usr/local/lib/python2.7/dist-packages/scipy/sparse/base.pyc in sum(self, axis)
ValueError: axis out of bounds
```
Do all the tests run fine for you?
Yes, they do actually.
NumPy 1.8.2 and SciPy 0.13.3
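One plausible explanation, which is an assumption since the example code isn't shown here: SciPy releases of that era only accepted axis values in {0, 1, None} for sparse-matrix sums, so a negative axis raises exactly this error.

```python
# Minimal reproduction of "axis out of bounds" on old SciPy (e.g. 0.13.x):
# sparse .sum() did not accept negative axis values before ~0.15.
import scipy.sparse as sps

m = sps.csr_matrix([[1., 2.], [3., 4.]])
print(m.sum(axis=1))    # works on all versions
print(m.sum(axis=-1))   # ValueError: axis out of bounds on old SciPy
```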
OK - this part of the example is not that important. I've actually been …
```python
import scipy.linalg as la

from dipy.reconst.base import ReconstModel, ReconstFit
from dipy.core.onetime import ResetMixin, auto_attr
```
Both imported but unused.
Thanks for noticing. Fixed!
Hi Ariel, can something like this help: http://stackoverflow.com/questions/11784329/python-memory-usage-of-numpy-arrays
Thanks - that's quite useful. I think that the tricky bit is profiling the … This should also be helpful: https://pypi.python.org/pypi/memory_profiler
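To make the two linked approaches concrete, here is a small sketch; the array sizes and the profiled function are made-up placeholders.

```python
# Two complementary ways to gauge memory use, per the links above.
# `nbytes` covers a single array's data buffer; memory_profiler tracks the
# whole process, including temporaries. Sizes here are invented.
import numpy as np
from memory_profiler import memory_usage  # pip install memory_profiler

a = np.zeros((1000, 1000))
print(a.nbytes / 2. ** 20)  # buffer size in MiB (~7.6 for 1e6 float64)

def fit_something():
    # stand-in for a model fit that allocates large temporaries
    x = np.random.rand(2000, 2000)
    return np.dot(x, x)

peak = max(memory_usage((fit_something, (), {})))
print(peak)  # peak resident memory (MiB) observed while it ran
```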
OK - I lied, here's what I did on a bus ride through the East Quebec wilderness: http://nbviewer.ipython.org/gist/arokem/1f3529f967f334af74b7

The plot at the bottom is a worst-case scenario: a linear extrapolation from the data, based on the approximately 10k streamlines that I could easily run this for. Let me know whether you think this is an appropriate analysis.
Oh, the dangers of extrapolating from limited data! Here's an updated analysis: http://nbviewer.ipython.org/gist/arokem/f91d436af3f0d3084af4 Looks like the memory usage is bounded by a fixed factor of the data size. In this case, no more than 4.5 GB, but it might grow larger if the ROI within which the streamlines are defined grows larger.
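The worst-case extrapolation described above is just a straight-line fit; schematically, with entirely made-up measurements rather than the notebook's numbers:

```python
# Schematic version of the linear extrapolation: measure peak memory for
# streamline counts that fit in RAM, fit a line, project to the full size.
# All numbers below are hypothetical placeholders.
import numpy as np

n_streamlines = np.array([1000., 2500., 5000., 10000.])  # runs that fit in RAM
peak_mib = np.array([400., 700., 1300., 2500.])          # hypothetical peaks

slope, intercept = np.polyfit(n_streamlines, peak_mib, 1)
full_n = 200000.  # hypothetical size of the full tractogram
print(slope * full_n + intercept)  # worst-case (linear) memory estimate
```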
Okay, great. Now we have a better idea of what we can improve in the future. So, for example, maybe those maps that take up space could be saved as memmaps, if that doesn't reduce performance, etc. And...
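A sketch of that memmap idea; the file name and array shape are hypothetical, not anything dipy actually writes.

```python
# Sketch of the memmap suggestion: keep a large intermediate array on disk,
# paged in on demand instead of held in RAM. File name and shape are
# hypothetical placeholders.
import numpy as np

shape = (100000, 160)  # e.g. one row per (voxel, direction) pair
m = np.memmap('life_matrix.dat', dtype=np.float64, mode='w+', shape=shape)
m[:] = 0.0   # writes go through the OS page cache
m.flush()    # make sure everything is on disk
```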
Congrats!
Sweet!
This is just to let you know that I am working on an implementation of the Linear Fascicle Evaluation algorithm, described in our recent paper: http://www.nature.com/nmeth/journal/vaop/ncurrent/full/nmeth.3098.html
There might be an opportunity to speed things up using Cython. In particular, I am thinking about ways to speed up both sl_signal and voxel2fiber, and any suggestions on that are most welcome.
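At its core, the model fit described in the paper boils down to non-negative least squares over the streamline weights. A toy sketch of that idea follows, with random placeholder data; note that this uses scipy's nnls for illustration, not dipy's own optimization machinery.

```python
# Toy sketch of the LiFE idea: the diffusion signal y is modeled as a
# non-negative combination of per-streamline signal contributions (the
# columns of M), i.e. argmin_{w >= 0} ||y - M w||^2. Random placeholder
# data; the real design matrix comes from the streamline geometry.
import numpy as np
from scipy.optimize import nnls

n_measurements, n_streamlines = 1000, 50
M = np.abs(np.random.randn(n_measurements, n_streamlines))
w_true = np.random.rand(n_streamlines)
y = np.dot(M, w_true)

w_hat, residual_norm = nnls(M, y)  # non-negative streamline weights
```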