replaced freesurfer mri_surf2surf command with nipype's SurfaceTransform #2
Conversation
will include sval-annot soon
i reinstalled nipype on my mac and on my linux box, and source_annot_file is available on the mac but not on linux. the only difference in setup i can discern is that linux required me to do a sudo python setup.py install. TraitError: Cannot set the undefined 'source_annot_file' attribute of a 'SurfaceTransformInputSpec' object.
you'll have to find out which version of nipype you are using via nipype.__version__ or nipype.get_info(), and the location and contents of the utils.py file in freesurfer.
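The version check suggested above can be made concrete. A minimal sketch, assuming only the standard library: package_info() is an illustrative helper (not part of nipype), demonstrated on a stdlib module so it runs anywhere, but the same call works with "nipype" when it is installed.

```python
# Report a package's version and on-disk location; useful when user and
# superuser installs may diverge. package_info() is an illustrative
# helper; for the thread above you would call package_info("nipype").
import importlib

def package_info(name):
    mod = importlib.import_module(name)
    return (getattr(mod, "__version__", "unknown"),
            getattr(mod, "__file__", "built-in"))

# "json" is used here only so the sketch runs without nipype installed:
version, location = package_info("json")
print(version, location)
```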
i'm using the newest version: In [4]: nipype.__version__ In [3]: nipype.get_info() i have fs v5.1 installed, but i can't find a utils.py within
that is not the latest version. if you are running off the latest master, then the following is the latest version.
odd. should i be doing something other than: git clone git@github.com:nipy/nipype.git cheers,
a clone will give you the latest version. but perhaps your environment is set up in a way that is different between the user and the superuser? just do: sudo python -c "import nipype; print nipype.__version__" if they are different you know something is amiss. you might then want to check which python you are using! cheers, satra
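The "check which python you are using" suggestion can be sketched as a small diagnostic, assuming nothing beyond the standard library:

```python
# Print which interpreter is running and where it searches for packages.
# Running this under plain "python" and under "sudo python" and diffing
# the output shows whether the two commands see the same nipype.
import sys

interpreter = sys.executable      # path to the interpreter binary
version = sys.version.split()[0]  # e.g. "2.7.3"
search_path = list(sys.path)      # where "import nipype" is resolved

print(interpreter, version)
for entry in search_path:
    print(entry)
```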
they are both using 0.5.3 cheers,
then you don't have the latest version. check: nipype/nipype/info.py in your source
ha! i found out what my problem was. i tried running python setup.py cheers,
but it depends on where i am: arno@boggle:~/Software/nipype$ python -c "import nipype; print nipype/__init__.py:53: UserWarning: Running the tests from the install arno@boggle:~/Documents/Projects/mindboggle/mindboggle$ python -c "import should i set an explicit path in bash_profile? cheers,
that's still an incorrect installation then! or you are using two different python executables! or they have different sys.paths.
well, at least the python executables are the same (python --version -> Python 2.7.3)
Rather than run connect_points() on the anchor points, extract_endpoints() from the resulting skeleton, and run connect_points() on the endpoints, implement a very fast approach:
1. threshold likelihood values in a fold/sulcus
2. skeletonize(), retaining anchor points
3. extract endpoints of the skeleton that are also anchor points
4. run connect_points() on the endpoints and skeletonize the results

Preliminary anchor point skeletons and endpoint skeletons are the same, but retaining #2 for the following reasons (from yrjo):
1. The likelihood function can have flat areas, so that it doesn't reduce to a single point when thresholding.
2. The likelihood function can have areas of low values where the fundus is 'cut'.
3. The likelihood function is about to change with the learning approach, and the endpoint selection has to be able to perform with different kinds of likelihood functions, even if they don't produce nice skeletons from thresholding.
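The steps above can be sketched in toy form. This is a pure-Python sketch on a 2D pixel grid rather than a cortical mesh; skeletonize() and connect_points() from the actual code are not reproduced, and all helpers here are illustrative, covering only steps 1 and 3.

```python
# Toy sketch of steps 1 and 3 of the pipeline above, on a 2D pixel grid
# rather than a cortical mesh. skeletonize() and connect_points() from
# the actual code are not reproduced; all helpers are illustrative.

def threshold(likelihood, t):
    """Step 1: keep only points whose likelihood value reaches t."""
    return {p for p, v in likelihood.items() if v >= t}

def neighbors(p):
    """8-connected neighborhood of a grid point."""
    x, y = p
    return {(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def anchored_endpoints(skeleton, anchors):
    """Step 3: skeleton points with at most one skeleton neighbor
    (i.e. curve endpoints) that are also anchor points."""
    return {p for p in skeleton
            if len(neighbors(p) & skeleton) <= 1 and p in anchors}

# A 5-point horizontal "skeleton" with anchor points at both ends:
likelihood = {(x, 0): 1.0 for x in range(5)}
likelihood[(5, 0)] = 0.1                      # falls below threshold
skeleton = threshold(likelihood, 0.5)         # step 1 (already thin here)
anchors = {(0, 0), (4, 0)}
ends = anchored_endpoints(skeleton, anchors)  # -> {(0, 0), (4, 0)}
```

Interior points have two skeleton neighbors and are skipped, so only the anchored ends survive; step 4 would then run connect_points() on `ends`.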
Before merging, for this to work, nipype needs to include an --sval-annot argument for accepting a source annotation file (which should replace "source_file") and a --tval argument for accepting a target (output) annotation file.
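A sketch of the underlying mri_surf2surf invocation such an interface would need to emit. The flags (--sval-annot, --tval, --srcsubject, --trgsubject, --hemi) are real mri_surf2surf options; the helper function itself and the nipype input names mentioned in the comments are assumptions based on this discussion.

```python
# Build the mri_surf2surf command line for resampling an annotation
# file between subjects. In nipype terms, a source_annot_file input
# would map to --sval-annot and out_file to --tval (input names assumed
# from this PR; surf2surf_annot_cmd is an illustrative helper).
def surf2surf_annot_cmd(hemi, source_subject, target_subject,
                        source_annot_file, out_file):
    return ["mri_surf2surf",
            "--hemi", hemi,
            "--srcsubject", source_subject,
            "--trgsubject", target_subject,
            "--sval-annot", source_annot_file,
            "--tval", out_file]

cmd = surf2surf_annot_cmd("lh", "bert", "fsaverage",
                          "lh.aparc.annot", "lh.fsaverage.aparc.annot")
```

Running the resulting command requires FreeSurfer on the PATH and SUBJECTS_DIR set; the sketch only composes the argument list.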