
replaced freesurfer mri_surf2surf command with nipype's SurfaceTransform #2

Merged
merged 1 commit into master on May 27, 2012

Conversation

@binarybottle (Member)

Before merging, for this to work, nipype needs to include an --sval-annot argument for accepting a source annotation file (which should replace the "source_file") and a --tval argument for accepting a target (output) annotation file.
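(For context, a rough sketch of what the call might look like once those flags are wired up; the source_annot_file trait name matches the one discussed below, but treat the whole snippet as hypothetical until nipype ships it:)

# hypothetical usage, assuming nipype maps source_annot_file to --sval-annot
# and out_file to --tval
from nipype.interfaces.freesurfer import SurfaceTransform

sxfm = SurfaceTransform()
sxfm.inputs.source_annot_file = 'lh.aparc.annot'    # --sval-annot (replaces source_file)
sxfm.inputs.source_subject = 'subject1'
sxfm.inputs.target_subject = 'fsaverage'
sxfm.inputs.hemi = 'lh'
sxfm.inputs.out_file = 'lh.fsaverage.aparc.annot'   # --tval
sxfm.run()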

@satra (Member) commented May 27, 2012

will include sval-annot soon

binarybottle merged commit 21ece6d into master on May 27, 2012
@binarybottle (Member Author)

i reinstalled nipype on my mac and on my linux box, and source_annot_file is available on the mac but not on linux. the only difference in setup i can discern is that the linux box required me to do a sudo python setup.py install.

TraitError: Cannot set the undefined 'source_annot_file' attribute of a 'SurfaceTransformInputSpec' object.
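(A quick way to check whether an installed nipype exposes the trait, using the generic traits API; this assumes only that SurfaceTransform imports, nothing version-specific:)

# check whether this nipype build knows about source_annot_file
from nipype.interfaces.freesurfer import SurfaceTransform
print('source_annot_file' in SurfaceTransform.input_spec().traits())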

@satra (Member) commented May 29, 2012

you'll have to find out which version of nipype you are using via nipype.__version__ or nipype.get_info(), and the location and contents of the utils.py file in freesurfer.

@binarybottle (Member Author)

i'm using the newest version:

In [4]: nipype.__version__
Out[4]: '0.5.3'

In [3]: nipype.get_info()
Out[3]:
{'commit_hash': '8b8ba0e',
'commit_source': 'archive substitution',
'networkx_version': '1.6',
'nibabel_version': '1.1.0',
'numpy_version': '1.6.1',
'pkg_path': '/usr/lib/pymodules/python2.7/nipype',
'scipy_version': '0.9.0',
'sys_executable': '/usr/bin/python',
'sys_platform': 'linux2',
'sys_version': '2.7.3 (default, Apr 20 2012, 22:39:59) \n[GCC 4.6.3]',
'traits_version': '4.0.0'}

i have fs v5.1 installed, but i can't find a utils.py within /usr/local/freesurfer/
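(satra presumably means nipype's FreeSurfer interface module, nipype/interfaces/freesurfer/utils.py, rather than anything inside the FreeSurfer install; one way to locate the copy actually being imported:)

python -c "import nipype.interfaces.freesurfer.utils as u; print u.__file__"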


@satra (Member) commented May 29, 2012

that is not the latest version. if you are running off the latest master, then the following is the latest version.


In [1]: import nipype
Installed version: 0.6.0.g83c33ea-dev
Current stable version: 0.5.3
Current dev version: 0.6.0.g83c33ea-dev

In [2]: nipype.get_info()
Out[2]: 
{'commit_hash': '83c33ea',
 'commit_source': 'repository',
 'networkx_version': '1.6',
 'nibabel_version': '1.3.0dev',
 'numpy_version': '1.6.1',
 'pkg_path': '/software/nipy-repo/nipype/nipype',
 'scipy_version': '0.10.0',
 'sys_executable': '/software/virtualenvs.EPD/7.2/devpype/bin/python',
 'sys_platform': 'darwin',
 'sys_version': '2.7.2 |CUSTOM| (default, Sep  7 2011, 16:31:15) \n[GCC 4.0.1 (Apple Inc. build 5493)]',
 'traits_version': '4.1.0'}

In [3]: nipype.__version__
Out[3]: '0.6.0.g83c33ea-dev'

@binarybottle (Member Author)

odd. should i be doing something other than this?:

git clone git@github.com:nipy/nipype.git

cheers,
@rno


@satra (Member) commented May 29, 2012

a clone will give you the latest version. but perhaps your environment is set up in a way that is different between the user and the superuser?

just do:

sudo python -c "import nipype; print nipype.__version__"
and
python -c "import nipype; print nipype.__version__"

if they are different you know something is amiss. you might then want to check which python you are using!
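(the same style of one-liner can also report which interpreter and which nipype each invocation resolves to; sys.executable and __file__ are stdlib attributes, so nothing here depends on the nipype version:)

python -c "import sys, nipype; print sys.executable; print nipype.__file__; print nipype.__version__"
sudo python -c "import sys, nipype; print sys.executable; print nipype.__file__; print nipype.__version__"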

cheers,

satra

@binarybottle (Member Author)

they are both using 0.5.3

cheers,
@rno


@satra (Member) commented May 29, 2012

then you don't have the latest version. check: nipype/nipype/info.py in your source

@binarybottle (Member Author)

ha! i found out what my problem was. i tried running python setup.py install first w/o, then w/ "sudo" -- now i've got 0.6.0...! thank you, satra!

cheers,
@rno

@binarybottle (Member Author)

but it depends on where i am:

arno@boggle:~/Software/nipype$ python -c "import nipype; print nipype.__version__"
nipype/__init__.py:53: UserWarning: Running the tests from the install directory may trigger some failures
  warnings.warn('Running the tests from the install directory may '
0.6.0.g83c33ea-dev

arno@boggle:~/Documents/Projects/mindboggle/mindboggle$ python -c "import nipype; print nipype.__version__"
0.5.3

should i set an explicit path in bash_profile?

cheers,
@rno

@satra (Member) commented May 29, 2012

that's still an incorrect installation then! or you are using two different python executables! or they have different sys.paths.
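(one way to distinguish those cases: run the line below from each of the two directories. in python 2, python -c puts the current working directory on sys.path, so running it from inside the ~/Software/nipype checkout will import the source tree and shadow whatever is installed in site-packages:)

python -c "import nipype; print nipype.__file__; print nipype.__version__"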

@binarybottle (Member Author)

well, at least the python executables are the same (python --version -> Python 2.7.3)

binarybottle added a commit that referenced this pull request Oct 11, 2012
Rather than run connect_points() on the anchor points, extract_endpoints() from the resulting skeleton, and run connect_points() on the endpoints, implement a very fast approach:

1. threshold likelihood values in a fold/sulcus
2. skeletonize(), retaining anchor points
3. extract endpoints of the skeleton that are also anchor points
4. run connect_points() on the endpoints and skeletonize the results

Preliminary anchor point skeletons and endpoint skeletons are the same, but step 2 is retained for the following reasons (from yrjo):

1. The likelihood function can have flat areas, so that it doesn't reduce to a single point when thresholding.
2. The likelihood function can have areas of low values where the fundus is 'cut'.
3. The likelihood function is about to change with the learning approach, and the endpoint selection has to be able to perform with different kinds of likelihood functions, even if they don't produce nice skeletons from thresholding.
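(For readers skimming the thread, a toy 2-D analogue of steps 1-3, with scikit-image's skeletonize standing in for mindboggle's mesh-based routine; connect_points() and the mesh-specific machinery have no direct equivalent here, so step 4 is only noted in a comment:)

import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

likelihood = np.random.rand(64, 64)        # stand-in likelihood map
binary = likelihood > 0.5                  # 1. threshold likelihood values
skel = skeletonize(binary)                 # 2. skeletonize
# 3. endpoints: skeleton pixels with exactly one 8-connected neighbor
counts = convolve(skel.astype(int), np.ones((3, 3), dtype=int), mode='constant')
endpoints = skel & (counts == 2)           # self + exactly one neighbor
# 4. mindboggle then runs connect_points() on the (anchor) endpoints and
#    re-skeletonizes; that step is mesh-specific and omitted here.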
binarybottle pushed a commit that referenced this pull request Mar 30, 2017
mgxd pushed a commit to mgxd/mindboggle that referenced this pull request Oct 29, 2018