Problem with running Fassurfer on Mac #49
Hey Maryam, this is probably a memory issue (see issue #40). To run FastSurfer you need around 10 GB of RAM, but Docker Desktop on Mac is set to 2 GB of runtime memory by default. You can change this under Docker Desktop --> Preferences --> Resources --> Advanced: slide the Memory bar to 10 GB. See https://docs.docker.com/docker-for-mac/ for details. Hope this helps. Kind regards,
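As a side note, a per-container memory cap can also be set on the command line with Docker's `--memory` flag (a sketch, not part of the original thread). On Docker Desktop for Mac the VM memory setting described above remains the hard ceiling, so the GUI change is still required there; `--memory` is mainly useful on Linux hosts:

```shell
# Cap this container at 10 GB of RAM. On macOS the Docker Desktop VM's
# memory setting must also be >= 10 GB, or the container will still be
# limited to the smaller VM allocation.
docker run --rm --memory=10g fastsurfer:cpu --help
```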
Thanks, Leonie, for your response. You are right: I am running another program that is taking more than 99% of my CPU. I will try running FastSurfer again after this analysis is over. Best wishes,
I ran FastSurfer and it finished without error. The only thing is that when I run recon-all I use -qcache at the end of the command. It creates fsaverage and the surface data for thickness, curv, sulc, and area at smoothing levels of 0, 5, 10, 15, 20, and 25 mm FWHM. Regards,
Sorry, I just closed this by mistake. I am still waiting for your response.
Hi Maryam, the recon_surf pipeline in FastSurfer does not have a -qcache flag. You have two options:
Thanks for your response. I ran recon-all with -qcache. It seems FastSurfer did not create any lh.sphere or rh.sphere files, which is why, when I ran it, I got this error: Reading source surface reg /Users/mtay316/Documents/Fast_surfer/my_fastsurfer_analysis/output/subject_01/surf/lh.sphere.reg. Does FastSurfer create a sphere file? I tried it on two different subjects and got the same result.
For FastSurfer to create those files you need to add the --surfreg flag.
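For illustration, the flag would simply be appended to the docker invocation quoted in this thread (paths and user ID are taken from the reporter's own command; this is a sketch, not a verified run):

```shell
docker run -v /Users/mtay316/Documents/Fast_surfer/my_mri_data/data:/data \
  -v /Users/mtay316/Documents/Fast_surfer/my_fastsurfer_analysis/output:/output \
  -v /Users/mtay316/Documents/Fast_surfer:/fs60 \
  --rm --user 504 fastsurfer:cpu \
  --fs_license /fs60/license.txt \
  --t1 /data/subject_01/orig.mgz \
  --no_cuda --sid subject_01 --sd /output \
  --parallel \
  --surfreg   # creates lh/rh.sphere and the lh/rh.sphere.reg registrations
```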
Did this solve your issue?
Hi Martin,
Yes, it worked, but it increased the processing time from 1 h to 5 h. Is that what you would expect?
--
*Maryam Tayebi*
PhD Candidate
Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
No, usually (with --parallel and multi-threading) this step adds only about 30 min on our system. Sequentially it can be slower (around 2 h). You can try creating the spherical registration with FreeSurfer instead (which is basically what happens inside FastSurfer); that way you do not need to re-run everything.
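A sketch of that FreeSurfer-only route, assuming a sourced FreeSurfer environment and `SUBJECTS_DIR` pointing at the existing FastSurfer output (`-sphere` and `-surfreg` are standard recon-all directives that run just the spherical inflation and registration steps on an already-processed subject):

```shell
# Point FreeSurfer at the FastSurfer output directory (path from this thread)
export SUBJECTS_DIR=/Users/mtay316/Documents/Fast_surfer/my_fastsurfer_analysis/output

# Run only the spherical steps for the existing subject, both hemispheres
recon-all -s subject_01 -sphere -surfreg
```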
Hi, I ran docker run -v /Users/mtay316/Documents/Fast_surfer/my_mri_data/Conc_12:/data and it took about 4 hours to finish. It started at Tue Jun 8 22:20:20 UTC 2021. Is there any other way I can make the whole process shorter?
I don't know how much of that is the segmentation and how much is the recon_surf pipeline. Can you attach the log files?
Thanks for your reply. The quality of my T1-weighted image is fine; it was acquired on a 3T GE SIGNA scanner with a 64-channel head coil. I am attaching the log files. mri_nu_correct.mni.log
Oh, that probably explains it. The pipeline is designed for 1 mm images; it also worked on HCP data downsampled to 1 mm, but your images are very different. Still, if the output looks good, it may be worth the wait. I will take a look at the logs tomorrow.
I finally looked at the log and nothing pops out immediately, yet the topology fixer and the spherical registration take 0.5 h each per hemisphere, so that is already 2 h. You can try the --parallel flag to process the surfaces in parallel instead of sequentially to speed things up. You wrote above that you use --parallel, but the logfile shows that you did not in this case.
About the parallel option: even with --parallel, I noticed that an Ubuntu GPU VM on Azure still took ~2 hrs to complete. If I understand the latest comment right, are there any other prerequisite packages needed for this? I also used --threads=[total cpu cores]. Thanks.
The first stage (image segmentation with the CNN) takes only a minute on GPU, so the majority of the time is spent in the recon-surf pipeline. That depends a lot on CPU speed, available cores, and image quality, so it is hard to tell why this still takes 2 h. There is no need to install additional packages.
Hi FastSurfer team,
I am a new user of your pipeline. After reading about it, I am very excited to run your pipeline on my dataset.
I am using macOS Catalina (10.15.7). I installed Docker and followed the instructions in your Docker folder. I built FastSurfer for CPU and ran:
```shell
docker run -v /Users/mtay316/Documents/Fast_surfer/my_mri_data/data:/data \
  -v /Users/mtay316/Documents/Fast_surfer/my_fastsurfer_analysis/output:/output \
  -v /Users/mtay316/Documents/Fast_surfer:/fs60 \
  --rm --user 504 fastsurfer:cpu \
  --fs_license /fs60/license.txt \
  --t1 /data/subject_01/orig.mgz \
  --no_cuda \
  --sid subject_01 --sd /output \
  --parallel
```
The problem is that after running this command, two folders (mri, scripts) are created in my output directory, but there is just one .log file in the scripts folder, with this message:
```
python3.6 eval.py --in_name /data/subject_01/orig.mgz --out_name /output/subject_01/mri/aparc.DKTatlas+aseg.deep.mgz --order 1 --network_sagittal_path /fastsurfer/checkpoints/Sagittal_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl --network_axial_path /fastsurfer/checkpoints/Axial_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl --network_coronal_path /fastsurfer/checkpoints/Coronal_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl --batch_size 8 --simple_run --no_cuda
Reading volume /data/subject_01/orig.mgz
Cuda available: False, # Available GPUS: 0, Cuda user disabled (--no_cuda flag): True, --> Using device: cpu
```
Can you help me to fix this problem?
Regards,
Maryam