
Problem with running FastSurfer on Mac #49

Closed
Mtay316 opened this issue May 14, 2021 · 18 comments
Labels
question Further information is requested

Comments

@Mtay316

Mtay316 commented May 14, 2021

Hi FastSurfer team,
I am a new user of your pipeline. After reading about it, I am very excited to run your pipeline on my dataset.

I am using macOS Catalina (10.15.7). I installed Docker and followed the instructions in your Docker folder. I built FastSurfer for CPU and ran:

docker run -v /Users/mtay316/Documents/Fast_surfer/my_mri_data/data:/data \
           -v /Users/mtay316/Documents/Fast_surfer/my_fastsurfer_analysis/output:/output \
           -v /Users/mtay316/Documents/Fast_surfer:/fs60 \
           --rm --user 504 fastsurfer:cpu \
           --fs_license /fs60/license.txt \
           --t1 /data/subject_01/orig.mgz \
           --no_cuda \
           --sid subject_01 --sd /output \
           --parallel

The problem is that after running this command, two folders (mri, scripts) are created in my output directory, but there is just one .log file in the scripts folder, with this message:

python3.6 eval.py --in_name /data/subject_01/orig.mgz --out_name /output/subject_01/mri/aparc.DKTatlas+aseg.deep.mgz --order 1 --network_sagittal_path /fastsurfer/checkpoints/Sagittal_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl --network_axial_path /fastsurfer/checkpoints/Axial_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl --network_coronal_path /fastsurfer/checkpoints/Coronal_Weights_FastSurferCNN/ckpts/Epoch_30_training_state.pkl --batch_size 8 --simple_run --no_cuda
Reading volume /data/subject_01/orig.mgz
Cuda available: False, # Available GPUS: 0, Cuda user disabled (--no_cuda flag): True, --> Using device: cpu

Can you help me to fix this problem?

Regards,
Maryam

@LeHenschel
Member

Hey Maryam,

this is probably an issue with memory (see issue #40). FastSurfer needs around 10 GB of RAM to run, but Docker Desktop on Mac is set to use 2 GB of runtime memory by default. You can change this under Docker Desktop --> Preferences --> Resources --> Advanced: slide the Memory bar to 10 GB; see https://docs.docker.com/docker-for-mac/ for details.

Hope this helps.

Kind regards,
Leonie
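A quick way to check the current limit from a terminal is a sketch like the one below. It assumes `docker info` can reach the Docker Desktop VM and reports its memory in bytes; the 10 GB threshold comes from the comment above, and the 2 GB fallback is Docker Desktop's documented default.

```shell
# Sketch: check how much memory the Docker VM may use (value is in bytes).
# Falls back to Docker Desktop's 2 GB default if docker is unavailable.
mem_bytes=$(docker info --format '{{.MemTotal}}' 2>/dev/null)
[ -n "$mem_bytes" ] || mem_bytes=2147483648
mem_gb=$((mem_bytes / 1024 / 1024 / 1024))
echo "Docker VM memory: ${mem_gb} GB"
if [ "$mem_gb" -lt 10 ]; then
  echo "Too low for FastSurfer: raise it under Preferences > Resources > Advanced"
fi
```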

@Mtay316
Author

Mtay316 commented May 14, 2021

Thanks, Leonie, for your response. You are right: I am running another program that is taking more than 99% of my CPU. I will try running FastSurfer again after that analysis is over.

Best Wishes,
Maryam

@Mtay316
Author

Mtay316 commented May 15, 2021

I ran FastSurfer and it finished without error. The only thing is that when I run "recon-all" I add -qcache at the end of the command; it makes fsaverage and the surface data for thickness, curv, sulc, and area at smoothing levels of 0, 5, 10, 15, 20, and 25 mm FWHM.
I need these outputs for my further analysis. Is there any way to add -qcache to the FastSurfer pipeline?

Regards,
Maryam

@Mtay316 Mtay316 closed this as completed May 16, 2021

@Mtay316 Mtay316 reopened this May 16, 2021
@Mtay316
Author

Mtay316 commented May 16, 2021

Sorry, I just closed it by mistake. I am still waiting for your response.

@m-reuter
Member

Hi Maryam, the recon_surf pipeline in FastSurfer does not have a -qcache flag. You have two options:

  1. run the corresponding mris_preproc command for the smoothing level that you actually need (and the files that you actually analyze); see "uncached data" at https://surfer.nmr.mgh.harvard.edu/fswiki/FsTutorial/GroupAnalysis , or
  2. try running recon-all ... -qcache (without the -all flag; only -sid, maybe -sd, and -qcache) to generate the cache with recon-all after FastSurfer has finished. We haven't tested this, but don't see why it should not work.
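For illustration, the two options might look like the untested sketch below. The subject id, hemisphere, measure, FWHM level, and output filename are placeholders, and both commands assume FreeSurfer is installed with SUBJECTS_DIR pointing at the FastSurfer output directory; the commands are only printed here (dry run), not executed.

```shell
# Untested sketch of the two options above; all values are placeholders.
export SUBJECTS_DIR=/output
subject=subject_01
# Option 1: smooth only the measure/FWHM level you actually analyze.
opt1="mris_preproc --s $subject --target fsaverage --hemi lh --meas thickness --fwhm 10 --out lh.thickness.fwhm10.mgh"
# Option 2: build the full qcache with recon-all after FastSurfer finishes.
opt2="recon-all -s $subject -qcache"
printf '%s\n%s\n' "$opt1" "$opt2"   # dry run: prints the commands instead of running them
```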

@Mtay316
Author

Mtay316 commented May 18, 2021

Thanks for your response. I ran recon-all with -qcache. It seems FastSurfer did not make any "lh.sphere" or "rh.sphere" files, which is why, when I ran:
recon-all -s subject_01 -qcache

I got this error :

Reading source surface reg /Users/mtay316/Documents/Fast_surfer/my_fastsurfer_analysis/output/subject_01/surf/lh.sphere.reg
error: No such file or directory
error: MRISread(/Users/mtay316/Documents/Fast_surfer/my_fastsurfer_analysis/output/subject_01/surf/lh.sphere.reg): could not open file
error: No such file or directory
error: mri_surf2surf: could not read surface /Users/mtay316/Documents/Fast_surfer/my_fastsurfer_analysis/output/subject_01/surf/lh.sphere.reg

Does FastSurfer make a "sphere" file? I tried it on two different subjects and got the same result.

@m-reuter
Member

For FastSurfer to create those files, you need to add the --surfreg flag.

@m-reuter
Member

Did this solve your issue?

@m-reuter m-reuter added the question Further information is requested label May 27, 2021
@Mtay316
Author

Mtay316 commented May 27, 2021 via email

@m-reuter
Member

No; usually (with --parallel and multithreading) this step only adds 30 min on our system. Sequentially it can be slower (around 2 h). You can instead create the spherical registration with FreeSurfer (which is basically what happens in FastSurfer), so you do not need to re-run everything:
recon-all -s $subject -hemi $hemi -sphere -surfreg
However, I would expect that to take similarly long on your system. The question is why it is so slow (old hardware, problematic images, ...?).

@Mtay316
Author

Mtay316 commented Jun 9, 2021

Hi,
I ran it through Docker because I am using macOS Catalina (64 GB RAM). My command is:

docker run -v /Users/mtay316/Documents/Fast_surfer/my_mri_data/Conc_12:/data \
           -v /Users/mtay316/Documents/Fast_surfer/my_fastsurfer_analysis/output_Conc_12:/output \
           -v /Users/mtay316/Documents/Fast_surfer:/fs60 \
           --rm --user 504 fastsurfer:cpu \
           --fs_license /fs60/license.txt \
           --t1 /data/orig.mgz \
           --no_cuda \
           --sid conc_12 --sd /output \
           --surfreg \
           --parallel \
           --threads 4

It took about 4.5 hours to finish:

Started at Tue Jun 8 22:20:20 UTC 2021
Ended at Wed Jun 9 02:46:54 UTC 2021
#@#%# recon-surf-run-time-hours 4.443
id: cannot find name for user ID 504
recon-surf.sh conc_12 finished without error at Wed Jun 9 02:46:54 UTC 2021

Is there any other way that I can make the whole process shorter?

@m-reuter
Member

m-reuter commented Jun 9, 2021

I don't know how much of that is the segmentation and how much is the recon_surf pipeline. Can you attach the log files?
There could be two things: first, segmentation on the CPU is really slow with our large network; it is really designed for the GPU, so switching to a GPU could speed things up (maybe by up to one hour). The rest, however, is still too slow. This could be due to image quality or a different acquisition protocol, leading to many topological errors in the surface reconstruction, which take most of the time to correct. But we will know more when we see the log files. They should be in the subject directory under scripts.
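To see where the hours go, one option is to grep the timing markers out of recon-surf.log. This is a sketch based on the `#@#%#` marker format visible in the output posted earlier in this thread; the stand-in log line exists only so the example runs without real data.

```shell
# Sketch: list the timing markers FastSurfer writes to recon-surf.log.
# The '#@#%#' marker format is taken from the log excerpt in this thread.
log=recon-surf.log
# Stand-in line for illustration when the real log is absent:
[ -f "$log" ] || printf '#@#%%# recon-surf-run-time-hours 4.443\n' > "$log"
grep '#@#%#' "$log"
```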

@Mtay316
Author

Mtay316 commented Jun 9, 2021

Thanks for your reply. The quality of my T1-weighted image is fine. It was acquired on a 3T GE SIGNA scanner with a 64-channel head coil.
Dimensions: 512 x 512 x 300
Voxel size: 0.4297 x 0.4297 x 0.5

I am attaching the log files.

mri_nu_correct.mni.log
recon-surf.log
recon-all-status.log
recon-all.log
pctsurfcon.log
ponscc.cut.log
deep-seg.log

@m-reuter
Member

m-reuter commented Jun 9, 2021

Oh, that probably explains it. The pipeline is designed for 1 mm images; it also worked on HCP data downsampled to 1 mm, but your images are very different. Still, if the output looks good, it may be worth the wait. I will take a look at the logs tomorrow.

@m-reuter
Member

I finally looked at the log, and nothing pops out immediately, yet the topology fixer and the spherical registration take 0.5 h each per hemisphere, so that is already 2 h. You can try the --parallel flag to run the hemispheres in parallel instead of sequentially to speed things up. You wrote above that you use --parallel, but the log file shows that you did not in this case.
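A quick way to confirm which mode actually ran is to search the log for the flag. This is a sketch that assumes, as the comment above implies, that recon-surf.log records whether --parallel was in effect; the stand-in log line exists only so the example runs without real data.

```shell
# Sketch: check recon-surf.log for the --parallel flag.
log=recon-surf.log
# Stand-in line for illustration when the real log is absent:
[ -f "$log" ] || echo 'recon-surf.sh conc_12 (no parallel flag recorded)' > "$log"
if grep -q -- '--parallel' "$log"; then
  mode=parallel
else
  mode=sequential
fi
echo "surfaces ran: $mode"
```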

@zswgzx

zswgzx commented Jun 30, 2021

About the parallel option: even with --parallel, I noticed that it still took an Ubuntu GPU VM on Azure ~2 hrs to complete. If I understand the latest comment right, are there any other prerequisite packages needed for this? I also used --threads=[total cpu cores]. Thanks.

@m-reuter
Member

The first stage (image segmentation with the CNN) takes only a minute on a GPU, so the majority of the time is spent in the recon-surf pipeline. That depends a lot on CPU speed, available cores, and image quality, so it is hard to tell why this still takes 2 h. There is no need to install additional packages.
