Excessive memory usage despite mem_mb #1259

Closed
fphsFischmeister opened this issue Aug 23, 2018 · 6 comments

fphsFischmeister commented Aug 23, 2018

Dear all,

we are currently switching to fmriprep and want to run all analyses in Singularity on our local HPC. Everything works fine with smaller datasets - we had a test set of 200 volumes.
We now want to analyse a 60-subject dataset consisting of a T1w image, a fieldmap, and 4 functional runs with 850 volumes each (high-resolution EPI with multiband factor 4). Since we expected some memory issues, we set --mem_mb 50000 and --nthreads 8 following some recommendations here. Otherwise we run standard preprocessing without slice-timing correction, with ICA-AROMA, and without FreeSurfer recon-all.
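
For reference, our call looks roughly like the following - this is only a sketch, with placeholder paths, image name, and working directory rather than the exact submitted job:

    # Hypothetical example of our fmriprep call inside Singularity;
    # paths, image name, and working directory are placeholders.
    singularity run --cleanenv /path/to/fmriprep.simg \
        /data/bids /data/derivatives participant \
        --participant-label 71 \
        --mem_mb 50000 --nthreads 8 \
        --use-aroma --fs-no-reconall \
        -w /scratch/work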

However, we experience excessive memory usage and our jobs are repeatedly killed by the scheduler. For now we increase the vmem assigned to each job to get things running. Doing so, we noticed that

  • on our big-memory nodes with 512 GB available, setting the job's vmem limit to 240 GB results in a maximum memory usage of about 210 GB;
  • on other nodes with only 170 GB available, setting vmem to 170 GB results in about 120 GB maximum.

All of this is currently possible on certain nodes, but it clearly limits the number of parallel jobs and is not an option for the future.

Thus, is there any way to restrict the memory resources fmriprep uses? It apparently sees all available memory and ignores the --mem_mb setting. I further noticed from the log that this always seems to happen when transforming BOLD images:

 [MultiProc] Running 1 tasks, and 12 jobs ready. Free memory (GB): 0.00/48.83, Free processors: 1/8.
                     Currently running:
                      * fmriprep_wf.single_subject_71_wf.func_preproc_ses_2_task_Task2Part2_run_02_wf.bold_reg_wf.bold_to_t1w_transform
--
 [MultiProc] Running 1 tasks, and 7 jobs ready. Free memory (GB): 0.00/48.83, Free processors: 1/8.
                     Currently running:
                      * fmriprep_wf.single_subject_71_wf.func_preproc_ses_2_task_Task2Part2_run_02_wf.ica_aroma_wf.bold_mni_trans_wf.bold_to_mni_transform
--
[MultiProc] Running 1 tasks, and 3 jobs ready. Free memory (GB): 0.00/48.83, Free processors: 1/8.
                     Currently running:
                     * fmriprep_wf.single_subject_71_wf.func_preproc_ses_2_task_Task2Part2_run_02_wf.bold_mni_trans_wf.bold_to_mni_transform

I would appreciate your feedback and any help in improving the memory usage,

thanks,

Florian

mgxd added the memory label Sep 10, 2018
mgxd (Collaborator) commented Sep 10, 2018

@fphsFischmeister we encountered this same issue within our lab - @effigies suggested that Nipype's new MultiProc (using concurrent.futures) may have something to do with this. To test, we used the --use-plugin argument to specify Nipype's old MultiProc and passed in the following yaml file:

plugin: LegacyMultiProc
plugin_args: {maxtasksperchild: 1, memory_gb: 50, n_procs: 4, raise_insufficient: false}
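
The plugin file is then passed to fmriprep via --use-plugin. For example (the file name plugin.yml and the paths are placeholders; when running through Singularity the file has to sit on a path that is bound into the container):

    # Hypothetical invocation - only the --use-plugin flag comes from the comment above;
    # the file name and paths are made up for illustration.
    fmriprep /data/bids /data/derivatives participant \
        --use-plugin /data/plugin.yml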

Upon rerunning, this seemed to fix the memory issue.

fphsFischmeister (Author) commented

Dear @mgxd, thanks for the info. I just tried your suggestion, but unfortunately I only receive a

ModuleNotFoundError: No module named 'yaml'

error. I am not sure what I did wrong. We are using an fmriprep image built with Singularity 2.5.2 on August 30th.

Thanks again for any suggestions.

mgxd (Collaborator) commented Sep 10, 2018

pyyaml was missing from the dependencies until #1272 - a new image should appear fairly soon now that 1.1.6 is released.

FWIW, we worked around this by binding a local install of the pyyaml library into the miniconda environment within the container:

/path/to/local/miniconda/lib/python3.6/site-packages/yaml:/usr/local/miniconda/lib/python3.6/site-packages/yaml
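
For example, via Singularity's -B/--bind option (the bind spec is the one above; the image name and remaining arguments are placeholders):

    # Hypothetical container call showing the bind mount;
    # everything except the bind spec itself is a placeholder.
    singularity run --cleanenv \
        -B /path/to/local/miniconda/lib/python3.6/site-packages/yaml:/usr/local/miniconda/lib/python3.6/site-packages/yaml \
        /path/to/fmriprep.simg /data/bids /data/derivatives participant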

fphsFischmeister (Author) commented

Since I am traveling soon, I will probably wait for the new image. In any case, I will keep you posted on this issue.

Thanks, Florian

effigies (Member) commented

1.1.6 is available on DockerHub and PyPI.

fphsFischmeister (Author) commented

Just started the job with the latest build and it is working like a charm! I will report back on memory consumption - many thanks!
