Excessive memory usage despite mem_mb #1259
Comments
@fphsFischmeister we encountered this same issue within our lab. @effigies suggested that Nipype's new MultiProc might be responsible, and to switch to the LegacyMultiProc plugin with the following settings:

    plugin: LegacyMultiProc
    plugin_args: {maxtasksperchild: 1, memory_gb: 50, n_procs: 4, raise_insufficient: false}

Upon rerunning, this seemed to fix the memory issue.
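For reference, a minimal sketch of how such plugin settings are usually wired up: saved to a YAML file and passed to fmriprep via its `--use-plugin` option. The file name `plugin.yml` and the fmriprep paths are placeholders, not from the thread.

```shell
# Write the plugin settings quoted above to a YAML file (name is arbitrary).
cat > plugin.yml <<'EOF'
plugin: LegacyMultiProc
plugin_args: {maxtasksperchild: 1, memory_gb: 50, n_procs: 4, raise_insufficient: false}
EOF

# Then point fmriprep at it (placeholder paths):
#   fmriprep /data /out participant --use-plugin plugin.yml
```

With `maxtasksperchild: 1`, each worker process is recycled after a single task, which releases memory that a long-lived worker would otherwise hold on to.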
Dear @mgxd, thanks for the info. I just tried your suggestion, but unfortunately I only receive an
error. Not sure what I did wrong. We are using fmriprep built within Singularity 2.5.2 on the 30th of August. Thanks again for any suggestions.
FWIW, we worked around this by binding a local install of the
Since I am traveling soon, I will probably wait for the new image. Anyway, I will keep you posted on this issue. Thanks, Florian
1.1.6 is available on DockerHub and PyPI.
Just started the job with the latest build and it is working like a charm! I will report back on memory consumption. Many thanks!
Dear all,
we are currently switching to fmriprep and want to run all analyses in Singularity on our local HPC. Everything works fine with smaller datasets; we had a test set of 200 volumes.
We now want to analyse a 60-subject dataset consisting of a T1w image, a fieldmap, and 4 functional runs with 850 volumes each (high-resolution EPI with MB=4). Since we expected some memory issues, we set --mem_mb 50000 and --nthreads 8 according to some recommendations here. Otherwise we do standard preprocessing without slice-timing correction, with ICA-AROMA, and without FreeSurfer recon-all.
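The exact invocation is not shown in the thread; a sketch of the kind of command described above, with placeholder image name, paths, and bind mounts (the resource flags and preprocessing options are fmriprep's standard command-line options):

```shell
# Hypothetical invocation matching the description: 50 GB memory hint,
# 8 threads, no slice-timing correction, ICA-AROMA on, recon-all off.
singularity run --cleanenv \
    -B /data:/data -B /out:/out \
    fmriprep.simg \
    /data /out participant \
    --mem_mb 50000 --nthreads 8 \
    --ignore slicetiming \
    --use-aroma \
    --fs-no-reconall
```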
However, we experience excessive memory usage, and our jobs were repeatedly killed by the scheduler. We have now increased the vmem assigned to the job to get things running. Doing so, we noticed that
All this is currently possible on certain nodes, but it severely limits the number of parallel jobs and is not an option for the future.
Thus, is there any possibility to restrict the memory resources fmriprep uses? It apparently sees all available memory and ignores the mem_mb setting. I further noticed from the log that this always seems to happen when transforming BOLD images:
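One point worth noting: `--mem_mb` is only a hint to Nipype's internal scheduler for deciding how many nodes to run concurrently; it does not enforce a hard cap on the process. A hard limit has to come from outside fmriprep, e.g. from the cluster scheduler or an address-space ulimit set before launching the container. A sketch, with an assumed 60 GB figure that is not from the thread:

```shell
# ulimit -v takes KiB, so this caps the shell (and children, including the
# container runtime) at 60 GiB of virtual address space. Allocations beyond
# the cap fail inside the job instead of triggering the node's OOM killer.
ulimit -v $((60 * 1024 * 1024))
# singularity run fmriprep.simg ...  # launched from this shell, now capped
```

Note that a virtual-memory cap is coarse; schedulers that use cgroups (e.g. recent Slurm) give a cleaner resident-memory limit.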
I would appreciate your feedback and any help in reducing the memory usage,
thanks,
Florian