
1.4.1 templateflow proxy error #1699

Closed
oricon opened this issue Jul 11, 2019 · 40 comments

Comments

@oricon

oricon commented Jul 11, 2019

I'm getting a proxy error with a 1.4.1 Singularity build. It looks like the OASIS ANTs template is downloaded when fMRIPrep starts, and the download is being blocked. Is there a workaround so the templates don't need to be downloaded? Or would I need to have the sysadmin unblock https://templateflow.s3.amazonaws.com at the firewall? I'm not sure if it's a firewall issue or just a policy of not allowing tunnels from compute nodes. I also get Sentry errors with reports being blocked, but the --notrack option eliminates those. The error log is pasted below.

Thanks

Downloading https://templateflow.s3.amazonaws.com/tpl-OASIS30ANTs/tpl-OASIS30ANTs_res-01_T1w.nii.gz
Process Process-2:
Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.7/site-packages/urllib3/connectionpool.py", line 594, in urlopen
self._prepare_proxy(conn)
File "/usr/local/miniconda/lib/python3.7/site-packages/urllib3/connectionpool.py", line 815, in _prepare_proxy
conn.connect()
File "/usr/local/miniconda/lib/python3.7/site-packages/urllib3/connection.py", line 324, in connect
self._tunnel()
File "/usr/local/miniconda/lib/python3.7/http/client.py", line 911, in _tunnel
message.strip()))
OSError: Tunnel connection failed: 403 Forbidden

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.7/site-packages/requests/adapters.py", line 445, in send
timeout=timeout
File "/usr/local/miniconda/lib/python3.7/site-packages/urllib3/connectionpool.py", line 638, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/local/miniconda/lib/python3.7/site-packages/urllib3/util/retry.py", line 398, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='templateflow.s3.amazonaws.com', port=443): Max retries exceeded with url: /tpl-OASIS30ANTs/tpl-OASIS30ANTs_res-01_T1w.nii.gz (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 403 Forbidden')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/usr/local/miniconda/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/cli/run.py", line 610, in build_workflow
work_dir=str(work_dir),
File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/workflows/base.py", line 259, in init_fmriprep_wf
use_syn=use_syn,
File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/workflows/base.py", line 551, in init_single_subject_wf
skull_strip_template=skull_strip_template,
File "/usr/local/miniconda/lib/python3.7/site-packages/smriprep/workflows/anatomical.py", line 230, in init_anat_preproc_wf
normalization_quality='precise' if not debug else 'testing')
File "/usr/local/miniconda/lib/python3.7/site-packages/niworkflows/anat/ants.py", line 183, in init_brain_extraction_wf
template_spec=template_spec)
File "/usr/local/miniconda/lib/python3.7/site-packages/niworkflows/utils/misc.py", line 50, in get_template_specs
tpl_target_path = get_template(in_template, **template_spec)
File "/usr/local/miniconda/lib/python3.7/site-packages/templateflow/api.py", line 39, in get
_s3_get(filepath)
File "/usr/local/miniconda/lib/python3.7/site-packages/templateflow/api.py", line 130, in _s3_get
r = requests.get(url, stream=True)
File "/usr/local/miniconda/lib/python3.7/site-packages/requests/api.py", line 72, in get
return request('get', url, params=params, **kwargs)
File "/usr/local/miniconda/lib/python3.7/site-packages/requests/api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/miniconda/lib/python3.7/site-packages/requests/sessions.py", line 512, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/miniconda/lib/python3.7/site-packages/requests/sessions.py", line 622, in send
r = adapter.send(request, **kwargs)
File "/usr/local/miniconda/lib/python3.7/site-packages/requests/adapters.py", line 507, in send
raise ProxyError(e, request=request)
requests.exceptions.ProxyError: HTTPSConnectionPool(host='templateflow.s3.amazonaws.com', port=443): Max retries exceeded with url: /tpl-OASIS30ANTs/tpl-OASIS30ANTs_res-01_T1w.nii.gz (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 403 Forbidden')))

@oesteban
Member

Or would I need to have the sysadmin unblock https://templateflow.s3.amazonaws.com from firewalls?

Correct. Your firewall is preventing templates from being pulled down.

@a3sha2
Member

a3sha2 commented Jul 12, 2019

I am having the same problem but with OSError: [Errno 28] No space left on device

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 316, in _send_procs_to_workers
    self.procs[jobid].run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 472, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 563, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 643, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 375, in run
    runtime = self._run_interface(runtime)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/utility/wrappers.py", line 144, in _run_interface
    out = function_handle(**args)
  File "<string>", line 5, in _get_template
  File "/usr/local/miniconda/lib/python3.7/site-packages/niworkflows/utils/misc.py", line 50, in get_template_specs
    tpl_target_path = get_template(in_template, **template_spec)
  File "/usr/local/miniconda/lib/python3.7/site-packages/templateflow/api.py", line 39, in get
    _s3_get(filepath)
  File "/usr/local/miniconda/lib/python3.7/site-packages/templateflow/api.py", line 144, in _s3_get
    f.write(data)
OSError: [Errno 28] No space left on device

@oesteban
Member

And I assume you checked that you can actually write to the path printed by

python -c "from templateflow.conf import TF_HOME; print(TF_HOME)"

?
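If the path prints fine but writes still fail, the question reduces to whether the process can create files under that directory. A minimal, generic sketch of that check (plain stdlib; `cache_is_writable` is an illustrative helper, not part of the templateflow package):

```python
import os
import tempfile

def cache_is_writable(path):
    """Return True if files can be created under *path* (e.g. the
    TemplateFlow cache directory). Illustrative helper only."""
    try:
        os.makedirs(path, exist_ok=True)
        # Creating (and auto-deleting) a temp file proves write access.
        with tempfile.NamedTemporaryFile(dir=path):
            pass
        return True
    except OSError:
        return False
```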

@a3sha2
Member

a3sha2 commented Jul 12, 2019

I am using singularity image

@oesteban
Member

Okay, then that is potentially a different issue. What fMRIPrep version are you using?

@a3sha2
Member

a3sha2 commented Jul 12, 2019

the current version now, 1.4.1

@oesteban
Member

And your full command line?

@a3sha2
Member

a3sha2 commented Jul 13, 2019

I was trying to run with different output spaces

singularity run -e -B /data/joy/BBL/applications/bids_apps/templateflow/:/opt/templateflow -B /data:/home/aadebimpe/data /data/joy/BBL/applications/bids_apps/fmriprep1.4.1.simg /home/aadebimpe//data/jux/BBL/projects/testACT/data/BIDS /home/aadebimpe//data/jux/BBL/projects/testACT/data/BIDS/fmriprep1.41 participant --participant_label 080557 --no-freesurfer -w /home/aadebimpe//data/jux/BBL/projects/testACT/data/BIDS/fmriprep1.41/wkdir --skip_bids_validation --force-bbr --fs-license-file license.txt --output-spaces MNI152Lin,MNI152NLin2009cAsym,MNI152NLin6Asym,MNI152NLin6Sym,NKI,OASIS30ANTs,PNC

@marcelzwiers

I am getting the same error as @a3sha2. After reading the possible solutions (https://neurostars.org/t/antsbrainextraction-failure-freesurfer-does-fine/3949/32 -> export SINGULARITYENV_TEMPLATEFLOW_HOME=[..]), it is still unclear to me how to use this approach when running multiple Singularity instances in parallel. Would that not give me concurrency problems when they all start downloading the same template? Do I have to prepare all the templates in advance?

@marcelzwiers

marcelzwiers commented Sep 3, 2019

This is what I see (note the count of zero-sized template files at the end):

singularity shell /opt/fmriprep/1.4.1/fmriprep-1.4.1.simg 
Singularity: Invoking an interactive shell within container...

Singularity fmriprep-1.4.1.simg:/home/mrphys/marzwi> echo $HOME
/home/fmriprep
Singularity fmriprep-1.4.1.simg:/home/mrphys/marzwi> ls -al $HOME/.cache/templateflow/tpl-OASIS30ANTs/
total 57544
drwxrwxrwx  2 root root      972 Aug  5 13:56 .
drwxrwxrwx 13 root root      295 Jul  9 13:37 ..
-rw-rw-rw-  1 root root       39 Jun  8 12:48 CHANGES
-rw-rw-rw-  1 root root      744 Jun  8 12:48 template_description.json
-rw-rw-rw-  1 root root      537 Jun  8 12:48 template_sample.tsv
-rw-rw-rw-  1 root root 32363109 Jun  8 12:48 tpl-OASIS30ANTs_res-01_T1w.nii.gz
-rw-rw-rw-  1 root root   292399 Jun  8 12:48 tpl-OASIS30ANTs_res-01_desc-4_dseg.nii.gz
-rw-rw-rw-  1 root root      164 Jun  8 12:48 tpl-OASIS30ANTs_res-01_desc-4_dseg.tsv
-rw-rw-rw-  1 root root   303542 Jun  8 12:48 tpl-OASIS30ANTs_res-01_desc-6_dseg.nii.gz
-rw-rw-rw-  1 root root      205 Jun  8 12:48 tpl-OASIS30ANTs_res-01_desc-6_dseg.tsv
-rw-rw-rw-  1 root root   264866 Jun  8 12:48 tpl-OASIS30ANTs_res-01_desc-BrainCerebellumExtraction_mask.nii.gz
-rw-rw-rw-  1 root root   255798 Jun  8 12:48 tpl-OASIS30ANTs_res-01_desc-BrainCerebellumRegistration_mask.nii.gz
-rw-rw-rw-  1 root root  5082326 Jun  8 12:48 tpl-OASIS30ANTs_res-01_desc-brain_T1w.nii.gz
-rw-rw-rw-  1 root root   182192 Jun  8 12:48 tpl-OASIS30ANTs_res-01_desc-brain_mask.nii.gz
-rw-rw-rw-  1 root root   446641 Jun  8 12:48 tpl-OASIS30ANTs_res-01_label-BS_probseg.nii.gz
-rw-rw-rw-  1 root root   887554 Jun  8 12:48 tpl-OASIS30ANTs_res-01_label-CBM_probseg.nii.gz
-rw-rw-rw-  1 root root  5260423 Jun  8 12:48 tpl-OASIS30ANTs_res-01_label-CGM_probseg.nii.gz
-rw-rw-rw-  1 root root  6205417 Jun  8 12:48 tpl-OASIS30ANTs_res-01_label-CSF_probseg.nii.gz
-rw-rw-rw-  1 root root   726641 Jun  8 12:48 tpl-OASIS30ANTs_res-01_label-SCGM_probseg.nii.gz
-rw-rw-rw-  1 root root  4057922 Jun  8 12:48 tpl-OASIS30ANTs_res-01_label-WM_probseg.nii.gz
-rw-rw-rw-  1 root root  2589048 Jun  8 12:48 tpl-OASIS30ANTs_res-01_label-brain_probseg.nii.gz
Singularity fmriprep-1.4.1.simg:/home/mrphys/marzwi> find $HOME/.cache/templateflow -type f -size 0 | wc -l
386

@oricon
Author

oricon commented Sep 3, 2019 via email

@a3sha2
Member

a3sha2 commented Sep 3, 2019

@marcelzwiers
this is what I did, and it works from 1.4.1 up to the current fMRIPrep.

export SINGULARITYENV_TEMPLATEFLOW_HOME=/home/user/templateflow
then

singularity exec fmriprep1.4.1.simg python -c "from templateflow import api; print(api.get('MNI152Lin', 'MNI152NLin2009cAsym', 'MNI152NLin6Asym', 'MNI152NLin6Sym', 'PNC', resolution=2))"


@a3sha2
Member

a3sha2 commented Sep 3, 2019

then (note I bind /data into the container):

singularity run -e -B /home/user/.cache/templateflow:/home/user/templateflow -B /data:/home/user/data fmriprep1.4.1.simg .........

@oesteban
Member

oesteban commented Sep 3, 2019

@marcelzwiers did @a3sha2's suggestions work for you?

@marcelzwiers

Maybe it could work for me, but it is a major pain for our center. I maintain an fMRIPrep module on our central storage that hundreds of researchers have access to. I cannot tell all of them to follow @a3sha2's suggestion for every fMRIPrep run they want to do (people here will just stick to the old version). Is there a plan to solve this issue in future fMRIPrep versions?

@a3sha2
Member

a3sha2 commented Sep 6, 2019

It will work; we also use it like that on our cluster at Penn, @marcelzwiers.

@effigies
Member

effigies commented Sep 6, 2019

It should also be possible to set up a single cache for the entire cluster and include the environment variable in your module so that the templates only need to be downloaded once, and shared.

@oesteban correct me if I'm wrong, but as long as you're using a single version of fMRIPrep, there should be no chance of needing to fetch new templates, so you could have a many-to-one relationship of fMRIPrep versions to templateflow caches.
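A shared cluster-wide cache works because templateflow resolves its home from the environment first. Roughly (a sketch of the lookup order, not the package's actual code; the function name is made up):

```python
import os
from pathlib import Path

def resolve_templateflow_home(environ=None):
    """Mimic (roughly) how templateflow picks its cache directory:
    $TEMPLATEFLOW_HOME wins if set, otherwise fall back to the default
    ~/.cache/templateflow. Sketch only, not the library's exact logic."""
    environ = os.environ if environ is None else environ
    custom = environ.get("TEMPLATEFLOW_HOME")
    if custom:
        return Path(custom)
    return Path.home() / ".cache" / "templateflow"
```

Setting the variable in the module file therefore points every user (and every fMRIPrep version) at the same shared directory.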

@marcelzwiers

@effigies It is a bit confusing to me. So what I understand is that there is an incomplete .cache in the 1.4.1 Singularity container (it is unclear to me why all the standard templates are not simply put there), and that I should manually create a single read-only(?) template cache folder on central storage that serves the different fMRIPrep versions on our cluster.

The alternative is that every user does what @a3sha2 does (i.e. first run the api.get command to fetch the templates they need) in their own home directory, and I make the settings correct for them (i.e. do the binding) in the fMRIPrep common.tcl module file.

Both options seem like a major step down for us, compared to the previous fMRIPrep versions.

@effigies
Member

effigies commented Sep 6, 2019

TBH it's not really clear to me why these aren't showing up in Singularity images. They should be present in the Docker images, and Singularity should replicate that. But I believe Oscar has looked into this more. Possibly we're downloading the templates into a directory that doesn't get searched when Singularity pulls in most of the host environment?

@effigies
Member

effigies commented Sep 6, 2019

Ah, yes:

https://github.com/poldracklab/fmriprep/blob/e8e740411d7c83c4af4526f82e862ae9363ab055/Dockerfile#L164-L170

We're pulling them into $HOME for some reason, so they naturally won't exist for your user. What if you just set:

export SINGULARITYENV_TEMPLATEFLOW_HOME=/home/fmriprep/.cache/templateflow

And in future versions, we can set TEMPLATEFLOW_HOME=/home/fmriprep/.cache/templateflow inside the container.

@marcelzwiers

Thanks @effigies I will try that first

@marcelzwiers

NB, there are many standard templates of size zero in the singularity container, is that different in the docker? See e.g.:

Singularity fmriprep-1.4.1.simg:/home/mrphys/marzwi> ls -al /home/fmriprep/.cache/templateflow/tpl-MNI152Lin/
total 3
drwxrwxrwx  3 root root  539 Aug  5 13:56 .
drwxrwxrwx 13 root root  295 Jul  9 13:37 ..
-rw-rw-rw-  1 root root  196 Jun  8 12:48 CHANGES
-rw-rw-rw-  1 root root  678 Jun  8 12:48 LICENSE
drwxrwxrwx  2 root root   73 Aug  5 13:56 scripts
-rw-rw-rw-  1 root root 1248 Jun  8 12:48 template_description.json
-rw-rw-rw-  1 root root    0 Jun  8 12:48 tpl-MNI152Lin_res-01_PD.nii.gz
-rw-rw-rw-  1 root root    0 Jun  8 12:48 tpl-MNI152Lin_res-01_T1w.nii.gz
-rw-rw-rw-  1 root root    0 Jun  8 12:48 tpl-MNI152Lin_res-01_T2w.nii.gz
-rw-rw-rw-  1 root root    0 Jun  8 12:48 tpl-MNI152Lin_res-01_desc-brain_mask.nii.gz
-rw-rw-rw-  1 root root    0 Jun  8 12:48 tpl-MNI152Lin_res-01_desc-head_mask.nii.gz
-rw-rw-rw-  1 root root    0 Jun  8 12:48 tpl-MNI152Lin_res-02_PD.nii.gz
-rw-rw-rw-  1 root root    0 Jun  8 12:48 tpl-MNI152Lin_res-02_T1w.nii.gz
-rw-rw-rw-  1 root root    0 Jun  8 12:48 tpl-MNI152Lin_res-02_T2w.nii.gz
-rw-rw-rw-  1 root root    0 Jun  8 12:48 tpl-MNI152Lin_res-02_desc-brain_mask.nii.gz
-rw-rw-rw-  1 root root    0 Jun  8 12:48 tpl-MNI152Lin_res-02_desc-head_mask.nii.gz

@effigies
Member

effigies commented Sep 6, 2019

That's the same in Docker. The Docker image only provides the three templates we use by default, but the templateflow package is intended to aggregate a lot of templates, not just serve as a component of fMRIPrep.

So you will run into issues if people want to use templates besides these, at which point being able to provide alternative locations for the cache will be useful.

@marcelzwiers

marcelzwiers commented Sep 6, 2019

Maybe it would be good to add that to the docs, that these are the fmriprep standard templates... It now says:

--output-spaces

Standard and non-standard spaces to resample anatomical and functional images to. Standard spaces may be specified by the form <SPACE>[:res-<resolution>][:cohort-<label>][...], where <SPACE> is a keyword (valid keywords: "MNI152Lin", "MNI152NLin2009cAsym", "MNI152NLin6Asym", "MNI152NLin6Sym", "MNIInfant", "MNIPediatricAsym", "NKI", "OASIS30ANTs", "PNC", "fsLR", "fsaverage") or a path pointing to a user-supplied template, and may be followed by optional, colon-separated parameters. Non-standard spaces (valid keywords: anat, T1w, run, func, sbref, fsnative) imply specific orientations and sampling grids. Important to note, the res-* modifier does not define the resolution used for the spatial normalization

@oesteban
Member

oesteban commented Sep 6, 2019

Some notes:

  • We used to define $TEMPLATEFLOW_HOME and point it to $HOME/.cache/templateflow in the container, but this is problematic with Singularity because, once set in the image, it cannot be overwritten via export SINGULARITYENV_TEMPLATEFLOW_HOME.

  • We redefine $HOME to point to /home/fmriprep in the image (for Docker convenience). Most Singularity installations automatically bind /home into /home, which shadows /home/fmriprep. However, some installations do not reset $HOME to the user's home directory, which makes TemplateFlow keep using /home/fmriprep, a location that is, in the best case, read-only.

It should also be possible to set up a single cache for the entire cluster and include the environment variable in your module so that the templates only need to be downloaded once, and shared.

Correct. The only caveat is that the bound folder must exist within the Singularity image prior to binding. Perhaps we should add a mkdir /opt/templateflow in the Dockerfile to provide a standard bind point. Then, you'd do:

export SINGULARITYENV_TEMPLATEFLOW_HOME=/opt/templateflow
singularity run -B /shared/folder/in/hpc:/opt/templateflow ...

@oesteban correct me if I'm wrong, but as long as you're using a single version of fMRIPrep, there should be no chance of needing to fetch new templates, so you could have a many-to-one relationship of fMRIPrep versions to templateflow caches.

If you play with --output-spaces, you will likely end up fetching some template that is not available by default.

@marcelzwiers
Both options seem like a major step down for us, compared to the previous fmriprep versions

I agree with this, but at the other end of the rope there are people working with pediatric or nonhuman templates who badly needed this flexibility. We are completely open to implementing any ideas that make both types of users happy.

@marcelzwiers

marcelzwiers commented Sep 6, 2019

So you will run into issues if people want to use templates besides these, at which point being able to provide alternative locations for the cache will be useful.

Can't the container be made such that they then download these templates to a scratch space inside the container? Or to their home directory, working directory, or any user-specified directory?

@oesteban
Member

oesteban commented Sep 6, 2019

Can't the container be made such that they then download these templates to a scratch space inside the container?

This is the default for Docker, however you need elevated privileges to run Singularity with write permissions.

@marcelzwiers

I personally don't believe there is much added value in using non-standard templates, especially given the brilliant job ANTs is doing. So we were just trying to use a standard template, but ran into these issues...

@oesteban
Member

oesteban commented Sep 6, 2019

So we were just trying to use a standard template, but ran into these issues...

As I said, we are very happy to work towards covering all use-cases and at the same time minimize the friction.

@marcelzwiers

Why not use the working-directory as a scratch space instead of /home/fmriprep/.cache/templateflow?

@oesteban
Member

oesteban commented Sep 6, 2019

How would you know that path to pre-fetch templates?

Conversely, nothing stops you from setting:

export SINGULARITYENV_TEMPLATEFLOW_HOME=$PWD/templateflow

We need to consider something that works well for Docker users too. Maybe, if we take steps toward building Singularity images separately from Docker images, we will have room for these tailored tweaks.

@marcelzwiers

How would you know that path to pre-fetch templates?

I clearly don't understand the problem space you are dealing with, because I don't understand your answer :-). Anyhow, thanks very much for your help; we are already very happy with the default templates that work (which I'm testing right now).

@marcelzwiers

p.s. with working directory I was referring to [-w WORK_DIR], not the current/parent working directory

@oesteban
Member

oesteban commented Sep 6, 2019

I clearly don't understand the problem-space you are dealing with because I don't understand your answer

As you correctly assessed, the image is built with only a subset of TemplateFlow objects downloaded. The rest are just stubs (zero-sized files) that are dynamically downloaded and replaced the first time you try to use them via templateflow.

Therefore, I'm referring to the problem of downloading data to a path that is unknown at image build time. For your use case, if you override the default -w, you would set:

export SINGULARITYENV_TEMPLATEFLOW_HOME=<WORK_DIR given to -w or its parent>/templateflow

before running singularity.

The first time you do this, it will pull down the templates required by your --output-spaces and --skull-strip-template. Subsequent calls will reuse the downloaded data, if the working directory is still there.

@marcelzwiers

marcelzwiers commented Sep 8, 2019

export SINGULARITYENV_TEMPLATEFLOW_HOME=/home/fmriprep/.cache/templateflow works well for the standard templates. I thought about it a little more, to allow for non-standard templates, and decided to copy the templateflow directory from the container to a directory that is writable for all users, and set that as the SINGULARITYENV_TEMPLATEFLOW_HOME. There may be some concurrency problems (two users writing the same template) or even security problems (any of our users can put any file there), but I think they will be minor or never happen in the real world. Thanks again for your help; I think fMRIPrep is a great piece of software!

@oesteban
Member

oesteban commented Oct 2, 2019

Hi @marcelzwiers, could you check whether the --home argument of Singularity works for you? I am writing extended documentation in #1801 (please chime in to make sure it is correct). The problem we dealt with here might be completely covered in this new section - https://4189-53608443-gh.circle-artifacts.com/0/tmp/src/fmriprep/docs/_build/html/singularity.html#templateflow-and-singularity

Further comments and explanations can be found in #1778 (comment).

@oesteban
Member

I think this has been addressed. Please reopen if necessary.

@claytonjschneider

claytonjschneider commented Apr 21, 2022

Hi all, while it seems this problem has a solution in Singularity via SINGULARITYENV_TEMPLATEFLOW_HOME, I have not been able to find a similar workaround using the fMRIPrep Docker container. It seems that $HOME is also reset there, and I tried passing $TEMPLATEFLOW_HOME as an -e flag pointing to a user-independent location on our server, without any luck. Once fMRIPrep is running, it is still looking for templates in /home/fmriprep/.cache/templateflow. Any help would be greatly appreciated! @oesteban

Description:  Could not create IO object for reading file /home/fmriprep/.cache/templateflow/tpl-MNIPediatricAsym/cohort-2/tpl-MNIPediatricAsym_cohort-2_res-1_T1w.nii.gz
  Tried to create one of the following:
    BMPImageIO
    BioRadImageIO
    Bruker2dseqImageIO
    GDCMImageIO
    GE4ImageIO
    GE5ImageIO
    GiplImageIO
    HDF5ImageIO
    JPEGImageIO
    JPEG2000ImageIO
    LSMImageIO
    MGHImageIO
    MINCImageIO
    MRCImageIO
    MetaImageIO
    NiftiImageIO
    NrrdImageIO
    PNGImageIO
    StimulateImageIO
    TIFFImageIO
    VTKImageIO
  You probably failed to set a file suffix, or
    set the suffix to an unsupported type.


 file /home/fmriprep/.cache/templateflow/tpl-MNIPediatricAsym/cohort-2/tpl-MNIPediatricAsym_cohort-2_res-1_T1w.nii.gz

Here's my current script:

        docker run --rm -u $(id -u) \
                --name FMRIPREP_$(basename $(dirname $1))_$sub \
                -v $1:/data:ro \
                -v $1/derivatives:/out \
                -v $1/work:/work \
                -v $(dirname $1)/bids-database:/bids-database \
                -v $FREESURFER_HOME/license.txt:/opt/freesurfer/license.txt \
                -v /data/perlman/moochie/resources/templateflow:/home/fmriprep/.cache/templateflow \
                -e TEMPLATEFLOW_HOME='/home/fmriprep/.cache/templateflow' \
                nipreps/fmriprep:latest \
                /data /out/fmriprep \
                participant \
                -w /work \
                --longitudinal \
                --skip_bids_validation \
                --use-aroma \
                --nthreads 16 \
                --low-mem \
                --mem-mb 15000 \
                --output-spaces MNIPediatricAsym:cohort-2:res-2 \
                --participant-label $sub &
