optimal run args for running on HPC? #387
Comments
Are you saying that your job waits in your cluster queue for 3h, or that the job starts but mriqc is idle for 3h before it starts doing any computation?

On Wed, Feb 15, 2017 at 2:36 PM, Mathias Goncalves wrote:

> This is my current command:
>
> mriqc --use-plugin plugin.yml --n_procs 8 --mem_gb 30 [mandatories]
>
> plugin.yml:
>
> {plugin: 'SLURM', plugin_args: {'sbatch_args: --time=1-00:00:00 --mem=30GB -c 4'}}
>
> I have to wait around 3 hours before the workflows actually start to run, and even then it seems to take longer than expected. Any suggestions for improving this?
>
> *I'm running on Prisma data with multi-slice acquisition, functionals are about 300 vols, 2-4 runs each.
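A side note on the plugin file quoted above: nipype's SLURM plugin looks for an `sbatch_args` key inside `plugin_args`, and the quoting shown collapses the key and its value into a single string. A hedged sketch of a well-formed version, keeping the same resource values (written here as a small shell step purely for illustration), might be:

```bash
# Sketch of a corrected plugin.yml (same time/memory/CPU values as above):
# 'sbatch_args' is the key, and the sbatch options are its quoted value.
cat > plugin.yml <<'EOF'
plugin: SLURM
plugin_args: {sbatch_args: '--time=1-00:00:00 --mem=30GB -c 4'}
EOF
```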
The job starts, but it seems to be building the functional workflows for almost 3 hours.
This is very unusual - could you share a dataset that you get these performance issues with?
Cross-referenced: "A step towards some recommendations for nipreps#387 and nipreps#388"
Hi @mgxd, I was coming back to this and realized that you are using the SLURM plugin. I don't really know the details of how that plugin is implemented in nipype, but the long set-up time seems to me closely related to the plugin choice. When we run mriqc on our HPC, these are our settings:

For parallelizing this many processes you could use a job array (a sketch follows below). If you are not allowed to use job arrays, then a solution like launcher may do. I am about to update the documentation with some profiling we've been doing on mriqc. Does this answer your question?
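A minimal sketch of the job-array approach, in case it is useful: each array task runs mriqc on one participant inside its own allocation, so nipype parallelizes locally on that node and never has to submit jobs itself. The paths, array size, time/memory numbers, and the exact --participant-label spelling are illustrative assumptions (flag spellings have varied across mriqc versions), not settings taken from this thread.

```bash
#!/bin/bash
# Hypothetical SLURM job-array wrapper: one array task per subject, each running
# a plain local mriqc that parallelizes with --n_procs on its own node.
# Paths, array size, resources, and flag spellings are assumptions; adjust them.
#SBATCH --job-name=mriqc
#SBATCH --array=0-15
#SBATCH --time=08:00:00
#SBATCH --cpus-per-task=8
#SBATCH --mem=30G

BIDS_DIR=/path/to/bids_dataset
OUT_DIR=/path/to/mriqc_output

# Build the subject list from the BIDS directory and pick one per array task.
subjects=($(ls -d "${BIDS_DIR}"/sub-* | xargs -n1 basename | sed 's/^sub-//'))
subject=${subjects[${SLURM_ARRAY_TASK_ID}]}

mriqc "${BIDS_DIR}" "${OUT_DIR}" participant \
    --participant-label "${subject}" \
    --n_procs "${SLURM_CPUS_PER_TASK}" --mem_gb 30
```

If job arrays are not available, the same per-participant command can be fanned out by a wrapper such as launcher, as mentioned above.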
@oesteban thanks for the info, I'm looking forward to the documentation!