process memory: some (many?) sge clusters use h_vmem, not virtual_free #332
Reading more about gridengine, I realize that resources like `virtual_free` and `h_vmem` are site-configured complexes, so no single default can suit every cluster. So I can understand that nextflow doesn't need to support `h_vmem` out of the box. Close this issue if you agree... thanks.
One more observation: while a static `memory` value works as expected, it breaks when I try to dynamically set memory with a closure (multiplying by `task.attempt` for the retry mechanism); that results in an error. It seems that by multiplying the memory by `task.attempt`, the closure has somehow changed the memory object.
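For reference, a minimal sketch of the two cases as reconstructed from the rest of this thread (the process names and scripts are placeholders, not from the original report):

```groovy
// Static memory: works, because the value interpolated into the
// clusterOptions string is a plain MemoryUnit.
process demo_static {
    memory 16.GB
    clusterOptions "-l h_vmem=${16.GB.toString().replaceAll(/[\sB]/,'')}" // -> h_vmem=16G

    """
    echo ok
    """
}

// Dynamic memory: inside the closures, plain `memory` no longer refers to
// the resolved per-attempt value; `task.memory` (the fix that emerges
// below) must be used instead.
process demo_dynamic {
    memory { 16.GB * task.attempt }
    clusterOptions { "-l h_vmem=${task.memory.toString().replaceAll(/[\sB]/,'')}" }

    """
    echo ok
    """
}
```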
In a dynamic rule you need to reference it as `task.memory`, not `memory`.
Keep in mind that consumables like `h_vmem` are requested per slot, while nextflow's `memory` directive describes the whole task; with N slots the scheduler multiplies the per-slot request by N. You can't use the same value for both characteristics unless you're asking for a single slot.
@pditommaso -- Hmmm, I get an error when I try that.

@hartzell -- yes, I was planning on dividing the number by the number of requested slots (accessed via the `task.cpus` variable).
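As a sketch, the per-slot division could look like this (the `toGiga`/`intdiv` rounding choice is an assumption, not something stated in the thread):

```groovy
process demo_per_slot {
    cpus 4
    memory { 16.GB * task.attempt }
    // h_vmem is requested per slot, so divide the task total by the slot count.
    clusterOptions { "-l h_vmem=${task.memory.toGiga().intdiv(task.cpus)}G" }

    """
    echo ok
    """
}
```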
There's a glitch in the syntax: inside a process definition, directives are not assignments, so it should be written without the `=` sign (the `clusterOptions = ...` form belongs in `nextflow.config`, not in the process body).
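Assuming the glitch is the `=` sign, the two valid forms look like this (`demo` is a placeholder name):

```groovy
// main.nf -- in a process body, directives are method calls, no '=':
process demo {
    clusterOptions { "-l h_vmem=${task.memory.toString().replaceAll(/[\sB]/,'')}" }

    """
    echo ok
    """
}
```

```groovy
// nextflow.config -- here the assignment form with '=' is what's expected:
process {
    clusterOptions = { "-l h_vmem=${task.memory.toString().replaceAll(/[\sB]/,'')}" }
}
```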
Oops, thanks for this. I will add this info to the google group to provide a reference for others. At one point I noticed the `=` sign, but since it was working (without the dynamic rule) I didn't think twice about it. All working now! Thanks!
Hi, I'd like to try this, but I don't really use nextflow apart from running one pipeline (distiller) that was made by other people. Can someone please show how to add the division by the number of cores to this syntax?
Please use the discussion forum: https://groups.google.com/forum/#!forum/nextflow
Hi, when I use @pditommaso's approach, removing the '=' sign, I still get a similar error to the one described above. Thanks!
Please post a replicable test case in a separate issue.
As @pditommaso mentioned, it should be `task.memory` rather than `memory`; making that change solved the issue.
Hello @pditommaso, my configuration should give 128 GB of memory per core using the long configuration. However, when I look at the generated `.command.run` file, the workflow uses the correct memory specification for the "normal" configuration (`mem_free`), but it uses the generic configuration when it comes to `h_vmem`, preventing my jobs from finishing correctly. I could hard-code resources into the single processes, but that would limit the flexibility of the workflow itself. Is there some solution? Am I placing the configuration in the wrong place? Thank you in advance for your help.
You should make the cluster option evaluated dynamically on the actual task memory value; therefore you should use a closure for `clusterOptions` that references `task.memory`.
@pditommaso just to get it right, something like this:
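Presumably along these lines, with plain `memory` in the final closure (a reconstruction, not the original snippet):

```groovy
withLabel: small {
    cpus = 1
    memory = { 4.GB * task.attempt }
    time = { 6.h * task.attempt }
    clusterOptions = { "-l h_vmem=${memory.toString().replaceAll(/[\sB]/,'')}" }
}
```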
Am I correct?
Sorry, `task.memory` not `memory`:

```groovy
withLabel: small {
    cpus = 1
    memory = { 4.GB * task.attempt }
    time = { 6.h * task.attempt }
    clusterOptions = { "-l h_vmem=${task.memory.toString().replaceAll(/[\sB]/,'')}" }
}
```

Great, I'll try that later today! Thank you!
@pditommaso it worked perfectly, thank you!
When using the sge executor, setting `memory 16.GB` in a process results in `-l virtual_free=16G` appearing in the header of the `.command.run` file. Some sge clusters don't pay attention to this, and instead use `h_vmem` (I'm not sure how common/uncommon this is!). Of course, one can use `clusterOptions '-l h_vmem=16G'`, but then one can't take advantage of the retry mechanism afforded by dynamic computing resources. Could the way that sge interprets the `memory` directive be made configurable?

PS... I can use this as a workaround, but it's ugly:
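Judging from the rest of the thread, the workaround is presumably the by-hand duplication of the memory request (a sketch, not the original snippet):

```groovy
// Re-express the memory directive as an h_vmem consumable by hand.
clusterOptions "-l h_vmem=${16.GB.toString().replaceAll(/[\sB]/,'')}" // -> -l h_vmem=16G
```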