Consider other cluster definitions for task resource allocation #107
Hi @santiagorevale, I had a discussion with @alneberg about just this only a few days ago. Yes, I think that it should be possible. Something like:

```groovy
params.memory_per_core = 16.GB

process {
    cpus = 1
    memory = { task.cpus * params.memory_per_core }
    withName: 'bigprocess' {
        cpus = 8
    }
}
```

This is aside from the existing `check_max` logic. Is this kind of what you were thinking of?

Phil
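For concreteness, with `params.memory_per_core = 16.GB` the directives in that sketch would resolve as follows (illustrative arithmetic, not output from a real run):

```groovy
// Default processes: cpus = 1  ->  memory = 1 * 16.GB = 16.GB
// 'bigprocess':      cpus = 8  ->  memory = 8 * 16.GB = 128.GB
```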
Hi @ewels, The issue I'm referring to is not exactly like that; it would be more like the following.

I was thinking that maybe the changes below should make it work for most scenarios. Let me know what you think about it. Please double-check the code because I don't have experience in Groovy.

In `nextflow.config`:

```groovy
def check_slots( cpus, memory ) {
    if (params.hasProperty("memory_per_core")) {
        // Slots needed to cover the memory request, rounded up to a whole slot
        def memory_slots = (int) Math.ceil( memory.toGiga() / params.memory_per_core.toGiga() )
        slots = Math.max( cpus, memory_slots )
    } else {
        slots = cpus
    }
    return check_max( slots, 'cpus' )
}
```

In `base.config`:

```groovy
// ...
// Re-define "cpus" property
withName:makeSTARindex {
    cpus = { check_slots( 10, 80.GB * task.attempt ) }
    memory = { check_max( 80.GB * task.attempt, 'memory' ) }
    time = { check_max( 5.h * task.attempt, 'time' ) }
}
// ...
```

Finally, the one scenario that is not being properly handled (also not a very common one) is when you need to limit the number of CPUs independently of the number of slots you reserve:

```bash
# Software ABC requires 32 Gb per core
# Memory per core of your cluster is 16 Gb
# So you want to be able to do something like:
SLOTS=2
CPUS=1
qsub -pe shmem ${SLOTS} ./ABC --threads=${CPUS}
```

I couldn't come up with a solution that doesn't involve incorporating an additional variable to distinguish between slots and CPUs. Thoughts?

Cheers,
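To make the slot arithmetic concrete, assuming `check_slots` computes the slot count as the memory request divided by the per-core memory, rounded up, the `makeSTARindex` example would resolve like this on a hypothetical cluster with 16 Gb reserved per core:

```groovy
// check_slots( 10, 80.GB ), with params.memory_per_core = 16.GB
//   memory_slots = ceil( 80 / 16 ) = 5
//   slots        = max( 10, 5 )    = 10   -> cpus drives the request
//
// On retry (task.attempt = 2): check_slots( 10, 160.GB )
//   memory_slots = ceil( 160 / 16 ) = 10
//   slots        = max( 10, 10 )    = 10  -> memory now matters equally
```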
@pditommaso - any thoughts about handling this within core Nextflow somehow?
My understanding is that it would just need:

```groovy
process {
    cpus = 1
    memory = { task.cpus * params.memory_per_core }
}
```

What am I missing?
Yes, I'm not totally sure either to be honest.

I don't see why you would ever need to fix a number like this. Even if the tool only uses 1 CPU, you can presumably give the process 2? It wastes a little CPU, but that usually can't be used by other tasks on the same node anyway.
Tend to agree. Let's see what @santiagorevale says.
Hi guys,

The main goal is to be able to set up the task requirements once, in `base.config`. However, the way it's currently set up:

```groovy
withName:makeSTARindex {
    cpus = { check_max( 10, 'cpus' ) }
    memory = { check_max( 80.GB * task.attempt, 'memory' ) }
    time = { check_max( 5.h * task.attempt, 'time' ) }
}
```

you are assigning CPUs and memory independently of each other.

This way of defining the task requirements is not compatible with a scenario where you can't queue a job based on memory requirements. Re-submitting this job with an increased memory request will result in the exact same allocation, because memory is allocated based on the number of CPUs specified. You may wonder: "why don't you redefine every task requirement in your config file to meet your needs?" And that's what I want to avoid. The task requirements should always be met regardless of which engine you are picking.

In my scenario, the configuration would look more like:

```groovy
process {
    memory = 64.GB
    cpus = { task.memory / params.memory_per_core }
}
```

There are applications that allocate memory based on the number of cores: if you specify 1 core, it will allocate 8 Gb of memory; 2 cores, 16 Gb; and so on. Thus, specifying slots and CPUs become two independent things. See the following example.

Scenario: the software needs 8 Gb of memory but only uses 1 CPU, on a cluster that reserves 4 Gb of memory per slot:

```bash
# 1 CPU assigned to the application
# 1 slot reserved -> 4 Gb of Memory reserved
# This job will fail.
qsub -b y -pe env 1 bash application -cpus 1 *.fastq.gz

# 1 CPU assigned to the application
# 2 slots reserved -> 8 Gb of Memory reserved
# This job will work.
qsub -b y -pe env 2 bash application -cpus 1 *.fastq.gz
```

Please, let me know if this is still confusing.

Cheers,
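One caveat with `cpus = { task.memory / params.memory_per_core }` as sketched above: the division should round up, so the reserved slots always cover the memory request. A more explicit version (a sketch, assuming `memory_per_core` is given as a plain number of Gb rather than a memory unit) might be:

```groovy
params.memory_per_core = 16  // Gb reserved per slot (assumed value)

process {
    memory = 64.GB
    // Round up so the slots always cover the memory request
    cpus   = { (int) Math.ceil( task.memory.toGiga() / params.memory_per_core ) }
}
```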
You can do that, just add it in your profile configuration.
Hi @pditommaso,

Thanks for the tip. However, that's not the appropriate solution. Again, the idea is to define the requirements once and not have to do it again per profile.

Currently, we are defining in `base.config`:

```groovy
// [...]
withName:dupradar {
    cpus = { check_max( 1, 'cpus' ) }
    memory = { check_max( 16.GB * task.attempt, 'memory' ) }
}
withName:featureCounts {
    memory = { check_max( 16.GB * task.attempt, 'memory' ) }
}
// [...]
```

Now, because the cluster I'm using reserves memory based on core slots, I have to re-define this in my profile file, which will look like:

```groovy
// [...]
withName:dupradar {
    cpus = { check_max( task.memory.toGiga() / params.memory_per_core, 'cpus' ) }
}
withName:featureCounts {
    cpus = { check_max( task.memory.toGiga() / params.memory_per_core, 'cpus' ) }
}
// [...]
```

And I'll have to do that for every task. If any memory requirement changes in the future, the profile will have to be updated again, and this is something we should be avoiding.
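For the harder case where slots and CPUs must genuinely differ (as in the `qsub -pe shmem` example earlier in the thread), one possible direction is Nextflow's `clusterOptions` directive, which passes raw scheduler flags. This is an untested sketch: the process name `ABC`, the parallel environment name `shmem`, and the `memory_per_core` value are all assumptions about the cluster, and it relies on `penv` being left unset so Nextflow does not emit its own `-pe` option.

```groovy
params.memory_per_core = 16  // Gb reserved per slot (assumed)

process {
    withName: 'ABC' {            // hypothetical process name
        cpus   = 1               // threads the tool is told to use
        memory = 32.GB
        // Reserve enough SGE slots to cover the memory request,
        // without raising the thread count the tool sees
        clusterOptions = {
            def slots = Math.max( task.cpus,
                Math.ceil( task.memory.toGiga() / params.memory_per_core ) as int )
            "-pe shmem ${slots}"
        }
    }
}
```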
I will close this as it's rather a question of how to submit resource requirements than a particular issue with the pipeline. If this is of general concern, you can also open an issue with an improvement suggestion on nf-core/tools, and we might consider/discuss this for the template there.
Hi there!

In the centre I'm working at, we use SGE as a job scheduler. The way slots are reserved for jobs is core based. Given the following scenario: each core reserves 4 Gb of RAM, so if I have a job that uses 1 core and 16 Gb of RAM, then I have to ask for 4 cores to be able to run it properly.

My question then is: would it be possible to update the code somehow so memory/cpus validations would be automatically adjusted based on this? This way, we wouldn't have to re-define the process requirements (at least for memory/cpus).

I was thinking maybe of adding a `memory_per_core` param and tweaking the `check_max` function to consider this if defined? Let me know your thoughts or if you have any other idea to sort this out.

Thank you very much in advance!

Cheers,
Santiago