all rules should have resources
#103
Comments
You observed that behaviour for local executions, right? On a regular 64 GB memory machine the default settings will start as many STAR instances as there are jobs allowed, which was 6 for us. This happily violated the constraint that 6 × 16 GB (memory for star_map) must not exceed 64 GB (memory of the machine).
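For illustration, a minimal sketch of how the star_map rule could declare the 16 GB mentioned above so the scheduler can account for it (the file patterns, thread count, and command are assumptions, not taken from the actual workflow):

```
rule star_map:
    input:
        "reads/{sample}.fastq.gz"
    output:
        "mapped/{sample}.bam"
    threads: 4
    # Declare STAR's memory footprint so the scheduler counts it
    # against the machine-wide budget instead of over-subscribing.
    resources:
        mem_mb=16000
    shell:
        # placeholder invocation; the real STAR command line with
        # genome index etc. goes here
        "STAR --runThreadN {threads} --readFilesIn {input} > {output}"
```

With the rule marked up, passing a total memory budget on the command line should keep the scheduler from starting more instances than fit:

```
snakemake --cores 6 --resources mem_mb=64000
```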
We've been using hisat2 for our tests on memory-restricted machines. (STAR has somewhat adventurous memory requirements, which makes it hard to use in our current round of tests.)
Yeah, I agree :) I would like to avoid having to globally limit the number of jobs, because the scheduler should be able to respect our constraints. That's why I'd like us to mark up all rules first, so that we can rule out the bug in this obvious place.
I have updated snakemake to version 6.9.1 on Debian and no longer see this problem. May I ask on what platform you made your observations?
We had these problems on AWS. The problem is not easily reproduced because it depends on when snakemake schedules tasks on the actual machine. I'll upgrade snakemake as well now. |
On a system with limited resources snakemake will sometimes schedule too many concurrent tasks, which leads to unexpected memory exhaustion. This is hard to reproduce, but it is a result of not having a `resources` section for all rules. We could avoid this by adding a `resources` section to all rules, not just the known memory-intensive ones; see the sketch below.
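Until every rule is marked up explicitly, one possible stopgap — a sketch, assuming your snakemake version supports the `--default-resources` flag — is to assign a conservative default memory claim to every rule that lacks its own, so unannotated rules still count against the budget:

```
# Every rule without an explicit resources section is assumed to need
# 2000 MB (an illustrative figure), so it still counts against the
# 64000 MB machine budget.
snakemake --cores 6 \
    --resources mem_mb=64000 \
    --default-resources "mem_mb=2000"
```

Explicit per-rule `resources` sections remain preferable, since a single default will over- or under-estimate individual rules.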