Working on a Slurm cluster that does not allow the `--cpus-per-task` option prevented me from running the workflow. I just commented out the parts of slurm_utils.py that use cpus-per-task and it worked fine. I suggest making cpus-per-task a configurable option in the yaml rather than hard-coding it, but I am not sure how best to approach this. See the edits on this branch as an example of what solved my issue: [uppmax branch](https://github.com/harvardinformatics/snpArcher/commit/24b3831a11bc3ed6968d4e6f18a068e9e2618351)
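As a rough illustration only (not snpArcher's actual profile; the key names, resource names, and values here are guesses), the profile's `config.yaml` could expose the full sbatch command so that clusters which reject `--cpus-per-task` simply leave it out:

```yaml
# Hypothetical profile config.yaml sketch: the submission command and its
# options live in the profile rather than being hard-coded in slurm_utils.py.
cluster: "sbatch --parsable --partition={resources.partition} --time={resources.runtime} --mem={resources.mem_mb}"
default-resources:
  - partition=core   # placeholder partition name
  - runtime=720
  - mem_mb=4000
jobs: 50
# Clusters that do accept --cpus-per-task could append
# "--cpus-per-task={threads}" to the cluster string above; clusters like
# UPPMAX that reject it would just omit it.
```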
@tsackton I have been reviewing this issue (#148), and it seems the code I commented out may no longer be needed. Can we safely remove cpus-per-task from slurm_utils.py? With the `--slurm` flag, is it safe to just set this in the profile itself rather than as a default option here? That would certainly make this more flexible for other Slurm systems.
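For reference, a minimal sketch of what a profile-only setup might look like with the built-in `--slurm` support (Snakemake >= 7.19); the account and partition names are placeholders and this has not been tested against snpArcher:

```yaml
# Hypothetical workflow profile (e.g. profiles/slurm/config.yaml) for the
# built-in Slurm executor; submission settings live here, not in slurm_utils.py.
slurm: true
jobs: 100
default-resources:
  - slurm_account=my_account      # placeholder
  - slurm_partition=my_partition  # placeholder
  - mem_mb=4000
  - runtime=120
# The built-in executor derives --cpus-per-task from each rule's `threads:`
# directive, so nothing about it needs to be hard-coded.
```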
Probably we need to rethink the entire profile setup here. We shouldn't actually be distributing a cluster-specific profile with snpArcher; we should instead point people to https://github.com/Snakemake-Profiles/doc or similar. I am trying to read more about this and figure out the best options - will hopefully test a bit locally this week.