ATAC pipeline run on SLURM reports error #31
Comments
Did you use |
Yes, I tried, and it showed an error report, so I wonder whether it is because it is the JSON format for "local".
I never changed |
"local" here means running pipelines with downloaded (so locally existing) files. This looks like a SLURM problem. Does your SLURM sbatch take Please post an example sbatch command or shell script template you use for submitting your own job to SLURM. Also, can you run the following sbatch command and see what happens. Post any errors here. I will take a look.
BTW, I got your email, but I cannot personally Skype with you. |
I see, thanks! I just put this command line in a shell script as in doc step 8, and I ran it in /mypath/atac-seq-pipeline/. As for "run the following sbatch command and see what happens", here is what happens:
slurm.json: |
@gmgitx: Your error says |
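For context on partition/account settings: the pipeline's workflow_opts/slurm.json takes the partition (and optionally the account) as default runtime attributes, roughly in the following form (field names follow the pipeline's SLURM tutorial; the values here are placeholders):

```json
{
    "default_runtime_attributes" : {
        "slurm_partition" : "your_partition",
        "slurm_account" : "your_account"
    }
}
```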
I got the right account/partition from our IT after you mentioned it, and there is no error like that anymore. But it still doesn't work for the ENCSR356KRQ data provided in the documentation. Part of the warnings and errors:
|
I didn't mean a pipeline command line. I just wanted to see an example sbatch command line that you usually use. Is there a wiki page for your cluster? What is your sbatch command line to submit the following HelloWorld shell script?
|
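The HelloWorld script itself did not survive in this transcript; a minimal SLURM smoke-test script in that spirit might look like this (the job name, partition placeholder, and resource values are all assumptions):

```shell
#!/bin/bash
# hello_world.sh -- a minimal SLURM smoke test.
# Job name, partition placeholder, and resource values are assumptions.
#SBATCH --job-name=hello_world
#SBATCH --partition=your_partition
#SBATCH --time=00:05:00
#SBATCH --mem=1G
echo "Hello, World from $(hostname)"
```

Submitting it with `sbatch hello_world.sh` should produce a `slurm-<jobid>.out` containing the greeting; if even this fails, the problem is in the SLURM account/partition setup rather than in the pipeline.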
Sorry for my misunderstanding. Here is the guide for our cluster. Here is the sbatch command: |
Are you sure that |
Yes, in the slurm-[number].out.
I removed the account; it seems to give the same warning and error report. |
Please post a full log and also your |
###### command: atac.sh
###### slurm.json
###### ENCSR356KRQ_subsampled.json
###### result |
I guess that you (or your partition) have a limited resource quota on your cluster?
Do you have the privilege to use enough resources (memory >= 16GB, cpu >= 4, walltime >= 48 hr per task) on your partition? Please run the following in the working directory where you ran the pipeline. This will make a tarball of all log files; please upload it here. I need it for debugging:
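The exact command requested here was lost from the transcript; a sketch that collects common Cromwell log files into one tarball (the file-name patterns are assumptions, adjust to your setup) could be:

```shell
# Run this in the directory where you launched the pipeline.
# File patterns below are an assumption; adjust to match your log names.
find . -type f \( -name '*.log' -o -name 'stdout*' -o -name 'stderr*' \) -print0 \
  | tar --null -czf debug_logs.tar.gz -T -
```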
|
Thanks!
|
@gmgitx Your tarball does not have any file in it. |
Thanks! |
Please send that file |
I sent |
I got your log, but it includes outputs from too many pipeline runs. For the latest pipeline run, I found that the first task of the pipeline worked fine, so you can keep using your partition. What is the resource quota on your cluster? How much of its resources can your partition use? For example: maximum number of concurrent jobs, max CPUs per job, max memory per job, max walltime per job. This information will be helpful for debugging. Can you clean up ( |
Many thanks! According to what I know from IT, memory>=16GB and cpu>=4 are allowed, but walltime must be under 36 hours in total. My partition:
Default walltime for bowtie2 is 48 hours. I think this caused the problem. Please add the following to your input JSON and try again.
Also, reduce the number of concurrent jobs to 1 or 2 in |
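The JSON snippet itself was lost from this transcript; based on the resource section of the pipeline docs (docs/input.md#resource), a per-task override for bowtie2 would look roughly like the following (the parameter names should be verified against your pipeline version; the 36-hour value matches the cluster limit mentioned above):

```json
{
    "atac.bowtie2.cpu" : 4,
    "atac.bowtie2.mem_mb" : 16000,
    "atac.bowtie2.time_hr" : 36
}
```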
Thanks for your kind help.
###run_atac1.sh
Sorry, it still reports an error. However, thank you again. |
Is your sbatch_report trimmed? |
I combined the two files (example_sbatch.err and example_sbatch.out) together; no other processing. |
Please take a look at the … A log file in your tarball says that some of the sub-tasks ( |
Yes, you are right; here is an example_sbatch.out. But I think that if the sub-tasks had finished successfully, the folder "cromwell-executions" should have been created, and here it was not. ###example_sbatch.out
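One quick way to test that reasoning is to check whether Cromwell created any execution directories at all (a generic sketch, not a pipeline-provided command):

```shell
# Check whether Cromwell created any per-task execution directories.
# (Generic check, not a pipeline-provided command.)
ls -d cromwell-executions 2>/dev/null || echo "no cromwell-executions directory"
find cromwell-executions -maxdepth 3 -type d 2>/dev/null | head
```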
|
Can you upload your modified input JSON here? |
Sure, thanks
####...atac-seq-pipeline/workflow_opts/slurm.json
|
I think this is a resource quota/limit problem on your cluster. Please play with the resource settings in your input JSON. You may need to revert back to the last partially successful configuration (for some tasks) somehow and change the resource settings. https://github.com/ENCODE-DCC/atac-seq-pipeline/blob/master/docs/input.md#resource Resource settings for one of your successful tasks ( |
Thanks! Although I only have the trimming results so far, I'll continue to adjust the resource settings.
Could I confirm whether it is right or wrong that I only get trimming results for the first two files of each rep of ENCSR356KRQ? |
Yes, these fastqs (two for each replicate) look fine. |
Closing this issue due to long inactivity. |
Hi, thanks for your wonderful work.
I run in
/mypath/atac-seq-pipeline/
and
source activate encode-atac-seq-pipeline
java -jar -Dconfig.file=backends/backend.conf -Dbackend.default=slurm /my_path/local/bin/cromwell-34.jar run atac.wdl -i /my_path1/input.json -o /my_path2/atac-seq-pipeline/workflow_opts/slurm.json
But only one folder named "cromwell-workflow-logs" was left, and there is nothing in it:
Jenkinsfile LICENSE README.md atac.wdl backends conda **cromwell-workflow-logs** docker_image docs examples genome src test workflow_opts
What's more, when it was running, it showed the following on the screen:
I followed https://github.com/ENCODE-DCC/atac-seq-pipeline/blob/master/docs/tutorial_slurm.md, since I should run on my school's SLURM (not my local PC, and not Stanford University's SLURM).
Would you have any advice about my two errors?