Failed to find index Success(WomInteger(0)) #15

Closed
chengwsh opened this issue Jul 10, 2018 · 14 comments

@chengwsh

[2018-07-08 17:12:05,62] [info] Running with database db.url = jdbc:hsqldb:mem:b928854f-1b26-4f74-8a45-e72f73b30968;shutdown=false;hsqldb.tx=mvcc
[2018-07-08 17:12:11,01] [info] Running migration RenameWorkflowOptionsInMetadata with a read batch size of 100000 and a write batch size of 100000
[2018-07-08 17:12:11,02] [info] [RenameWorkflowOptionsInMetadata] 100%
[2018-07-08 17:12:11,10] [info] Running with database db.url = jdbc:hsqldb:mem:b55a6427-b864-4e72-9858-1211b6533178;shutdown=false;hsqldb.tx=mvcc
[2018-07-08 17:12:11,42] [warn] This actor factory is deprecated. Please use cromwell.backend.google.pipelines.v1alpha2.PipelinesApiLifecycleActorFactory for PAPI v1 or cromwell.backend.google.pipelines.v2alpha1.PipelinesApiLifecycleActorFactory for PAPI v2
[2018-07-08 17:12:11,42] [warn] Couldn't find a suitable DSN, defaulting to a Noop one.
[2018-07-08 17:12:11,43] [info] Using noop to send events.
[2018-07-08 17:12:11,69] [info] Slf4jLogger started
[2018-07-08 17:12:11,87] [info] Workflow heartbeat configuration:
{
"cromwellId" : "cromid-df3d320",
"heartbeatInterval" : "2 minutes",
"ttl" : "10 minutes",
"writeBatchSize" : 10000,
"writeThreshold" : 10000
}
[2018-07-08 17:12:11,91] [info] Metadata summary refreshing every 2 seconds.
[2018-07-08 17:12:11,95] [info] CallCacheWriteActor configured to flush with batch size 100 and process rate 3 seconds.
[2018-07-08 17:12:11,95] [info] KvWriteActor configured to flush with batch size 200 and process rate 5 seconds.
[2018-07-08 17:12:11,95] [info] WriteMetadataActor configured to flush with batch size 200 and process rate 5 seconds.
[2018-07-08 17:12:12,71] [info] JobExecutionTokenDispenser - Distribution rate: 50 per 1 seconds.
[2018-07-08 17:12:12,73] [info] JES batch polling interval is 33333 milliseconds
[2018-07-08 17:12:12,73] [info] JES batch polling interval is 33333 milliseconds
[2018-07-08 17:12:12,73] [info] JES batch polling interval is 33333 milliseconds
[2018-07-08 17:12:12,73] [info] PAPIQueryManager Running with 3 workers
[2018-07-08 17:12:12,74] [info] SingleWorkflowRunnerActor: Submitting workflow
[2018-07-08 17:12:12,78] [info] Unspecified type (Unspecified version) workflow 068f45d8-f29a-4335-8c0e-a711391af811 submitted
[2018-07-08 17:12:12,83] [info] SingleWorkflowRunnerActor: Workflow submitted 068f45d8-f29a-4335-8c0e-a711391af811
[2018-07-08 17:12:12,83] [info] 1 new workflows fetched
[2018-07-08 17:12:12,83] [info] WorkflowManagerActor Starting workflow 068f45d8-f29a-4335-8c0e-a711391af811
[2018-07-08 17:12:12,84] [info] WorkflowManagerActor Successfully started WorkflowActor-068f45d8-f29a-4335-8c0e-a711391af811
[2018-07-08 17:12:12,84] [info] Retrieved 1 workflows from the WorkflowStoreActor
[2018-07-08 17:12:12,84] [warn] SingleWorkflowRunnerActor: received unexpected message: Done in state RunningSwraData
[2018-07-08 17:12:12,85] [info] WorkflowStoreHeartbeatWriteActor configured to flush with batch size 10000 and process rate 2 minutes.
[2018-07-08 17:12:12,89] [info] MaterializeWorkflowDescriptorActor [068f45d8]: Parsing workflow as WDL draft-2
[2018-07-08 17:13:26,22] [info] MaterializeWorkflowDescriptorActor [068f45d8]: Call-to-Backend assignments: chip.macs2 -> Local, chip.bam2ta_ctl -> Local, chip.spp_ppr2 -> Local, chip.bwa -> Local, chip.qc_report -> Local, chip.bwa_ctl -> Local, chip.filter -> Local, chip.overlap -> Local, chip.pool_ta -> Local, chip.idr_ppr -> Local, chip.filter_ctl -> Local, chip.macs2_pr2 -> Local, chip.spp_pr1 -> Local, chip.read_genome_tsv -> Local, chip.merge_fastq_ctl -> Local, chip.macs2_ppr1 -> Local, chip.pool_ta_pr2 -> Local, chip.trim_fastq -> Local, chip.overlap_ppr -> Local, chip.bam2ta -> Local, chip.pool_ta_ctl -> Local, chip.reproducibility_idr -> Local, chip.spp_pooled -> Local, chip.fraglen_mean -> Local, chip.spr -> Local, chip.bam2ta_no_filt -> Local, chip.macs2_pr1 -> Local, chip.fingerprint -> Local, chip.xcor -> Local, chip.spp_pr2 -> Local, chip.spp -> Local, chip.merge_fastq -> Local, chip.bam2ta_no_filt_R1 -> Local, chip.reproducibility_overlap -> Local, chip.overlap_pr -> Local, chip.idr -> Local, chip.macs2_ppr2 -> Local, chip.bwa_R1 -> Local, chip.spp_ppr1 -> Local, chip.macs2_pooled -> Local, chip.idr_pr -> Local, chip.choose_ctl -> Local, chip.pool_ta_pr1 -> Local
[2018-07-08 17:13:26,34] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,35] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,35] [warn] Local [068f45d8]: Key/s [preemptible, disks, cpu, time, memory] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,35] [warn] Local [068f45d8]: Key/s [preemptible, disks, cpu, time, memory] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,35] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,35] [warn] Local [068f45d8]: Key/s [preemptible, disks, cpu, time, memory] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,35] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,35] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,35] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,35] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,35] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,35] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [preemptible, disks, cpu, time, memory] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [preemptible, disks, cpu, time, memory] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,36] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [preemptible, disks, cpu, time, memory] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [preemptible, disks, cpu, time, memory] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [preemptible, disks, cpu, time, memory] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [preemptible, disks, cpu, time, memory] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:26,37] [warn] Local [068f45d8]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2018-07-08 17:13:28,66] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Starting chip.read_genome_tsv
[2018-07-08 17:13:28,67] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Condition met: '!align_only && !true_rep_only'. Running conditional section
[2018-07-08 17:13:28,68] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Condition met: '!true_rep_only'. Running conditional section
[2018-07-08 17:13:28,87] [warn] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.read_genome_tsv:NA:1]: Unrecognized runtime attribute keys: disks, cpu, time, memory
[2018-07-08 17:13:29,34] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.read_genome_tsv:NA:1]: echo "Reading genome_tsv /home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-read_genome_tsv/inputs/1532045310/mm10.tsv ..."
[2018-07-08 17:13:29,46] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.read_genome_tsv:NA:1]: executing: /bin/bash /home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-read_genome_tsv/execution/script
[2018-07-08 17:13:29,70] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Condition NOT met: 'peak_caller_ == "macs2"'. Bypassing conditional section
[2018-07-08 17:13:29,70] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Condition met: 'enable_idr'. Running conditional section
[2018-07-08 17:13:29,70] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Condition met: 'enable_idr'. Running conditional section
[2018-07-08 17:13:29,70] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Condition met: 'peak_caller_ == "spp"'. Running conditional section
[2018-07-08 17:13:29,70] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Condition met: 'peak_caller_ == "spp"'. Running conditional section
[2018-07-08 17:13:29,71] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Condition met: '!align_only && !true_rep_only && enable_idr'. Running conditional section
[2018-07-08 17:13:31,99] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.read_genome_tsv:NA:1]: job id: 18903
[2018-07-08 17:13:32,00] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.read_genome_tsv:NA:1]: Status change from - to Done
[2018-07-08 17:13:32,80] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Starting chip.merge_fastq_ctl, chip.merge_fastq
[2018-07-08 17:13:33,74] [warn] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.merge_fastq_ctl:0:1]: Unrecognized runtime attribute keys: disks, cpu, time, memory
[2018-07-08 17:13:33,74] [warn] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.merge_fastq:0:1]: Unrecognized runtime attribute keys: disks, cpu, time, memory
[2018-07-08 17:13:33,85] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.merge_fastq_ctl:0:1]: python $(which encode_merge_fastq.py)
/home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-merge_fastq_ctl/shard-0/execution/write_tsv_609523603b8830d2bf2d45a4a71d8dd7.tmp

--nth 2
[2018-07-08 17:13:33,85] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.merge_fastq:0:1]: python $(which encode_merge_fastq.py)
/home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-merge_fastq/shard-0/execution/write_tsv_577d047f7bb8a8ea8c6ee89ee3d97c7b.tmp

--nth 2
[2018-07-08 17:13:33,85] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.merge_fastq:0:1]: executing: /bin/bash /home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-merge_fastq/shard-0/execution/script
[2018-07-08 17:13:33,85] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.merge_fastq_ctl:0:1]: executing: /bin/bash /home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-merge_fastq_ctl/shard-0/execution/script
[2018-07-08 17:13:36,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.merge_fastq:0:1]: job id: 18944
[2018-07-08 17:13:36,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.merge_fastq_ctl:0:1]: job id: 18946
[2018-07-08 17:13:36,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.merge_fastq:0:1]: Status change from - to Done
[2018-07-08 17:13:36,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.merge_fastq_ctl:0:1]: Status change from - to Done
[2018-07-08 17:13:38,92] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Starting chip.bwa, chip.bwa_ctl
[2018-07-08 17:13:39,74] [warn] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa:0:1]: Unrecognized runtime attribute keys: preemptible, disks, cpu, time, memory
[2018-07-08 17:13:39,74] [warn] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa_ctl:0:1]: Unrecognized runtime attribute keys: preemptible, disks, cpu, time, memory
[2018-07-08 17:13:39,76] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa:0:1]: python $(which encode_bwa.py)
/home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-bwa/shard-0/inputs/1424220334/mm10_no_alt_analysis_set_ENCODE.fasta.tar
/home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-bwa/shard-0/inputs/-1993312639/merge_fastqs_R1_RYBP.merged.fastq.gz

--nth 4
[2018-07-08 17:13:39,76] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa_ctl:0:1]: python $(which encode_bwa.py)
/home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-bwa_ctl/shard-0/inputs/1424220334/mm10_no_alt_analysis_set_ENCODE.fasta.tar
/home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-bwa_ctl/shard-0/inputs/1654728541/merge_fastqs_R1_IgG.merged.fastq.gz

--nth 4
[2018-07-08 17:13:39,77] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa_ctl:0:1]: executing: /bin/bash /home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-bwa_ctl/shard-0/execution/script
[2018-07-08 17:13:39,77] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa:0:1]: executing: /bin/bash /home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-bwa/shard-0/execution/script
[2018-07-08 17:13:41,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa_ctl:0:1]: job id: 19040
[2018-07-08 17:13:41,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa:0:1]: job id: 19041
[2018-07-08 17:13:41,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa_ctl:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-07-08 17:13:41,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-07-08 18:03:30,01] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa_ctl:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-07-08 18:03:35,69] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Starting chip.filter_ctl
[2018-07-08 18:03:35,73] [warn] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter_ctl:0:1]: Unrecognized runtime attribute keys: disks, cpu, time, memory
[2018-07-08 18:03:35,77] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter_ctl:0:1]: python $(which encode_filter.py)
/home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-filter_ctl/shard-0/inputs/1613485654/IgG.merged.bam


--dup-marker picard
--mapq-thresh 30
\

ugly part to deal with optional outputs with Google JES backend

touch null
[2018-07-08 18:03:35,88] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter_ctl:0:1]: executing: /bin/bash /home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-filter_ctl/shard-0/execution/script
[2018-07-08 18:03:36,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter_ctl:0:1]: job id: 1170
[2018-07-08 18:03:36,99] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter_ctl:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-07-08 18:07:57,26] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bwa:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-07-08 18:08:02,93] [info] WorkflowExecutionActor-068f45d8-f29a-4335-8c0e-a711391af811 [068f45d8]: Starting chip.bam2ta_no_filt, chip.filter
[2018-07-08 18:08:03,73] [warn] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bam2ta_no_filt:0:1]: Unrecognized runtime attribute keys: disks, cpu, time, memory
[2018-07-08 18:08:03,73] [warn] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter:0:1]: Unrecognized runtime attribute keys: disks, cpu, time, memory
[2018-07-08 18:08:03,74] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bam2ta_no_filt:0:1]: python $(which encode_bam2ta.py)
/home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-bam2ta_no_filt/shard-0/inputs/600163450/RYBP.merged.bam

--disable-tn5-shift
--regex-grep-v-ta 'chrM'
--subsample 0
[2018-07-08 18:08:03,74] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bam2ta_no_filt:0:1]: executing: /bin/bash /home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-bam2ta_no_filt/shard-0/execution/script
[2018-07-08 18:08:03,76] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter:0:1]: python $(which encode_filter.py)
/home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-filter/shard-0/inputs/600163450/RYBP.merged.bam


--dup-marker picard
--mapq-thresh 30
\

ugly part to deal with optional outputs with Google JES backend

touch null
[2018-07-08 18:08:03,77] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter:0:1]: executing: /bin/bash /home/chengwsh/Encode/chip-seq-pipeline2/cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-filter/shard-0/execution/script
[2018-07-08 18:08:06,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bam2ta_no_filt:0:1]: job id: 2502
[2018-07-08 18:08:06,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter:0:1]: job id: 2516
[2018-07-08 18:08:06,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bam2ta_no_filt:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-07-08 18:08:06,97] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter:0:1]: Status change from - to WaitingForReturnCodeFile
[2018-07-08 18:10:26,92] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.bam2ta_no_filt:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-07-08 18:11:32,77] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter_ctl:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-07-08 18:16:25,24] [info] BackgroundConfigAsyncJobExecutionActor [068f45d8chip.filter:0:1]: Status change from WaitingForReturnCodeFile to Done
[2018-07-08 18:16:26,14] [error] WorkflowManagerActor Workflow 068f45d8-f29a-4335-8c0e-a711391af811 failed (during ExecutingWorkflowState): Failed to evaluate job outputs:
Bad output 'filter_ctl.nodup_bai': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter_ctl.flagstat_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter_ctl.dup_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter_ctl.pbc_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
cromwell.backend.standard.StandardAsyncExecutionActor$$anon$2: Failed to evaluate job outputs:
Bad output 'filter_ctl.nodup_bai': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter_ctl.flagstat_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter_ctl.dup_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter_ctl.pbc_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
at cromwell.backend.standard.StandardAsyncExecutionActor.$anonfun$handleExecutionSuccess$1(StandardAsyncExecutionActor.scala:786)
at scala.util.Success.$anonfun$map$1(Try.scala:251)
at scala.util.Success.map(Try.scala:209)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:288)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:43)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Failed to evaluate job outputs:
Bad output 'filter.nodup_bai': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter.flagstat_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter.dup_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter.pbc_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
cromwell.backend.standard.StandardAsyncExecutionActor$$anon$2: Failed to evaluate job outputs:
Bad output 'filter.nodup_bai': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter.flagstat_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter.dup_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
Bad output 'filter.pbc_qc': Failed to find index Success(WomInteger(0)) on array:

Success([])

0
at cromwell.backend.standard.StandardAsyncExecutionActor.$anonfun$handleExecutionSuccess$1(StandardAsyncExecutionActor.scala:786)
at scala.util.Success.$anonfun$map$1(Try.scala:251)
at scala.util.Success.map(Try.scala:209)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:288)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:43)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

[2018-07-08 18:16:26,14] [info] WorkflowManagerActor WorkflowActor-068f45d8-f29a-4335-8c0e-a711391af811 is in a terminal state: WorkflowFailedState
[2018-07-08 18:17:28,68] [info] SingleWorkflowRunnerActor workflow finished with status 'Failed'.
[2018-07-08 18:17:31,99] [info] Workflow polling stopped
[2018-07-08 18:17:32,12] [info] Shutting down WorkflowStoreActor - Timeout = 5 seconds
[2018-07-08 18:17:32,12] [info] Shutting down WorkflowLogCopyRouter - Timeout = 5 seconds
[2018-07-08 18:17:32,15] [info] Shutting down JobExecutionTokenDispenser - Timeout = 5 seconds
[2018-07-08 18:17:32,17] [info] Aborting all running workflows.
[2018-07-08 18:17:32,19] [info] JobExecutionTokenDispenser stopped
[2018-07-08 18:17:32,19] [info] WorkflowStoreActor stopped
[2018-07-08 18:17:32,24] [info] WorkflowLogCopyRouter stopped
[2018-07-08 18:17:32,24] [info] Shutting down WorkflowManagerActor - Timeout = 3600 seconds
[2018-07-08 18:17:32,24] [info] WorkflowManagerActor stopped
[2018-07-08 18:17:32,24] [info] WorkflowManagerActor All workflows finished
[2018-07-08 18:17:32,24] [info] Connection pools shut down
[2018-07-08 18:17:32,24] [info] Shutting down SubWorkflowStoreActor - Timeout = 1800 seconds
[2018-07-08 18:17:32,24] [info] Shutting down JobStoreActor - Timeout = 1800 seconds
[2018-07-08 18:17:32,24] [info] Shutting down CallCacheWriteActor - Timeout = 1800 seconds
[2018-07-08 18:17:32,24] [info] SubWorkflowStoreActor stopped
[2018-07-08 18:17:32,24] [info] Shutting down ServiceRegistryActor - Timeout = 1800 seconds
[2018-07-08 18:17:32,24] [info] Shutting down DockerHashActor - Timeout = 1800 seconds
[2018-07-08 18:17:32,24] [info] Shutting down IoProxy - Timeout = 1800 seconds
[2018-07-08 18:17:32,27] [info] DockerHashActor stopped
[2018-07-08 18:17:32,27] [info] IoProxy stopped
[2018-07-08 18:17:32,30] [info] KvWriteActor Shutting down: 0 queued messages to process
[2018-07-08 18:17:32,30] [info] WriteMetadataActor Shutting down: 0 queued messages to process
[2018-07-08 18:17:32,30] [info] CallCacheWriteActor Shutting down: 0 queued messages to process
[2018-07-08 18:17:32,30] [info] CallCacheWriteActor stopped
[2018-07-08 18:17:32,33] [info] JobStoreActor stopped
[2018-07-08 18:17:32,34] [info] ServiceRegistryActor stopped
[2018-07-08 18:17:32,36] [info] Database closed
[2018-07-08 18:17:32,36] [info] Stream materializer shut down
Workflow 068f45d8-f29a-4335-8c0e-a711391af811 transitioned to state Failed
[2018-07-08 18:17:32,46] [info] Automatic shutdown of the async connection
[2018-07-08 18:17:32,46] [info] Gracefully shutdown sentry threads.
[2018-07-08 18:17:32,46] [info] Shutdown finished.
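For context on the error above: each "Bad output '…': Failed to find index Success(WomInteger(0)) on array: Success([])" line means Cromwell tried to take element 0 of a task output array that evaluated to empty — typically a `glob()` in the task's WDL outputs that matched no files because the underlying tool failed without propagating a nonzero exit code. A hedged way to confirm this, run from the directory where the pipeline was launched (paths taken from the log above; adjust the call name and shard as needed):

```shell
# Sketch: inspect the failed call's execution directory. Cromwell collects
# glob() matches into glob-* subdirectories; if they are missing or empty,
# the task never produced the expected files and its stderr usually says why.
EXEC_DIR=cromwell-executions/chip/068f45d8-f29a-4335-8c0e-a711391af811/call-filter_ctl/shard-0/execution
if [ -d "$EXEC_DIR" ]; then
  cat "$EXEC_DIR/stderr"
  ls "$EXEC_DIR"/glob-*/ 2>/dev/null || echo "no glob output directories found"
else
  echo "run this from the directory where the pipeline was launched"
fi
```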

@leepc12
Contributor

leepc12 commented Jul 10, 2018

Please run the following in the working directory where you ran the pipeline and post the resulting debug.tar here. This will collect the pipeline's scripts, logs, and QC files into a single archive:

find . -type f \
  \( -name 'stdout' -or \
     -name 'stderr' -or \
     -name 'script' -or \
     -name '*.qc' -or \
     -name '*.txt' -or \
     -name '*.log' \) \
  | xargs tar -cvf debug.tar
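A quick sanity check before uploading (a sketch, assuming the command above succeeded in the current directory):

```shell
# List the first few files captured in the archive; an empty listing means
# the find expression matched nothing and the tar is not useful to upload.
if [ -f debug.tar ]; then
  tar -tf debug.tar | head
else
  echo "debug.tar not found"
fi
```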

@chengwsh
Author

debug (2).zip
Thank you.

@leepc12
Copy link
Contributor

leepc12 commented Jul 11, 2018

Please check if you have picard.jar in your $PATH after activating the Conda environment.

$ source activate encode-chip-seq-pipeline
$ which picard.jar
$ which samtools
$ echo $PATH
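
As a side note, the same check can be wrapped in a small helper that reports all missing tools at once (a sketch only; `check_tools` is a hypothetical name, and the tool list comes from the commands above):

```shell
# Hypothetical helper: report where each required tool resolves in PATH.
# Run after `source activate encode-chip-seq-pipeline`.
check_tools() {
  missing=0
  for tool in "$@"; do
    if path=$(command -v "$tool"); then
      echo "$tool -> $path"
    else
      echo "MISSING: $tool"
      missing=$((missing + 1))
    fi
  done
  return "$missing"
}
```

For example, `check_tools picard.jar samtools` prints one line per tool and returns the number of missing tools as its exit status.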

@kirstyjamieson

Hi Jin, I am also getting this error with the test data.

-bash-4.1$ export PATH="/netapp/home/kirsty.jamieson/miniconda3/bin:$PATH"
-bash-4.1$ source activate encode-chip-seq-pipeline
(encode-chip-seq-pipeline) -bash-4.1$ which picard.jar
~/miniconda3/envs/encode-chip-seq-pipeline/bin/picard.jar
(encode-chip-seq-pipeline) -bash-4.1$ which samtools
~/miniconda3/envs/encode-chip-seq-pipeline/bin/samtools
(encode-chip-seq-pipeline) -bash-4.1$ echo $PATH
/netapp/home/kirsty.jamieson/miniconda3/envs/encode-chip-seq-pipeline/bin:/netapp/home/kirsty.jamieson/miniconda3/bin:/usr/local/sge/bin/linux-x64:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/netopt/bin

examples/local/ENCSR936XTK_subsampled.json

{
    "chip.pipeline_type" : "tf",
    "chip.genome_tsv" : "test_genome_database/hg38_local.tsv",
    "chip.fastqs" : [
        [["test_sample/ENCSR936XTK/fastq_subsampled/rep1-R1.subsampled.67.fastq.gz",
          "test_sample/ENCSR936XTK/fastq_subsampled/rep1-R2.subsampled.67.fastq.gz"]],
        [["test_sample/ENCSR936XTK/fastq_subsampled/rep2-R1.subsampled.67.fastq.gz",
          "test_sample/ENCSR936XTK/fastq_subsampled/rep2-R2.subsampled.67.fastq.gz"]]
    ],
    "chip.ctl_fastqs" : [
        [["test_sample/ENCSR936XTK/fastq_subsampled/ctl1-R1.subsampled.80.fastq.gz",
          "test_sample/ENCSR936XTK/fastq_subsampled/ctl1-R2.subsampled.80.fastq.gz"]],
        [["test_sample/ENCSR936XTK/fastq_subsampled/ctl2-R1.subsampled.80.fastq.gz",
          "test_sample/ENCSR936XTK/fastq_subsampled/ctl2-R2.subsampled.80.fastq.gz"]]
    ],

    "chip.paired_end" : true,

    "chip.always_use_pooled_ctl" : true,
    "chip.title" : "ENCSR936XTK (subsampled 1/67)",
    "chip.description" : "ZNF143 ChIP-seq on human GM12878"
}

test.sh.txt
test.sh.o522226.txt
debug_[#15_KJ_12_5_18].tar.gz

@leepc12
Contributor

leepc12 commented Dec 5, 2018

Error:

PID=52604: [Tue Dec 04 15:05:13 PST 2018] picard.sam.markduplicates.MarkDuplicates INPUT=[rep1-R1.subsampled.67.merged.filt.bam] OUTPUT=rep1-R1.subsampled.67.merged.dupmark.bam METRICS_FILE=rep1-R1.subsampled.67.merged.dup.qc REMOVE_DUPLICATES=false ASSUME_SORTED=true VALIDATION_STRINGENCY=LENIENT    MAX_SEQUENCES_FOR_DISK_READ_ENDS_MAP=50000 MAX_FILE_HANDLES_FOR_READ_ENDS_MAP=8000 SORTING_COLLECTION_SIZE_RATIO=0.25 TAG_DUPLICATE_SET_MEMBERS=false REMOVE_SEQUENCING_DUPLICATES=false TAGGING_POLICY=DontTag DUPLICATE_SCORING_STRATEGY=SUM_OF_BASE_QUALITIES PROGRAM_RECORD_ID=MarkDuplicates PROGRAM_GROUP_NAME=MarkDuplicates READ_NAME_REGEX=<optimized capture of last three ':' separated fields as numeric values> OPTICAL_DUPLICATE_PIXEL_DISTANCE=100 VERBOSITY=INFO QUIET=false COMPRESSION_LEVEL=5 MAX_RECORDS_IN_RAM=500000 CREATE_INDEX=false CREATE_MD5_FILE=false GA4GH_CLIENT_SECRETS=client_secrets.json USE_JDK_DEFLATER=false USE_JDK_INFLATER=false
PID=52604: [Tue Dec 04 15:05:13 PST 2018] Executing as kirsty.jamieson@id69 on Linux 2.6.32-696.30.1.el6.x86_64 amd64; OpenJDK 64-Bit Server VM 11.0.1+13-LTS; Deflater: Intel; Inflater: Intel; Picard version: 2.10.6-SNAPSHOT
PID=52604: INFO 2018-12-04 15:05:14     MarkDuplicates  Start of doWork freeMemory: 22135416; totalMemory: 28311552; maxMemory: 4294967296
PID=52604: INFO 2018-12-04 15:05:14     MarkDuplicates  Reading input file and constructing read end information.
PID=52604: INFO 2018-12-04 15:05:14     MarkDuplicates  Will retain up to 15561475 data points before spilling to disk.
PID=52604: INFO 2018-12-04 15:05:22     MarkDuplicates  Read 909380 records. 0 pairs never matched.
PID=52604: INFO 2018-12-04 15:05:22     MarkDuplicates  After buildSortedReadEndLists freeMemory: 580621504; totalMemory: 811597824; maxMemory: 4294967296
PID=52604: INFO 2018-12-04 15:05:22     MarkDuplicates  Will retain up to 134217728 duplicate indices before spilling to disk.
PID=52604: INFO 2018-12-04 15:05:22     MarkDuplicates  Traversing read pair information and detecting duplicates.
PID=52604: INFO 2018-12-04 15:05:22     MarkDuplicates  Traversing fragment information and detecting duplicates.
PID=52604: INFO 2018-12-04 15:05:22     MarkDuplicates  Sorting list of duplicate records.
PID=52604: INFO 2018-12-04 15:05:22     MarkDuplicates  After generateDuplicateIndexes freeMemory: 804662104; totalMemory: 1886388224; maxMemory: 4294967296
PID=52604: INFO 2018-12-04 15:05:22     MarkDuplicates  Marking 1850 records as duplicates.
PID=52604: INFO 2018-12-04 15:05:22     MarkDuplicates  Found 3 optical duplicate clusters.
PID=52604: INFO 2018-12-04 15:05:22     MarkDuplicates  Reads are assumed to be ordered by: coordinate
PID=52604: #
PID=52604: # A fatal error has been detected by the Java Runtime Environment:
PID=52604: #
PID=52604: #  SIGSEGV (0xb) at pc=0x00002b3fc6568344, pid=52604, tid=52666
PID=52604: #
PID=52604: # JRE version: OpenJDK Runtime Environment (11.0.1+13) (build 11.0.1+13-LTS)
PID=52604: # Java VM: OpenJDK 64-Bit Server VM (11.0.1+13-LTS, mixed mode, tiered, compressed oops, g1 gc, linux-amd64)
PID=52604: # Problematic frame:
PID=52604: # V  [libjvm.so+0x7f8344]  G1ParScanThreadState::copy_to_survivor_space(InCSetState, oopDesc*, markOopDesc*)+0x334
PID=52604: #
PID=52604: # Core dump will be written. Default location: /scrapp/kirsty.jamieson/chip-seq-pipeline2/cromwell-executions/chip/27f31472-68b7-48aa-a66f-0f8e98299750/call-filter/shard-0/execution/core.52604
PID=52604: #
PID=52604: # An error report file with more information is saved as:
PID=52604: # /scrapp/kirsty.jamieson/chip-seq-pipeline2/cromwell-executions/chip/27f31472-68b7-48aa-a66f-0f8e98299750/call-filter/shard-0/execution/hs_err_pid52604.log
PID=52604: #
PID=52604: # If you would like to submit a bug report, please visit:
PID=52604: #   http://www.azulsystems.com/support/
PID=52604: #

This should be fixed in v1.1.3 (f8f2e88).

Please git pull to update the pipeline and try again.

@kirstyjamieson

I updated the pipeline and got the same error, so I reinstalled the pipeline. This time I get a new error:

Uncaught error from thread [cromwell-system-akka.dispatchers.backend-dispatcher-62]: cromwell/backend/async/FailedNonRetryableExecutionHandle, shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[cromwell-system]

debug_[#15_KJ_12_7_18].tar.gz
test.sh.o527395.txt
test.sh.txt

@leepc12
Contributor

leepc12 commented Dec 7, 2018

Did you download the genome database correctly into test_genome_database/? One of the files in it is missing:

gzip: /scrapp/kirsty.jamieson/chip-seq-pipeline2/cromwell-executions/chip/56c459cc-49af-4b86-8425-5a5423b2377d/call-spp/shard-1/inputs/1911427029/hg38.blacklist.bed.gz: No such file or directory
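
This class of problem can be spotted up front by checking every path referenced in the genome TSV before launching the pipeline. A minimal sketch, assuming a two-column key<TAB>path layout for `hg38_local.tsv` (`check_genome_tsv` is a hypothetical helper name):

```shell
# Hypothetical pre-flight check: print every file referenced in the genome
# TSV that does not exist on disk. Assumes a key<TAB>path layout.
check_genome_tsv() {
  missing=0
  while IFS="$(printf '\t')" read -r key path; do
    case "$path" in
      /*|./*|../*|*.gz|*.fa|*.bed|*.tsv)   # values that look like file paths
        if [ ! -e "$path" ]; then
          echo "MISSING: $key -> $path"
          missing=$((missing + 1))
        fi ;;
    esac
  done < "$1"
  return "$missing"
}
```

For example, `check_genome_tsv test_genome_database/hg38_local.tsv` exits zero only when every referenced file is present.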

@kirstyjamieson

I did, but now I am noticing that cromwell-34.jar, test_sample/ and test_genome_database/ have all disappeared after this failed run.

@leepc12
Contributor

leepc12 commented Dec 7, 2018

You mean those files and directories in /scrapp/kirsty.jamieson/chip-seq-pipeline2?

@kirstyjamieson

Yes

@leepc12
Contributor

leepc12 commented Dec 7, 2018

That's weird. The pipeline does not delete any files in the working directory. Do you have a disk limit/quota on your pipeline git directory? $HOME usually has a limited quota on HPC systems. I think you need to contact your system admin, or run the pipeline on a scratch directory.

@kirstyjamieson

/scrapp/ is a scratch directory with no individual user limits and 511 GB available. I had this issue early on with the atac-seq-pipeline as well: ENCODE-DCC/atac-seq-pipeline#55 (comment).

I was also wondering where you found this error, because I didn't see it in cromwell-workflow-logs/:

gzip: /scrapp/kirsty.jamieson/chip-seq-pipeline2/cromwell-executions/chip/56c459cc-49af-4b86-8425-5a5423b2377d/call-spp/shard-1/inputs/1911427029/hg38.blacklist.bed.gz: No such file or directory

@leepc12
Contributor

leepc12 commented Dec 10, 2018

Log files are in cromwell-executions/chip/, not in cromwell-workflow-logs/. I found it in the following log file:

./cromwell-executions/chip/2793a6b9-72d7-4e23-832a-e7619955e11a/call-spp_pr1/shard-1/execution/stderr

Are the entire directories (test_sample/ and test_genome_database/) missing after running the pipeline?
Does your cluster's file system /scrapp/ allow hard-linking of files?
I think you need to figure out the disk problem first. The pipeline cannot work without the data in test_sample/ and test_genome_database/.
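
Both disk questions above can be answered with a quick probe from the working directory (a sketch; `check_hardlinks` is a hypothetical helper name):

```shell
# Hypothetical probe: can this file system create hard links?
check_hardlinks() {
  dir="${1:-.}"
  touch "$dir/.hardlink_src" || return 1
  if ln "$dir/.hardlink_src" "$dir/.hardlink_dst" 2>/dev/null; then
    echo "hard links: OK on $dir"
    status=0
  else
    echo "hard links: NOT supported on $dir"
    status=1
  fi
  rm -f "$dir/.hardlink_src" "$dir/.hardlink_dst"
  return "$status"
}

df -h .   # free space and mount point at a glance
```

Running `check_hardlinks /scrapp/kirsty.jamieson/chip-seq-pipeline2` reports whether `ln` works there, and `df -h` shows how much space is actually left on that mount.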

@leepc12
Contributor

leepc12 commented Sep 9, 2019

Closing this due to long inactivity.

@leepc12 leepc12 closed this as completed Sep 9, 2019