Description of the bug
First, thank you for this great pipeline!
I noticed the following bug: for samples for which no peaks are called (i.e. the .peaks.bed.stringent.bed file is empty), CALCULATE_FRIP fails with "RuntimeError: .peaks.bed.stringent.bed does not seem to be a recognized file type!"
My impression is that the problem lies in bin/frip.py: it would make sense to check whether the .bed file is empty before calling crpb.CountReadsPerBin, and to return 0 as the FRiP value in that case.
Note that I have redacted the real sample/group names in the error log below due to confidentiality.
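The guard I have in mind could look like this minimal sketch (hypothetical: the function name frip_for_sample and its signature are illustrative, not the actual code in bin/frip.py):

```python
import os

def frip_for_sample(bam_file: str, peaks_bed: str) -> float:
    """Return the FRiP score for one sample, or 0.0 when no peaks were called."""
    # An empty (or missing) peaks BED means no peaks were called, so the
    # fraction of reads in peaks is 0 by definition -- return early instead
    # of letting deepTools raise "does not seem to be a recognized file type!".
    if not os.path.exists(peaks_bed) or os.path.getsize(peaks_bed) == 0:
        return 0.0
    # Otherwise defer to the existing deepTools-based computation
    # (crpb.CountReadsPerBin in bin/frip.py), not reproduced here.
    raise NotImplementedError("non-empty case is handled by the real frip.py")
```

The early return also keeps the output file format unchanged, since downstream reporting just sees a FRiP of 0 for that sample.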
Expected behaviour
CALCULATE_FRIP should return a FRiP value of 0 for empty .bed peak files instead of raising an error.
Log files
Have you provided the following extra information/files:
[x] The command used to run the pipeline
[ ] The .nextflow.log file (not attached; sample names are confidential)
System
Hardware: HPC
Executor: local
OS: Debian
Version: 4.9.88-1+deb9u1 (2018-05-07) x86_64
Nextflow Installation
Version: 21.10.0.5640
Container engine
Engine: Docker
Version: Docker version 18.09.0, build 4d60db4
Hi @peneder, this bug is fixed in the upcoming v1.1 release, which should be out in a week or two. If you need to run the pipeline before then, I suggest you run the dev branch using -r dev.
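For reference, running the pipeline from the dev branch would look something like the following (the profile, samplesheet, and output directory are placeholders, not values from this issue):

```
nextflow run nf-core/cutandrun -r dev -profile docker --input samplesheet.csv --outdir results
```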
Steps to reproduce
Run the pipeline on a sample for which no peaks are called (i.e. the .peaks.bed.stringent.bed file is empty). CALCULATE_FRIP then fails:
Error executing process > 'NFCORE_CUTANDRUN:CUTANDRUN:CALCULATE_FRIP (H2BK15ac_Dox_s03_R1)'
Caused by:
Process NFCORE_CUTANDRUN:CUTANDRUN:CALCULATE_FRIP (<group>) terminated with an error exit status (1)
Command executed:
frip.py
--bams ".bam"
--peaks ".bed"
--threads 12
--outpath .
python --version | grep -E -o "([0-9]{1,}.)+[0-9]{1,}" > python.version.txt
Command exit status:
1
Command output:
Calculating .target.markdup.bam using .peaks.bed.stringent.bed
Command error:
WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
Matplotlib created a temporary config/cache directory at /tmp/matplotlib-94b1cu4t because the default path (/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing.
Traceback (most recent call last):
  File "/home/peter/.nextflow/assets/nf-core/cutandrun/bin/frip.py", line 44, in <module>
    reads_at_peaks = cr.run()
  File "/opt/conda/envs/packages/lib/python3.8/site-packages/deeptools/countReadsPerBin.py", line 356, in run
    imap_res = mapReduce.mapReduce([],
  File "/opt/conda/envs/packages/lib/python3.8/site-packages/deeptools/mapReduce.py", line 85, in mapReduce
    bed_interval_tree = GTF(bedFile, defaultGroup=defaultGroup, transcriptID=transcriptID, exonID=exonID, transcript_id_designator=transcript_id_designator, keepExons=keepExons)
  File "/opt/conda/envs/packages/lib/python3.8/site-packages/deeptoolsintervals/parse.py", line 591, in __init__
    ftype = self.inferType(fp, line, labelColumn)
  File "/opt/conda/envs/packages/lib/python3.8/site-packages/deeptoolsintervals/parse.py", line 166, in inferType
    raise RuntimeError('{0} does not seem to be a recognized file type!'.format(self.filename))
RuntimeError: .peaks.bed.stringent.bed does not seem to be a recognized file type!