Hi,
I am testing ITS sequencing data that is publicly available in the NCBI Sequence Read Archive (SRA) under BioProject ID PRJNA610042, following the DADA2 ITS Pipeline Workflow (1.8).
Primers: F: TCGTCGGCAGCGTCAGATGTGTATAAGAGACAG, R: GTCTCGTGGGCTCGGAGATGTGTATAAGAGACAG
After the Filter and trim step, some of the output fastq files contain garbled characters. I can still run the subsequent steps and get taxonomic assignments.
Why is this happening, and how should I deal with it? If I want to inspect the filtered files, how can I fix these garbled fastq files?
Thanks.
By default filterAndTrim(..., compress=TRUE), so gzipped fastq files are written regardless of the filenames (and extensions) you assign to the filtered fastqs. The fastq input/output in dada2 (all handled by the ShortRead package) autodetects and uncompresses those files, so everything works as expected. But if you open one in a plain text editor, it will look garbled since it is in a compressed format.
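A quick way to convince yourself of this is to check the gzip magic bytes and read the file back with ShortRead. A minimal sketch, assuming a hypothetical filtered-file path (substitute one of your own):

```r
# Hypothetical path to one of the "garbled" filtered files.
filt <- "filtered/sample1_F_filt.fastq"

# Gzip-compressed files start with the magic bytes 1f 8b.
readBin(filt, what = "raw", n = 2)
# Expected for a gzipped file: [1] 1f 8b

# ShortRead (which dada2 uses for fastq I/O) reads it transparently,
# regardless of the .fastq extension on the filename.
library(ShortRead)
fq <- readFastq(filt)
length(fq)  # number of reads that passed filtering
```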
You can "fix" this by setting compress=FALSE, uncompressing the files yourself before viewing, or by viewing the files using something like zmore that will show you the uncompressed output.