As an initial step, FASTQ files are filtered by read length, and if the resulting file size is too small, the FASTQ is not processed any further. It would be good to have this reported somewhere.
E.g., I just tested a FAST5 run (V3 primers) that produced 24 barcoded FASTQ files, and it seems 9 of them were sorted out and not processed any further.
It would be good to have a TSV with, e.g., all barcode IDs and a column stating which ones were sorted out due to a low number of reads after filtering. A minimal sketch of what such a report could look like is shown below.
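Just to illustrate the idea, here is a hedged sketch of such a report generator. The length window, the minimum read count, and the `fastq_pass/barcode*.fastq.gz` layout are all assumptions for illustration, not the pipeline's actual thresholds or paths:

```python
# Hypothetical sketch: report which barcoded FASTQ files would be skipped
# after length filtering. Thresholds and file layout are assumptions.
import csv
import gzip
from pathlib import Path

MIN_LEN, MAX_LEN = 400, 700   # assumed amplicon length window
MIN_READS = 100               # assumed cutoff below which a barcode is skipped

def count_passing_reads(fastq_gz: Path) -> int:
    """Count reads whose sequence length falls inside the window."""
    n = 0
    with gzip.open(fastq_gz, "rt") as fh:
        for i, line in enumerate(fh):
            # FASTQ records are 4 lines; line index 1 of each record is the sequence
            if i % 4 == 1 and MIN_LEN <= len(line.strip()) <= MAX_LEN:
                n += 1
    return n

with open("barcode_filter_report.tsv", "w", newline="") as out:
    writer = csv.writer(out, delimiter="\t")
    writer.writerow(["barcode_id", "reads_after_filter", "status"])
    for fq in sorted(Path("fastq_pass").glob("barcode*.fastq.gz")):
        n = count_passing_reads(fq)
        status = "processed" if n >= MIN_READS else "removed_low_reads"
        writer.writerow([fq.stem.split(".")[0], n, status])
```

The resulting TSV would then list every barcode with its post-filter read count and whether it was carried forward, which is exactly the kind of at-a-glance overview requested above.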
The size threshold for "removal" is so small that it should usually only remove barcodes that were falsely assigned by "guppy demultiplex" with the "one barcode only" option. So I'm not sure whether reporting this would confuse more than it helps?
Okay, I see. Let me do some checks; maybe you are right and we don't need this. It might be just as confusing when people use 10 barcodes but only get 9 consensuses out (e.g. because one barcode did not work well and produced only a handful of reads).
But in such a case, one can also go back to pycoQC and check the assigned barcode distribution, ...