
Poor performance when reading gzipped FASTQ sequentially #12

Closed
afrubin opened this issue Mar 12, 2020 · 6 comments

afrubin commented Mar 12, 2020

As part of publishing my pure Python FASTQ/FASTA package I ran some benchmarks/comparisons with pyfastx, which you can find in the package documentation: https://fqfa.readthedocs.io/en/latest/benchmark.html

pyfastx was unexpectedly slow when processing a gzipped FASTQ file sequentially, although it did fine when the file was uncompressed. I don't know what the root cause of this might be (maybe it's getting a new block for each read?), but I wanted to bring it to your attention.
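The "new block for each read" guess can be illustrated with a minimal stdlib-only sketch (this is not pyfastx's actual code; it just contrasts the two access patterns on a synthetic gzipped FASTQ). Python's `gzip` module, like any DEFLATE reader without checkpoints, must re-decompress from the start of the stream on every backward seek, so per-record seeking costs O(n²) overall while a single sequential pass is O(n):

```python
import gzip
import io

# Build a small synthetic gzipped FASTQ in memory.
n_reads = 200
records = [f"@read{i}\nACGT\n+\nIIII\n" for i in range(n_reads)]
payload = gzip.compress("".join(records).encode())

# Pattern 1 -- sequential: one pass over the decompressed stream.
with gzip.open(io.BytesIO(payload), "rb") as fh:
    seq_pass = [fh.readline() for _ in range(4 * n_reads)]

# Byte offsets of each record in the *decompressed* stream.
offsets, pos = [], 0
for r in records:
    offsets.append(pos)
    pos += len(r.encode())

# Pattern 2 -- per-record seek: each backward seek forces GzipFile
# to rewind and re-decompress from the beginning of the stream.
seek_pass = []
with gzip.open(io.BytesIO(payload), "rb") as fh:
    for off in reversed(offsets):
        fh.seek(off)
        seek_pass.append(fh.readline())
```

If the library were seeking per read, even with seek-point checkpoints amortizing the rewind cost, it would still pay a decompression restart on every record instead of streaming through the file once.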

lmdu (Owner) commented Mar 12, 2020

Thank you for your post. pyfastx aims to enable random access to reads in FASTQ files, which depends on an index file. If the index file does not exist, pyfastx builds one the first time the FASTQ file is opened; this step can take considerably longer. When you open the file again and read sequences, iteration is very fast, comparable to iterating without an index. For a gzipped file, in addition to the positional index, pyfastx also builds a seek-point index to accelerate gzip seek operations.
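The positional-index idea can be sketched with stdlib code only (pyfastx persists its index to a file; here it is just an in-memory dict, and the record names are made up for illustration): one pass records each read's byte offset, after which any read is reachable with a single seek.

```python
import io

# A tiny FASTQ held in memory for illustration.
fastq = io.BytesIO(b"@r1\nACGT\n+\nIIII\n@r2\nTTAA\n+\nFFFF\n")

# First open: build the positional index (the slow step).
index = {}
while True:
    off = fastq.tell()
    header = fastq.readline()
    if not header:
        break
    name = header[1:].strip().decode()
    fastq.readline()  # sequence
    fastq.readline()  # '+'
    fastq.readline()  # quality
    index[name] = off

# Later accesses: jump straight to any read by name.
def get_seq(name):
    fastq.seek(index[name])
    fastq.readline()  # skip header
    return fastq.readline().strip().decode()

print(get_seq("r2"))  # TTAA
```

For a gzipped file this alone is not enough, because the byte offsets refer to the decompressed stream; that is what the separate seek-point index is for.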

afrubin (Author) commented Mar 19, 2020

Thanks for the quick reply.

When nothing is done with the reads except adding them to a list, pyfastx performs as expected. However, when read attributes are accessed while processing the file (e.g. to filter on quality), performance is very poor.

In my testing, the uncompressed file took 1m 5s wall time to build the index and process the full file, but the gzip-compressed version reliably failed to complete in less than an hour. This seemed to be the case whether or not a new index was built.

lmdu (Owner) commented Mar 19, 2020

Thank you for reporting this hidden performance problem!
When iterating over reads in a gzipped FASTQ file, each read is fetched by seeking to its start position and then reading its content with zran_read in zran.c. However, zran_read performs poorly under IO-intensive workloads, which I hadn't noticed before. In a later version I will add a buffered reader to improve performance. Thanks again!
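The buffered-reader fix can be sketched in stdlib Python (the function name and chunking strategy here are illustrative, not pyfastx's implementation): decompress one large chunk at a time and slice many records out of it, rather than issuing one seek-plus-decompress call per read.

```python
import gzip


def buffered_records(fileobj, chunk_size=1 << 16):
    """Yield FASTQ records (4-line tuples) from a gzipped stream,
    pulling decompressed data in large chunks rather than per record."""
    gz = gzip.GzipFile(fileobj=fileobj)
    tail = b""
    lines = []
    while True:
        chunk = gz.read(chunk_size)
        if not chunk:
            break
        parts = (tail + chunk).split(b"\n")
        tail = parts.pop()  # possibly incomplete last line
        lines.extend(parts)
        while len(lines) >= 4:
            yield tuple(lines[:4])
            del lines[:4]
    # Flush any complete record left after the final chunk.
    if tail:
        lines.append(tail)
    while len(lines) >= 4:
        yield tuple(lines[:4])
        del lines[:4]


# Usage with an in-memory gzipped FASTQ:
import io
raw = b"@r1\nACGT\n+\nIIII\n@r2\nTTAA\n+\nFFFF\n"
recs = list(buffered_records(io.BytesIO(gzip.compress(raw))))
print(len(recs))  # 2
```

The key property is that the number of decompression calls scales with file size divided by `chunk_size`, not with the number of reads.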

afrubin (Author) commented Mar 19, 2020

No problem! Let me know when this is implemented and I'll happily update the examples in my package documentation.

lmdu (Owner) commented Apr 22, 2020

We have improved the speed of reading sequences from gzipped FASTQ files in the new version, 0.6.10.

lmdu closed this as completed Apr 26, 2020
afrubin (Author) commented Aug 26, 2020

@lmdu Just wanted to let you know that I've re-run the benchmarks for my package as part of a new release and pyfastx performance is greatly improved!
