In CPU mode, smaller chunk_size does not reduce the memory use. #13
I seem to have hit a similar problem. I used 4 threads and a chunk size of 256, which should use little memory, right? The docs say 20 threads with a chunk size of 1024 use only about 20 GB. However, my PC froze and my task manager showed that nearly all memory (16 GB) was consumed. Please look into this problem. Thanks very much!
Thank you for your interest in RiboDetector. I am working on the new version; the next release will solve this issue. BTW, how large is your input fastq file (number of nucleotides)?
Fixed issue #13 and updated help message by merging
@xhxlilium This issue has been solved in the latest version, v0.2.6, which can be installed with pip. When running on your large input files, you can use a smaller
Memory use seems to be related not to the chunk_size setting but to the total number of input sequence bases. We need to find a way to reduce memory use for large input files.
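To make the intended behavior concrete, here is a minimal sketch (not RiboDetector's actual code) of reading a FASTQ file in fixed-size chunks, so that peak memory scales with the chunk size rather than the total input size. The function name and the `chunk_size` parameter are hypothetical, chosen to mirror the CLI option discussed above.

```python
from itertools import islice

def read_fastq_chunks(path, chunk_size=256):
    """Yield lists of at most `chunk_size` FASTQ records; each record is a
    (header, sequence, plus, quality) tuple of lines."""
    with open(path) as fh:
        while True:
            # Four lines per FASTQ record; read chunk_size records at a time.
            lines = list(islice(fh, 4 * chunk_size))
            if not lines:
                break
            yield [tuple(s.rstrip("\n") for s in lines[i:i + 4])
                   for i in range(0, len(lines), 4)]
```

Each chunk can be classified and written out before the next one is read, so only one chunk of sequences is resident in memory at a time; if memory still grows with total input size, something other than the per-chunk buffer (e.g. accumulated results) must be holding the sequences.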