[FR] Allow resuming in fasterq-dump #841
Did you run
Hello, thank you for replying. I still see value in a resume feature, since extraction takes longer than downloading the .sra file. Moreover, it can fail midway, for example by running out of tmp/ storage. However, I leave it to you to decide whether to close this issue.
How did you fix this? I'm in a similar situation: I have more than 1000 files to download, but it fails after 300–350 files, even though I run prefetch first; in my case prefetch is not able to get the given ID. Is it a space issue? Can you explain what you meant by "it might fail midway due to reasons such as running out of tmp/ storage"?
I'm using fasterq-dump to download and extract FASTQ files for all runs (~250 files, 50 GB each) of project PRJEB31266.
However, I find it very flaky; it fails multiple times. Retrying causes it to download and extract the entire file from scratch, which is wasteful in terms of both bandwidth and processing.
It would be nice if it had something similar to the `--resume` flag present in `prefetch`.
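Until such a flag exists, one common workaround is to split the job per accession: let `prefetch` handle the (resumable) download, and skip extraction for runs whose FASTQ output already exists from a previous attempt. The sketch below assumes a file named `accessions.txt` (one run accession per line) and an output directory `fastq/`; both names are placeholders, not part of sra-tools.

```shell
#!/usr/bin/env bash
# Hedged workaround sketch, NOT an official sra-tools feature:
# resume an interrupted batch by skipping already-extracted runs.
set -euo pipefail

OUTDIR="fastq"

# Exit 0 when no FASTQ for this accession exists yet, i.e. the run
# still needs extraction after an interrupted batch.
needs_extract() {
  local acc="$1"
  [ ! -e "${OUTDIR}/${acc}.fastq" ] && [ ! -e "${OUTDIR}/${acc}_1.fastq" ]
}

# Only attempt downloads when the tools and the accession list are present.
if command -v prefetch >/dev/null 2>&1 && [ -f accessions.txt ]; then
  mkdir -p "${OUTDIR}"
  while read -r acc; do
    if needs_extract "${acc}"; then
      prefetch "${acc}"                        # prefetch can resume a partial .sra download
      fasterq-dump --outdir "${OUTDIR}" "${acc}"
    else
      echo "skip ${acc}: FASTQ already present"
    fi
  done < accessions.txt
fi
```

Note that this only avoids re-doing *completed* runs; if fasterq-dump dies mid-extraction of a single large run, that run still restarts from scratch, which is exactly what this feature request would fix.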