REFERENCE: Running the pipeline with 50 samples. #4
Could you provide more details about the error? I also recommend running the pipeline with the new update and reporting back in this issue if the problem persists. I've adapted the output plotting scripts to handle a large number of samples.
I just ran 34 samples and the problem appeared.
[NanoCLUST ASCII banner] NanoCLUST v1.0dev — Run Name: ridiculous_kare
Hi, thank you for the logs. I've found some issues when running the pipeline with the parameters min_cluster_size and polishing_reads set to values that are too low. This may cause problems in the canu/racon/medaka processes. The values assigned in the test profile (50 and 20) are too low and may not be suitable for real samples (not a mock community) and bigger files. I recommend setting polishing_reads to 500-1000 and providing a higher min_cluster_size (100-300), and seeing whether the error goes away. Thank you for your time and for testing the tool.
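The recommended values above could also be kept in a custom profile in nextflow.config instead of being passed on the command line each run. A minimal sketch (the profile name `real_samples` is hypothetical, and the values simply follow the advice above):

```groovy
// Hypothetical "real_samples" profile for nextflow.config.
// The parameter names match the pipeline flags; the values are
// the ones suggested above for real (non-mock) samples.
profiles {
    real_samples {
        params {
            polishing_reads  = 500   // 500-1000 suggested for real data
            min_cluster_size = 200   // 100-300 suggested for real data
        }
    }
}
```

With this in place, the run would be launched with `-profile real_samples,docker` rather than repeating `--polishing_reads` and `--min_cluster_size` on every invocation.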
OK, thank you, I will try.
I have changed the parameters:
nextflow run NanoCLUST/main.nf --reads 'no/*.fastq' -profile docker --db db/16S_ribosomal_RNA --tax db/taxdb/ --polishing_reads 500 --min_cluster_size 200 --outdir result_nanopore22222 -name nanaopore1all2
Does this keep happening when min_cluster_size is 50 and polishing_reads is still 500? An excessive number of clusters caused by a min_cluster_size as low as 50 could kill the conda env. If you are still getting this error, I would try values even higher than 50 for min_cluster_size.
Hi, I've been inspecting your log, and the error 137 you are getting in the process means it ran out of memory. I've updated the nextflow.config file to use 8 GB initially and retry with more RAM if a process fails due to this error. We recommend at least 16 GB of RAM on your machine. I hope this time you don't get memory errors. EDIT: now fixing an issue with that commit. I will update when fixed.
OK.
The issue with the commit is fixed and the memory adjustments for these processes have changed. EDIT: Before the commit, the memory per process was capped at 7 GB, so even if your machine had enough memory, the process would fail. I'm also working on limiting how many processes can run at certain pipeline stages, to improve runs with multiple sample files. We have tested it with 12 samples.
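The retry-with-more-memory behaviour described above is commonly expressed in nextflow.config with dynamic process directives. A sketch of the idea, not the actual commit (the exact exit codes retried and the retry cap are assumptions):

```groovy
// Sketch: start each process at 8 GB and escalate memory on retry
// when it is killed for exceeding its limit (exit status 137 = SIGKILL,
// typically from the OOM killer).
process {
    memory        = { 8.GB * task.attempt }
    errorStrategy = { task.exitStatus in [137, 140] ? 'retry' : 'finish' }
    maxRetries    = 3
}
```

On the first failure with exit status 137 the task is resubmitted with 16 GB, then 24 GB, before giving up, which matches the "try with more RAM if a process fails" behaviour mentioned above.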
Hi, the problem is that running the pipeline with 1 sample works perfectly, but my data has 50 samples, and it always errors out when I run the 50 samples with the parameter --reads 'my path/*.fastq'.
Originally posted by @HaiyangDu in #1 (comment)