
optimal short read data #9

Open

dcopetti opened this issue Dec 20, 2018 · 2 comments

Comments

@dcopetti

Hello,

I am about to run HG-CoLoR, but first I wonder if there is a preferred format/coverage of the Illumina data.
I have a PE 470 bp library (2x260 bp) and a PE 700 bp library (2x150 bp). I am going to correct about 100 GB of PromethION data (~20x of a plant genome).
I wonder whether I should trim the short reads, remove the overlapping part in the PE 470 library, keep only sequences of the same length, what the optimal coverage is, and so on.
Thanks
Dario

@dcopetti
Author

Hi, can you please address my inquiry above?
Thanks!

@morispi
Owner

morispi commented Jan 10, 2019

Hello,

Sorry for not answering earlier; I was on Christmas break and wanted to take a real work-free break before starting to write my thesis.

PE libraries don't matter much to HG-CoLoR, as it does not make use of the pairing information. However, the experiments I ran showed that using shorter short reads (125 bp rather than 250-300 bp) provided slightly better results. These results are shown in Tables S6, S7, and S8 of the paper's Supplementary Material (https://oup.silverchair-cdn.com/oup/backfile/Content_public/Journal/bioinformatics/34/24/10.1093_bioinformatics_bty521/1/bty521_supplementary_data.pdf?Expires=1547204857&Signature=tmdkoxqJvTi84m82mOwoLHRaNEg4M5sfoqPVF48xQAsUVs5d10DZQB2qAWjLFXmKMu5DYif6LfZ65p69fPHLhSAU81ygTrravdxft2GQJXwZXj7fg~sdNUd5BxuK8EcTfttc2dkCmQNysecrRStshrT5TAZweMo-n22DAEym9RDqlNAdMJf0B2A9LaUs-o-l24Nscy5rH-icSa9nsUoZCMpuSChp8Ttfm28YeWgXx~x2m4Q-Hexs~0rfopRRk9MnWGJ~AIHLnRX7YAF~GQtUmX8YDE-EkKrY6Mj~eZUkV9tDvr-4PRsH611nzV73x1WUwXlyICtrVZ3ND4a2n9ihCA__&Key-Pair-Id=APKAIE5G5CRDK6RD3PGA). I only tested the tool up to the C. elegans genome in these experiments, however, so picking the longer 260 bp short reads might help to better cover repeats in your case.

There is no need to trim the short reads or remove the overlapping parts. The HG-CoLoR pipeline corrects the short reads itself (with QuoRUM, which is fast) to achieve high-quality correction.

As for the coverage, I usually use about 50x of short reads; using more than that did not show significantly better results in my experiments. If you still wish to use a higher short-read coverage, however, I highly recommend lowering the --bestn parameter and increasing the --solid parameter to avoid prohibitive runtimes and miscorrections.
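
For illustration only, here is a minimal sketch of how the paired-end reads could be downsampled to roughly 50x before correction. This is not part of HG-CoLoR; the ~5 Gbp genome size (implied by 100 GB of PromethION data being ~20x) and the file names are assumptions, and it handles a single library at a time:

```python
# Rough sketch (not part of HG-CoLoR): downsample one paired-end FASTQ library
# to approximately the target coverage, keeping both mates of a pair together.
import gzip
import random

GENOME_SIZE = 5_000_000_000   # assumption: ~5 Gbp, implied by 100 GB PromethION ~ 20x
TARGET_COV  = 50              # ~50x short-read coverage, as suggested above
READ_LEN    = 260             # per-mate length of the PE 470 bp library

# Number of read pairs needed to reach the target coverage.
target_pairs = GENOME_SIZE * TARGET_COV // (2 * READ_LEN)

def count_reads(path):
    """Count reads in a (possibly gzipped) FASTQ file: 4 lines per record."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt") as fh:
        return sum(1 for _ in fh) // 4

def subsample_pairs(in1, in2, out1, out2, keep_prob, seed=42):
    """Keep each read pair with probability keep_prob (same decision for both mates)."""
    rng = random.Random(seed)
    opener = lambda p, mode: gzip.open(p, mode) if p.endswith(".gz") else open(p, mode)
    with opener(in1, "rt") as f1, opener(in2, "rt") as f2, \
         opener(out1, "wt") as o1, opener(out2, "wt") as o2:
        while True:
            rec1 = [f1.readline() for _ in range(4)]
            rec2 = [f2.readline() for _ in range(4)]
            if not rec1[0]:
                break
            if rng.random() < keep_prob:
                o1.writelines(rec1)
                o2.writelines(rec2)

if __name__ == "__main__":
    # File names below are placeholders for your own library.
    total_pairs = count_reads("SR_470_R1.fastq.gz")
    keep_prob = min(1.0, target_pairs / total_pairs)
    subsample_pairs("SR_470_R1.fastq.gz", "SR_470_R2.fastq.gz",
                    "SR_470_50x_R1.fastq", "SR_470_50x_R2.fastq", keep_prob)
```

Any equivalent subsampling tool would do just as well; the only point is to pick pairs at random and keep both mates together so the coverage estimate stays meaningful.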

For the short-read/long-read alignment step, another known issue is that BLASR does not allow the reference file (in this case, the long-read file) to be larger than 4 GB. You will therefore have to split your long-read file into separate files of at most 4 GB each, and run a separate HG-CoLoR instance on each one, as sketched below. This will not affect the correction results, as each long read is processed independently during correction. I know this is quite impractical; investigating a better aligner is on my TODO list.
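
As a rough sketch of that splitting step (the input file name is a placeholder; any FASTA splitter that keeps records intact works just as well):

```python
# Minimal sketch: split a long-read FASTA into chunks of at most ~4 GB,
# so that each chunk can be given to a separate HG-CoLoR run.
MAX_BYTES = 4 * 1024**3  # BLASR's reference size limit mentioned above

def split_fasta(path, prefix):
    part, written, out = 0, 0, None
    with open(path) as fh:
        record = []

        def flush(rec_lines):
            nonlocal part, written, out
            size = sum(len(l) for l in rec_lines)
            # Start a new chunk if adding this read would exceed the limit.
            if out is None or written + size > MAX_BYTES:
                if out:
                    out.close()
                part += 1
                out = open(f"{prefix}.part{part}.fasta", "w")
                written = 0
            out.writelines(rec_lines)
            written += size

        for line in fh:
            if line.startswith(">") and record:
                flush(record)   # write out the previous read before starting a new one
                record = []
            record.append(line)
        if record:
            flush(record)
        if out:
            out.close()

split_fasta("promethion_reads.fasta", "promethion_reads")  # placeholder input name
```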

Moreover, another known issue is that the graph construction with PgSA tends to take a very long time as the short-read file grows, since PgSA does not support multithreaded construction. So using 50x coverage for your plant genome might take quite some time. Again, this is a known issue, and replacing PgSA with a proper FM-index that allows parallel construction is also on my TODO list.

I can't promise when the update will be done, as writing my thesis currently takes up a lot of my time. However, requests for running HG-CoLoR on large genomes are becoming frequent, so I might take some time to do it in the next few weeks, as this is a blocking point for every large experiment.

Best,
Pierre
