# SARS-CoV-2 data analysis
SARS-CoV-2 analysis pipeline for short-read, paired-end sequencing.
A Makefile is included that installs all dependencies using bioconda.

```bash
git clone --recursive https://github.com/tobiasrausch/covid19.git
```
## Preparing the reference databases and indexes
There is a script to download and index the SARS-CoV-2 and GRCh38 reference sequences.

```bash
cd ref/ && ./prepareREF.sh
```
Another script prepares the kraken2 human database used to filter host reads.

```bash
cd kraken2/ && ./prepareDB.sh
```
## Running the data analysis pipeline
A run script performs adapter trimming, host-read removal, alignment, variant calling and annotation, consensus calling, and some quality control. The last parameter, `unique_sample_id`, is used to create a unique output directory in the current working directory.

```bash
./src/run.sh <read.1.fq.gz> <read.2.fq.gz> <unique_sample_id>
```
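For a concrete invocation, the sample ID can be derived from the FASTQ file names. A minimal sketch, assuming the common `<sample>_1.fq.gz` / `<sample>_2.fq.gz` naming convention (the `data/sampleA` paths are hypothetical placeholders):

```bash
# Derive the mate-pair path and sample ID from the first read file.
# File names are hypothetical examples, not pipeline outputs.
fq1=data/sampleA_1.fq.gz
fq2="${fq1%_1.fq.gz}_2.fq.gz"        # data/sampleA_2.fq.gz
id=$(basename "$fq1" _1.fq.gz)       # sampleA
echo ./src/run.sh "$fq1" "$fq2" "$id"
```

Drop the `echo` to actually launch the pipeline.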
The main output files are:

- the adapter-trimmed and host-filtered FASTQ files
- the alignment to SARS-CoV-2
- the consensus sequence
- the annotated variants
- the assigned lineage
- the summary QC report
The above pipeline generates a report for every sample and can be naively parallelized at the sample level. You can then aggregate all the QC information and the lineage and clade assignments using

```bash
./src/aggregate.sh outtable */*.qc.summary
```
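Sample-level parallelization can be as simple as one pipeline invocation per read pair. A dry-run sketch with placeholder FASTQ names (it only prints the commands; remove the `echo` to execute them, or feed the printed list to `xargs -P` or GNU parallel):

```bash
# Dry run over placeholder read pairs in a scratch directory.
workdir=$(mktemp -d)
cd "$workdir"
touch sampleA_1.fq.gz sampleA_2.fq.gz sampleB_1.fq.gz sampleB_2.fq.gz
for fq1 in *_1.fq.gz; do
  id="${fq1%_1.fq.gz}"
  echo ./src/run.sh "$fq1" "${id}_2.fq.gz" "$id"
done
```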
You can estimate cross-contamination based on the allele frequencies of the variant calls using

```bash
./src/crosscontam.sh contam */*.bcf
```
This works best on good-quality consensus sequences, i.e., samples flagged "RKI pass" in the QC summary:

```bash
./src/crosscontam.sh contam `grep "RKI pass" */*.qc.summary | sed 's/.qc.summary.*$/.bcf/' | tr '\n' ' '`
```
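To see why the `sed` step yields BCF paths: `grep` over multiple files prints `path:matching line`, and the substitution replaces everything from `.qc.summary` onward with `.bcf`. A self-contained illustration with a hypothetical sample name:

```bash
# grep over multiple files emits "path:matching line";
# rewrite it to the corresponding BCF path.
line='sampleA/sampleA.qc.summary:RKI pass'
bcf=$(printf '%s\n' "$line" | sed 's/.qc.summary.*$/.bcf/')
echo "$bcf"    # sampleA/sampleA.bcf
```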
## Example

The repository contains an example script using a COG-UK data set.

```bash
cd example/ && ./expl.sh
```
## Credits

Many thanks to the open science of COG-UK; their data sets in ENA were very useful for developing this code. The workflow uses many tools distributed via bioconda (please see the Makefile for all dependencies), and, of course, thanks to all of their developers.