A Python toolkit providing best-practice pipelines for fully automated high-throughput sequencing analysis. You write a high-level configuration file specifying your inputs and analysis parameters, and this drives a parallel pipeline that handles distributed execution, idempotent processing restarts and safe transactional steps. The goal is to provide a shared community resource that handles the front-end data processing component of sequencing analysis, allowing us to focus on the downstream biology.
Install bcbio-nextgen with all tool dependencies and data files:
wget https://raw.github.com/chapmanb/bcbio-nextgen/master/scripts/bcbio_nextgen_install.py
python bcbio_nextgen_install.py install_directory data_directory
This produces a system configuration file referencing the installed software and data.
Edit a sample configuration file to describe your samples.
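The sample configuration is a YAML file listing each sample's input files and analysis options. A minimal sketch is below; the file names, genome build and algorithm settings are illustrative placeholders, and the full set of supported keys is covered in the documentation:

```yaml
details:
  - files: [sample1_R1.fastq, sample1_R2.fastq]  # paired-end inputs (illustrative names)
    description: Sample1                          # free-text sample label
    analysis: variant                             # run the variant calling pipeline
    genome_build: GRCh37                          # reference genome to align against
    algorithm:
      aligner: bwa                                # alignment program to use
```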
Run analysis, distributed across 8 local cores:
bcbio_nextgen.py bcbio_system.yaml bcbio_sample.yaml -n 8
See the full documentation at ReadTheDocs.
The pipeline implements the GATK best-practice guidelines for variant calling, which include:
- Base quality score recalibration
- Realignment around indels
- Variant calling

The pipeline also supports:
- Quality filtering, using both GATK's Variant Quality Score Recalibrator (VQSR) and hard filtering
- Annotation of variant effects, using snpEff
The pipeline runs on single multicore machines, on compute clusters managed by LSF or SGE using IPython parallel, or on the Amazon cloud. This tutorial describes running the pipeline on Amazon with CloudBioLinux and CloudMan.
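On a cluster, the same command distributes work through IPython parallel by naming the scheduler and queue. A sketch, assuming an LSF scheduler with a queue named `queue` (the queue name and core count are placeholders; check the documentation for the flags supported by your version):

```shell
# Illustrative: distribute across 64 cores via IPython parallel on an LSF cluster.
# -t selects the parallel backend, -s the scheduler, -q the submission queue.
bcbio_nextgen.py bcbio_system.yaml bcbio_sample.yaml -t ipython -s lsf -q queue -n 64
```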
The scripts integrate tightly with the Galaxy web-based analysis tool. Sample tracking occurs via a web-based LIMS system, and processed results are uploaded into Galaxy Data Libraries for researcher access and additional analysis. See the installation instructions for the front end and a detailed description of the full system.