GATK-SV

A structural variation discovery pipeline for Illumina short-read whole-genome sequencing (WGS) data.

Requirements

Deployment and execution:

  • A Google Cloud account.
  • A workflow execution system supporting the Workflow Description Language (WDL), either:
    • Cromwell (v36 or higher). A dedicated server is highly recommended.
    • or Terra (note preconfigured GATK-SV workflows are not yet available for this platform)
  • Recommended: MELT. Due to licensing restrictions, we cannot provide a public docker image or reference panel VCFs for this algorithm.
  • Recommended: cromshell for interacting with a dedicated Cromwell server.
  • Recommended: WOMtool for validating WDL/json files (see the example below).
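
As a quick check before submission, WDL files can be validated with WOMtool, e.g. (womtool.jar here stands for the downloaded WOMtool release jar):

> java -jar womtool.jar validate wdl/GATKSVPipelineBatch.wdl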

Data:

  • Illumina short-read whole-genome CRAMs or BAMs, aligned to hg38 with bwa-mem. BAMs must also be indexed.
  • Indexed GVCFs produced by GATK HaplotypeCaller, or a jointly genotyped VCF.
  • Family structure definitions file in PED format (see the example below).
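
For reference, a PED file is a tab-delimited, six-column table with one row per sample: family ID, sample ID, paternal ID, maternal ID, sex (1=male, 2=female), and phenotype (0=missing). A minimal sketch with hypothetical sample IDs:

FAM1    SAMPLE_01    0            0            1    0
FAM1    SAMPLE_02    0            0            2    0
FAM1    SAMPLE_03    SAMPLE_01    SAMPLE_02    1    0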

Please cite the following publication: Collins, Brand, et al. 2020. "A structural variation reference for medical and population genetics." Nature 581, 444-451.

Additional references: Werling et al. 2018. "An analytical framework for whole-genome sequence association studies and its implications for autism spectrum disorder." Nature genetics 50.5, 727-736.

WDLs

There are two scripts for running the full pipeline:

  • wdl/GATKSVPipelineBatch.wdl: Runs GATK-SV on a batch of samples.
  • wdl/GATKSVPipelineSingleSample.wdl: Runs GATK-SV on a single sample, given a reference panel.

Inputs

Example workflow inputs can be found in /inputs. All required resources are available in public Google buckets.
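
These public resources live in Google Cloud Storage and can be browsed with gsutil (requires an authenticated gcloud setup); for example, the resource bucket referenced later in this README:

> gsutil ls gs://gatk-sv-resources-public/hg38/v0/sv-resources/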

MELT

Important: The example input files contain MELT inputs that are NOT public (see Requirements). These include:

  • GATKSVPipelineSingleSample.melt_docker and GATKSVPipelineBatch.melt_docker - MELT docker URI (see Docker readme)
  • GATKSVPipelineSingleSample.ref_std_melt_vcfs - Standardized MELT VCFs (Module00c)

The input values are provided only as an example and are not publicly accessible. In order to include MELT, these values must be provided by the user. MELT can be disabled by deleting these inputs and setting GATKSVPipelineBatch.use_melt to false.
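
For example, a batch run without MELT would drop the MELT-related entries from the inputs JSON and set the flag as follows (a minimal sketch; all other required inputs omitted):

"GATKSVPipelineBatch.use_melt": false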

Requester pays buckets

Important: The following parameters must be set when certain input data is in requester pays (RP) buckets (a JSON sketch follows this list):

  • GATKSVPipelineSingleSample.requester_pays_cram and GATKSVPipelineBatch.Module00aBatch.requester_pays_crams - set to True if inputs are CRAM format and in an RP bucket, otherwise False.
  • GATKSVPipelineBatch.GATKSVPipelinePhase1.gcs_project_for_requester_pays - set to your Google Cloud Project ID if gVCFs are in an RP bucket, otherwise omit this parameter.
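
For instance, a batch inputs JSON with CRAMs and gVCFs in RP buckets might include (the project ID is a placeholder):

"GATKSVPipelineBatch.Module00aBatch.requester_pays_crams": true,
"GATKSVPipelineBatch.GATKSVPipelinePhase1.gcs_project_for_requester_pays": "my-gcp-project-id"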

Execution

We recommend running the pipeline on a dedicated Cromwell server with a cromshell client. A batch run can be started with the following commands:

> mkdir gatksv_run && cd gatksv_run
> mkdir wdl && cd wdl
> cp $GATK_SV_V1_ROOT/wdl/*.wdl .
> zip dep.zip *.wdl
> cd ..
> cp $GATK_SV_V1_ROOT/inputs/GATKSVPipelineBatch.ref_panel_1kg.json GATKSVPipelineBatch.my_run.json
> cromshell submit wdl/GATKSVPipelineBatch.wdl GATKSVPipelineBatch.my_run.json cromwell_config.json wdl/dep.zip

where cromwell_config.json is a Cromwell workflow options file. Note that users will need to re-populate batch/sample-specific parameters (e.g. BAMs and sample IDs).
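
A minimal workflow options file might look like the following (the output bucket is a placeholder; see the Cromwell documentation for the full set of options):

{
  "final_workflow_outputs_dir": "gs://my-bucket/gatksv-outputs",
  "use_relative_output_paths": false,
  "default_runtime_attributes": {
    "zones": "us-central1-a us-central1-b us-central1-c"
  }
}

Once submitted, the run can be monitored with cromshell, e.g. cromshell status or cromshell metadata.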

The pipeline consists of a series of modules that perform the following:

  • Module 00a: SV evidence collection, including calls from a configurable set of algorithms (Delly, Manta, MELT, and Wham), read depth (RD), split read positions (SR), and discordant pair positions (PE).
  • Module 00b: Dosage bias scoring and ploidy estimation
  • Module 00c: Copy number variant calling using cn.MOPS and GATK gCNV; B-allele frequency (BAF) generation; call and evidence aggregation
  • Module 01: Variant clustering
  • Module 02: Variant filtering metric generation
  • Module 03: Variant filtering; outlier exclusion
  • Module 04: Genotyping
  • Module 05/06: Cross-batch integration; complex variant resolution and re-genotyping; vcf cleanup
  • Module 07: Downstream filtering, including minGQ filtering, batch effect checks, outlier sample removal, and final recalibration
  • Module 08: Annotations, including functional annotation, allele frequency (AF) annotation, and AF annotation with external population callsets
  • Module 09: Visualization, including scripts that generate IGV screenshots and RD plots
  • Additional modules to be added: de novo and mosaic scripts

Repository structure:

  • /inputs: Example workflow parameter files for running gCNV training, GATK-SV batch mode, and GATK-SV single-sample mode
  • /dockerfiles: Resources for building pipeline docker images (see readme)
  • /wdl: WDLs running the pipeline. There is a master WDL for running each module, e.g. Module01.wdl.
  • /scripts: scripts for running tests, building dockers, and analyzing cromwell metadata files
  • /src: main pipeline scripts
    • /RdTest: scripts for depth testing
    • /sv-pipeline: various scripts and packages used throughout the pipeline
    • /svqc: Python module for checking that pipeline metrics fall within acceptable limits
    • /svtest: Python module for generating various summary metrics from module outputs
    • /svtk: Python module of tools for SV-related datafile parsing and analysis
    • /WGD: whole-genome dosage scoring scripts
  • /test: WDL test parameter files. Please note that file inputs may not be publicly available.

Cohort mode

A minimum cohort size of 100 with roughly equal numbers of males and females is recommended. For modest cohorts (~100-500 samples), the pipeline can be run as a single batch using GATKSVPipelineBatch.wdl.

For larger cohorts, samples should be split up into batches of ~100-500 samples. We recommend batching based on overall coverage and dosage score (WGD), which can be generated in Module 00b.

The pipeline should be executed as follows:

  • Modules 00a and 00b can be run on arbitrary cohort partitions
  • Modules 00c, 01, 02, and 03 are run separately per batch
  • Module 04 is run separately per batch, using filtered variants (Module 03 output) combined across all batches
  • Module 05/06 and beyond are run on all batches together

Note: Module 00c requires a trained gCNV model.

Single-sample mode

GATKSVPipelineSingleSample.wdl runs the pipeline on a single sample using a fixed reference panel. An example reference panel containing 156 samples from the NYGC 1000G Terra workspace is provided with inputs/GATKSVPipelineSingleSample.ref_panel_1kg.na12878.json.

Custom reference panels can be generated by running GATKSVPipelineBatch.wdl and trainGCNV.wdl and using the outputs to replace the following single-sample workflow inputs (a JSON sketch of a few of these entries follows the list):

  • GATKSVPipelineSingleSample.ref_ped_file : batch.ped - Manually created (see data requirements)
  • GATKSVPipelineSingleSample.contig_ploidy_model_tar : batch-contig-ploidy-model.tar.gz - gCNV contig ploidy model (gCNV training)
  • GATKSVPipelineSingleSample.gcnv_model_tars : batch-model-files-*.tar.gz - gCNV model tarballs (gCNV training)
  • GATKSVPipelineSingleSample.ref_pesr_disc_files - sample.disc.txt.gz - Paired-end evidence files (Module 00a)
  • GATKSVPipelineSingleSample.ref_pesr_split_files - sample.split.txt.gz - Split read evidence files (Module 00a)
  • GATKSVPipelineSingleSample.ref_panel_bincov_matrix: batch.RD.txt.gz - Read counts matrix (Module 00c)
  • GATKSVPipelineSingleSample.ref_panel_del_bed : batch.DEL.bed.gz - Depth deletion calls (Module 00c)
  • GATKSVPipelineSingleSample.ref_panel_dup_bed : batch.DUP.bed.gz - Depth duplication calls (Module 00c)
  • GATKSVPipelineSingleSample.ref_samples - Reference panel sample IDs
  • GATKSVPipelineSingleSample.ref_std_manta_vcfs - std_XXX.manta.sample.vcf.gz - Standardized Manta VCFs (Module 00c)
  • GATKSVPipelineSingleSample.ref_std_melt_vcfs - std_XXX.melt.sample.vcf.gz - Standardized MELT VCFs (Module 00c)
  • GATKSVPipelineSingleSample.ref_std_wham_vcfs - std_XXX.wham.sample.vcf.gz - Standardized Wham VCFs (Module 00c)
  • GATKSVPipelineSingleSample.cutoffs : batch.cutoffs - Filtering cutoffs (Module 03)
  • GATKSVPipelineSingleSample.genotype_pesr_pesr_sepcutoff : genotype_pesr.pesr_sepcutoff.txt - Genotyping cutoffs (Module 04)
  • GATKSVPipelineSingleSample.genotype_pesr_depth_sepcutoff : genotype_pesr.depth_sepcutoff.txt - Genotyping cutoffs (Module 04)
  • GATKSVPipelineSingleSample.genotype_depth_pesr_sepcutoff : genotype_depth.pesr_sepcutoff.txt - Genotyping cutoffs (Module 04)
  • GATKSVPipelineSingleSample.genotype_depth_depth_sepcutoff : genotype_depth.depth_sepcutoff.txt - Genotyping cutoffs (Module 04)
  • GATKSVPipelineSingleSample.PE_metrics : pe_metric_file.txt - Paired-end evidence genotyping metrics (Module 04)
  • GATKSVPipelineSingleSample.SR_metrics : sr_metric_file.txt - Split read evidence genotyping metrics (Module 04)
  • GATKSVPipelineSingleSample.ref_panel_vcf : batch.cleaned.vcf.gz - Final output VCF (Module 05/06)
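
For example, a few of these overrides might appear in the single-sample inputs JSON as follows (the bucket paths are placeholders for outputs of your own reference-panel run):

"GATKSVPipelineSingleSample.ref_ped_file": "gs://my-bucket/ref_panel/batch.ped",
"GATKSVPipelineSingleSample.contig_ploidy_model_tar": "gs://my-bucket/ref_panel/batch-contig-ploidy-model.tar.gz",
"GATKSVPipelineSingleSample.ref_panel_vcf": "gs://my-bucket/ref_panel/batch.cleaned.vcf.gz"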

Both the cohort and single-sample modes use the GATK gCNV depth calling pipeline, which requires a trained model as input. The samples used for training should be technically homogeneous and similar to the samples to be processed (i.e. same sample type, library prep protocol, sequencer, sequencing center, etc.). The samples to be processed may comprise all or a subset of the training set. For small cohorts, a single gCNV model is usually sufficient. If a cohort contains multiple data sources, we recommend clustering them using the dosage score, and training a separate model for each cluster.

The following sections briefly describe each module and highlight inter-dependent input/output files. Note that input/output mappings can also be gleaned from GATKSVPipelineBatch.wdl, and example input files for each module can be found in /test.

Module 00a

Runs raw evidence collection on each sample.

Note: a list of sample IDs must be provided. These IDs should be unique and contain only alphanumeric characters and underscores. They need not match sample names from the BAM/CRAM headers. IDs containing other characters may cause errors. GetSampleID.wdl can be used to fetch BAM sample IDs and also generates a set of alternate IDs that are considered safe for this pipeline. Currently, sample IDs can be replaced again in Module 00c.
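
A quick way to flag problematic IDs is to scan a sample list for anything other than alphanumeric characters and underscores (a sketch, assuming one ID per line in a hypothetical sample_ids.list):

> grep -Ev '^[A-Za-z0-9_]+$' sample_ids.list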

Inputs:

  • Per-sample BAM or CRAM files aligned to hg38. Index files (.bai) must be provided if using BAMs.

Outputs:

  • Caller VCFs (Delly, Manta, MELT, and/or Wham)
  • Binned read counts file
  • Split reads (SR) file
  • Discordant read pairs (PE) file
  • B-allele fraction (BAF) file

Module 00b

Runs ploidy estimation, dosage scoring, and optionally VCF QC. The results from this module can be used for QC and batching.

For large cohorts, we recommend dividing samples into smaller batches (~500 samples) with ~1:1 male:female ratio.

We also recommend using sex assignments generated from the ploidy estimates and incorporating them into the PED file.

Prerequisites:

Inputs:

Outputs:

  • Per-sample dosage scores with plots
  • Ploidy estimates, sex assignments, with plots
  • (Optional) Outlier samples detected by call counts

gCNV Training

Trains a gCNV model for use in Module 00c. The WDL can be found at /gcnv/trainGCNV.wdl.

Prerequisites:

Inputs:

Outputs:

  • Contig ploidy model tarball
  • gCNV model tarballs

Module 00c

Runs CNV callers (cn.MOPS, GATK gCNV) and combines single-sample raw evidence into a batch. See above for more information on batching.

Prerequisites:

Inputs:

  • PED file (updated with Module 00b sex assignments)
  • Per-sample GVCFs generated with HaplotypeCaller (gvcfs input), or a jointly-genotyped VCF (position-sharded, snp_vcfs input)
  • Read count, BAF, PE, and SR files (Module 00a)
  • Caller VCFs (Module 00a)
  • Contig ploidy model and gCNV model files (gCNV training)

Outputs:

  • Combined read count matrix, SR, PE, and BAF files
  • Standardized call VCFs
  • Depth-only (DEL/DUP) calls
  • Per-sample median coverage estimates
  • (Optional) Evidence QC plots

Module 01

Clusters SV calls across a batch.

Prerequisites:

Inputs:

Outputs:

  • Clustered SV VCFs
  • Clustered depth-only call VCF

Module 02

Generates variant metrics for filtering.

Prerequisites:

Inputs:

Outputs:

  • Metrics file

Module 03

Filters poor-quality variants and outlier samples.

Prerequisites:

Inputs:

  • Batch PED file
  • Metrics file (Module 02)
  • Clustered SV and depth-only call VCFs (Module 01)

Outputs:

  • Filtered SV (non-depth-only a.k.a. "PESR") VCF with outlier samples excluded
  • Filtered depth-only call VCF with outlier samples excluded
  • Random forest cutoffs file
  • PED file with outlier samples excluded

Merge Cohort VCFs

Combines filtered variants across batches. The WDL can be found at /wdl/MergeCohortVcfs.wdl.

Prerequisites:

Inputs:

Outputs:

  • Combined cohort PESR and depth VCFs
  • Cohort and clustered depth variant BED files

Module 04

Genotypes a batch of samples across unfiltered variants combined across all batches.

Prerequisites:

Inputs:

  • Batch PESR and depth VCFs (Module 03)
  • Cohort PESR and depth VCFs (Merge Cohort VCFs)
  • Batch read count, PE, and SR files (Module 00c)

Outputs:

  • Filtered SV (non-depth-only a.k.a. "PESR") VCF with outlier samples excluded
  • Filtered depth-only call VCF with outlier samples excluded
  • PED file with outlier samples excluded
  • List of SR pass variants
  • List of SR fail variants
  • (Optional) Depth re-genotyping intervals list

Module 04b (in development)

Re-genotypes probable mosaic variants across multiple batches.

Prerequisites:

Inputs:

  • Per-sample median coverage estimates (Module 00c)
  • Pre-genotyping depth VCFs (Module 03)
  • Batch PED files (Module 03)
  • Clustered depth variant BED file (Merge Cohort VCFs)
  • Cohort depth VCF (Merge Cohort VCFs)
  • Genotyped depth VCFs (Module 04)
  • Genotyped depth RD cutoffs file (Module 04)

Outputs:

  • Re-genotyped depth VCFs

Module 05/06

Combines variants across multiple batches, resolves complex variants, re-genotypes, and performs final VCF clean-up.

Prerequisites:

Inputs:

Outputs:

  • Finalized "cleaned" VCF and QC plots

Module 07 (in development)

Applies downstream filtering steps to the cleaned VCF to further control the false discovery rate. All steps are optional, and users should decide which to apply based on the specific purpose of their project.

Filtering methods include:

  • minGQ - remove variants based on the genotype quality across populations. Note: trio families are required to build the minGQ filtering model in this step. For projects that lack family structures, we provide tables pre-trained with 1000 Genomes samples at different FDR thresholds:
    • gs://gatk-sv-resources-public/hg38/v0/sv-resources/ref-panel/1KG/v2/mingq/1KGP_2504_and_698_with_GIAB.10perc_fdr.PCRMINUS.minGQ.filter_lookup_table.txt
    • gs://gatk-sv-resources-public/hg38/v0/sv-resources/ref-panel/1KG/v2/mingq/1KGP_2504_and_698_with_GIAB.1perc_fdr.PCRMINUS.minGQ.filter_lookup_table.txt
    • gs://gatk-sv-resources-public/hg38/v0/sv-resources/ref-panel/1KG/v2/mingq/1KGP_2504_and_698_with_GIAB.5perc_fdr.PCRMINUS.minGQ.filter_lookup_table.txt

  • BatchEffect - remove variants that show significantly higher-than-expected differences between batches

  • FilterOutlierSamples - remove outlier samples with an extremely high or low number of SVs

  • FilterCleanupQualRecalibration - modify filter columns for easier interpretation

Module 08 (in development)

Adds annotations, such as the inferred function and allele frequencies of variants, to the final VCF.

Annotation methods include:

  • Functional annotation - annotate SVs with their inferred function in protein-coding regions, regulatory regions such as UTRs and promoters, and other non-coding elements
  • Allele frequency annotation - annotate SVs with their allele frequencies across all samples, within each sex, and within specific sub-populations
  • Allele frequency annotation with external callset - annotate SVs with the allele frequencies of overlapping SVs in another callset, e.g. the gnomAD-SV callset

Module 09 (in development)

Visualizes SVs with IGV screenshots and read depth plots.

Visualization methods include:

  • RD visualization - generate RD plots across all samples, ideal for visualizing large CNVs
  • IGV visualization - generate IGV plots of each SV for individual samples, ideal for visualizing small de novo SVs
  • Module09.visualize.wdl - generate RD and IGV plots and combine them for easy review

Troubleshooting

VM runs out of memory or disk

  • Default pipeline settings are tuned for batches of 100 samples. Larger batches or cohorts may require additional VM resources. Most runtime attributes can be modified through the RuntimeAttr inputs, which are specified in the inputs json as follows:
"ModuleX.runtime_attr_override": {
  "disk_gb": 100,
  "mem_gb": 16
},

Note that a subset of the struct attributes can be specified. See wdl/Structs.wdl for available attributes.
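
For reference, the RuntimeAttr struct exposes fields along these lines (all optional; confirm the exact names against wdl/Structs.wdl):

# Runtime resource overrides; any subset of fields may be set
struct RuntimeAttr {
  Float? mem_gb
  Int? cpu_cores
  Int? disk_gb
  Int? boot_disk_gb
  Int? preemptible_tries
  Int? max_retries
}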
