This repo contains the MEDI Nextflow pipelines for recovering food abundances and nutrient composition from metagenomic shotgun sequencing data.
It provides the following individual functionalities:

- Mapping items in FOODB to all currently available items in NCBI databases and prioritizing hits
- Downloading and consolidating all full and partial assemblies, calculating ANI distances using MinHash sketches, and annotating NCBI Taxonomy IDs for use with Kraken2
- Building the Kraken2 and Bracken hashes/databases, including decoy sequences
- Quantifying food DNA and per-portion nutrient composition in metagenomic samples, starting from raw FASTQ files
You will need a working miniforge or miniconda installation to start. You can follow the installation instructions here. Afterwards, create an environment from the included conda environment file.
Start by cloning the repository, cd-ing into it, and creating the environment:

```bash
git clone https://github.com/gibbons-lab/medi
cd medi
conda env create -n medi -f medi.yml
```
After that, activate the environment:

```bash
conda activate medi
```
And you are done. If you are running this on an HPC cluster or a cloud provider, you might need to adjust your Nextflow settings for your setup.
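As an illustration only (the executor name, queue, and limits below are placeholders, not part of this repo), a minimal `nextflow.config` for a Slurm cluster might look like:

```groovy
// Hypothetical example — adjust executor, queue, and limits for your cluster.
process {
    executor = 'slurm'
    queue    = 'general'   // your partition name
}

executor {
    queueSize = 100        // maximum number of jobs submitted at once
}
```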
All pipelines support a `--max_threads` parameter that defines the maximum number of threads any single process may use, e.g. `nextflow run database.nf --max_threads 8`.
```bash
nextflow run -resume database.nf
```
This will bootstrap the database from scratch, downloading all required files and performing the matching against the current versions of NCBI GenBank and Nucleotide. You can speed up the querying by obtaining an NCBI API key and adding it to your `.Rprofile` with

```r
options(reutil.api.key="XXXXXX")
```
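One way to add this option (assuming a Unix shell, and that you want the key in the `.Rprofile` of the current directory, which R also reads at startup) is:

```shell
# Append the NCBI API key option to .Rprofile in the current directory.
# Replace XXXXXX with your actual key.
echo 'options(reutil.api.key="XXXXXX")' >> .Rprofile
```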
After running the previous step, continue with

```bash
nextflow run build_kraken.nf --max_db_size=500
```
Here `--max_db_size` denotes the maximum size of the database in GB. The default applies no reduction, but you can set a lower value, which will create a smaller but less accurate hash. Note that for good performance you will need more RAM than the size you choose here.
Note that this step of the pipeline will not work with the `-resume` option. The `add_*` processes need to finish completely, or the pipeline has to be restarted from the beginning. Should this step succeed but a later step crash, you can trigger just the hash building with the `--rebuild` option (e.g. `nextflow run build_kraken.nf --rebuild`), which rebuilds the database without attempting to add sequences again.
For your own sequencing data, create a directory and either copy or link the MEDI pipeline there. You will need at least the `quant.nf` file and the `scripts` folder.
Then create a `data` directory and, within that, a `raw` folder containing your unprocessed, demultiplexed FASTQ files. The layout should look like:
```
|- quant.nf
|- scripts/
|- data/
|-- raw/
```
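The layout above can be set up with a few shell commands (the MEDI checkout path below is a placeholder for your clone):

```shell
# Create the data/raw directory for unprocessed, demultiplexed FASTQ files.
mkdir -p data/raw

# Copy or link quant.nf and the scripts folder from your MEDI checkout, e.g.:
# ln -s /path/to/medi/quant.nf .
# ln -s /path/to/medi/scripts .
```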
After that you can run MEDI with

```bash
nextflow run quant.nf -resume --db=/path/to/medi_db
```
Here `/path/to/medi_db` should be the output directory from step (3), usually `medi/data/medi_db`.
Planned improvements:

- see if we can provide a reduced DB for download
- make execution more flexible
- add resource limits for individual steps for Grid clusters