MetaPathways v1.0 Installation

1. Downloading MetaPathways

Download the zip file MetaPathways_v1.0.zip from http://hallam.microbiology.ubc.ca/MetaPathways/ or from the GitHub releases page. After you have downloaded the file, unzip it and inspect the contents of the MetaPathways/ folder (Figure 1).

The MetaPathways/ folder

Figure 1 - An example of the MetaPathways/ folder from the MetaPathways_v1.0.zip file. Notice that the folder has a number of different files and folders inside it. The template configuration (template_config.txt) and parameter configuration (template_param.txt) files are used to configure and set parameter settings of each of the analytical steps of the pipeline. Additionally, the Python script, MetaPathways.py, is used to start the pipeline.

A Tour of the MetaPathways/ folder:

  • blastDB/ - place where BLAST databases are stored along with name-mapping and taxonomic support files for specific databases like KEGG and COG
  • daemon.py - a script that carries out external operations on super-computing grids using the Sun Grid engine
  • executables/ - contains various analytical and data handling programs that process the inputs and outputs of different steps of the pipeline e.g. BLAST, Prodigal, trna-scan, etc.
  • libs/ - the code library folder contains different Perl and Python functions and code that coordinate different steps of the pipeline
  • MetaPathways.py - the starter script/program that runs the pipeline with specific configuration and parameter settings for each of the steps
  • MetaPathwaysrc - a source file that must be run so that the system knows where the MetaPathways/ folder is located; it also sets the local Python and Perl paths and compiles some executable code
  • template_config.txt - a configuration file that tells the pipeline where to find the resources it needs, e.g. the Python and Perl executables, Pathway Tools, the reference databases, and the MetaPathways/ folder itself
  • template_header.txt - a template header for output GenBank (.gbk) files
  • template_param.txt - a parameter file that specifies the analytical settings for all pipeline steps. e.g. BLAST cut-offs, steps to include in a run of the analysis, what order to annotate databases in, etc.
  • testdata/ - contains some simple .fasta files to do a dry-run to ensure that everything in the pipeline is working properly

For simplicity we are going to perform this installation in the user home folder, /Users/[username]/, by default. In Unix commands the tilde character (~) is equivalent to your home directory. On OSX systems the home folder can be reached through any of the following:

  • Double-click the “Macintosh HD” on the Desktop
  • Right-click (control-click) the “Finder” icon in the Dock and select “New Finder Window”
  • Left-click the “Finder” icon and press (command + n)
  • Go to home from any finder folder by pressing (shift + command + h)

Drag-and-drop the newly extracted MetaPathways/ folder into the home directory. It should sit at ~/MetaPathways/ when accessed through the terminal.
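
Alternatively, the same move can be done from the terminal (a minimal sketch, assuming the zip file was saved to ~/Downloads and extracts to a folder named MetaPathways/):

# unzip the release and move the extracted folder into your home directory
$ cd ~/Downloads
$ unzip MetaPathways_v1.0.zip
$ mv MetaPathways ~/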

MetaPathways requires the use of the unix command-line terminal to run. On OSX systems this is done through the “Terminal” program located in:

  • Applications > Utilities > Terminal

You may want to place this program on your OSX Dock for future convenience.

2. Installing the programming languages Python, Perl, and GCC

Install the required Python 2.x, Perl 5.x, and GCC compiler. For OSX users, these are all contained within the current release of Xcode 4, which can be obtained for free from https://developer.apple.com/xcode/ or from the Apple App Store on modern releases of OSX. Alternatively, Perl and Python installation files and documentation can be obtained from their respective websites (http://www.perl.org/ and http://www.python.org/).

These can also be obtained through a package management system like Synaptic. Many Unix distributions, like the popular Ubuntu, include versions of Python, Perl, and GCC by default, but you will want to ensure that they are the proper versions.
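
On Debian-based distributions such as Ubuntu, for example, the packages can be installed or updated from the command line (a sketch; package names can differ between distributions):

# install python, perl, and the gcc compiler on a Debian/Ubuntu system
$ sudo apt-get install python perl gcc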

In many instances, installing new programming languages is quite low-level from an OS perspective, and may require some discussion with your local system administrator. A restart of the computer might also be required. It is also a good idea to open the terminal after installation to check if these installations made it to your system’s $PATH variable using the which command:

# tests to see if a perl, python, or gcc are included in your $PATH variable
$ which perl
/usr/bin/perl
$ which python
/usr/bin/python
$ which gcc
/Developer/usr/bin/gcc
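
To confirm that the versions found on your $PATH are recent enough (Python 2.x, Perl 5.x), you can also print their version strings; the exact output will vary by system:

# print the installed versions; check for Python 2.x and Perl 5.x
$ python --version
$ perl -v
$ gcc --version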

3. Install Pathway Tools

One of the final steps of the MetaPathways pipeline uses the software Pathway Tools to build a Pathway/Genome Database (PGDB) from environmental nucleotide sequences. The Pathway Tools software can be obtained directly from SRI International and will require obtaining an academic license for the software (http://biocyc.org/download.shtml). This is free for academic users and approval usually takes approximately 1-2 business days. Problems with licensing can be emailed to ptools-info@ai.sri.com. SRI International provides installation instructions for OSX and Unix, and the software is extensively documented at its homepage: http://bioinformatics.ai.sri.com/ptools/. Eventually you will receive an email from the Pathway Tools group that will allow you to download the Pathway Tools software (Figure 2).

Table of the available versions of Pathway Tools

Figure 2 - Table of the available versions of Pathway Tools. For most people starting out, the versions circled in red, just containing EcoCyc and MetaCyc, will be sufficient. Additional databases from within the BioCyc umbrella are available for download individually through the internal P2P function of Pathway Tools.

In short, you will obtain an install file like pathway-tools-17.0-macosx-tier1-install.dmg; mounting this disk image reveals a folder containing a file that starts an installation wizard (Figure 3).

The Pathway Tools 16.0 install wizard for OSX.

Figure 3 - The Pathway Tools 16.0 install wizard for OSX. We recommend that the installation defaults are followed, placing the pathway-tools and ptools-local directories in their default location in the user home folder. On typical Mac OSX installations these are ~/pathway-tools and ~/ptools-local, respectively.

For ease of instruction we encourage the use of the default installation locations of Pathway Tools directories in the standard home folder locations: ~/pathway-tools and ~/ptools-local.

  • pathway-tools/ contains the actual Pathway Tools software
  • ptools-local/ contains the PGDBs once they have been built via the MetaPathways pipeline

On OSX systems a window during the Pathway Tools installation will prompt installation of xQuartz. This will download an additional .dmg file to install xQuartz. Allow the installation of xQuartz to finish before continuing with the Pathway Tools installation. On some systems, installation of xQuartz may require a manual restart. Please restart your system prior to running Pathway Tools for the first time.

After installing Pathway Tools you can launch it from the terminal by executing the following from the command line:

$ cd ~
$ ./pathway-tools/pathway-tools

Alternatively, you can launch it from the shortcut icons placed on your desktop during installation.

4. BLAST Databases

The Basic Local Alignment Search Tool (BLAST) is used for a number of pipeline steps; specifically the Open Reading Frame (ORF) functional annotation and the taxonomic identification of sequences through RNA homology. In order to perform this step locally you need a copy of some sequence reference databases to search. We have provided a few databases to get started:

  • MetaCyc (metacyc-v5-2011-10-21) a sub-set of UniProt corresponding with the sequences in the MetaCyc database. This is included with the Pathway Tools software (uniprot-seq-ids.seq), just reformatted into the common .fasta format
  • Cluster of Orthologous Groups of proteins (COG 2013-02-05) A protein database containing taxonomically specific clusters of functional proteins
  • Silva LSU (LSURef_111_tax_silva) LSU rRNA nucleotide sequences for taxonomic identification

However, the choice of database often depends on the specific scientific question you are asking. As such, many databases are freely maintained for download from public FTP servers. However, these databases are large and they grow in size every day; downloads add up to many gigabytes (GB), so a high-speed internet connection will be required. Because many of these databases are hosted on file transfer protocol (FTP) servers, we recommend Cyberduck (http://cyberduck.ch) as a free, simple, and user-friendly FTP client.

By default, MetaPathways is configured to detect databases in the blastDB/ folder. Below we outline some basic instructions for obtaining other popular databases for metagenomic analysis.

Protein Databases

RefSeq

RefSeq is a major protein reference database maintained by the National Center for Biotechnology Information (NCBI) (http://www.ncbi.nlm.nih.gov/RefSeq/). RefSeq provides formatted BLAST databases on its ftp server:

  • connect to the BLAST database ftp server ftp://ftp.ncbi.nlm.nih.gov/blast/db
  • download the set of files named refseq_protein.XX.tar.gz, where XX are numbers
  • extract the .tar.gz archives (usually by simply double-clicking on them)
  • MetaPathways actually requires the original fasta sequences of the RefSeq database to start. Extract the sequences from the RefSeq protein BLAST database using blastdbcmd or the older fastacmd:
$ blastdbcmd -db refseq_protein -dbtype prot -entry all -outfmt %f -out Refseq_2013
$ fastacmd -D 1 -d refseq_protein -o Refseq_2013

Both blastdbcmd and the legacy fastacmd can be obtained from the BLAST Software and Databases website provided by the NCBI.
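
If you prefer to extract the .tar.gz archives from the terminal rather than double-clicking them, a simple loop does the job (a sketch, assuming the archives were downloaded into the current directory):

# extract every downloaded refseq_protein volume in the current directory
$ for f in refseq_protein.*.tar.gz; do tar -xzf "$f"; done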

KEGG

The Kyoto Encyclopedia of Genes and Genomes (KEGG) is available at http://www.genome.jp/kegg/, with FTP access described at http://www.bioinformatics.jp/en/keggftp.html. MetaPathways is configured to handle KEGG annotations and provide summary tables. Unfortunately, KEGG now requires a subscription fee to access its databases. However, once the sequences are obtained they can simply be placed in the blastDB/ folder.

Nucleotide Taxonomic Databases

Silva

Silva is a comprehensive ribosomal database project.

  • Visit the Silva website http://www.arb-silva.de/download/
  • navigate links: Download > Archive > Current > Exports
  • download the current SSU database (SSURef_111_NR_tax_silva.fasta.tgz) and the current LSU database (LSURef_111_tax_silva.fasta.tgz)

Greengenes

Greengenes is a 16S rRNA gene database and workbench compatible with ARB.

Once again, one need only download the databases in .fasta format and place them in the blastDB/ folder. MetaPathways formats them automatically on the fly.
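
For example, a downloaded Silva export can be unpacked and dropped into the database folder from the terminal (a sketch, assuming the archive contains a .fasta file of the same name and that MetaPathways sits at ~/MetaPathways/):

# unpack the Silva LSU export and place the .fasta file in blastDB/
$ tar -xzf LSURef_111_tax_silva.fasta.tgz
$ mv LSURef_111_tax_silva.fasta ~/MetaPathways/blastDB/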

5. Configuring the template_config.txt

The template_config.txt file configures the pipeline to find the resources it needs to run. Paths will have to be set for the PERL_EXECUTABLE, PYTHON_EXECUTABLE, PATHOLOGIC_EXECUTABLE, REFDBS, and METAPATHWAYS_PATH.

Direct the Terminal to the MetaPathways/ folder and source the MetaPathwaysrc file. This compiles the Perl and Python code and reports the locations of Perl, Python, and the MetaPathways directory to use in the configuration file:

$ cd MetaPathways/
$ source MetaPathwaysrc
Checking for Python and Perl:
Python found in /usr/bin/python
Please set variable PYTHON_EXECUTABLE in file template_config.txt as:
PYTHON_EXECUTABLE /usr/bin/python
Perl found in /usr/bin/perl
Please set variable PERL_EXECUTABLE in file template_config.txt as:
PERL_EXECUTABLE /usr/bin/perl
Adding installation folder of MetaPathways to PYTHONPATH
Your MetaPathways is installed in :
Please set variable METAPATHWAYS_PATH in file template_config.txt as:
METAPATHWAYS_PATH /Users/username/MetaPathways

Follow the printed instructions and update the PYTHON_EXECUTABLE, PERL_EXECUTABLE, METAPATHWAYS_PATH, PATHOLOGIC_EXECUTABLE, and SYSTEM keywords in template_config.txt (Figure 4). The METAPATHWAYS_PATH and PATHOLOGIC_EXECUTABLE represent the absolute paths to MetaPathways and Pathway Tools, respectively.

An example of how to edit the template_config.txt file for MetaPathways setup.

Figure 4 - An example of how to edit the template_config.txt file for MetaPathways setup. In most cases, one only needs to edit the PYTHON_EXECUTABLE, PERL_EXECUTABLE, METAPATHWAYS_PATH, and PATHOLOGIC_EXECUTABLE entries, and then replace the SYSTEM keyword with either mac, linux, or win depending on the operating system. These fields are highlighted in the red boxes on the left, and potential changes in blue boxes on the right, during an example setup for a Mac OSX operating system.
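
Put together, the relevant lines of template_config.txt on a typical Mac OSX setup would look something like the sketch below. The paths shown are illustrative assumptions based on the default install locations used throughout this guide; substitute the values reported by MetaPathwaysrc and your own Pathway Tools and database locations:

PYTHON_EXECUTABLE /usr/bin/python
PERL_EXECUTABLE /usr/bin/perl
METAPATHWAYS_PATH /Users/username/MetaPathways
PATHOLOGIC_EXECUTABLE /Users/username/pathway-tools/pathway-tools
REFDBS /Users/username/MetaPathways/blastDB
SYSTEM mac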

6.Configuring the template_param.txt

The template_param.txt file defines the parameter settings of all the analytical steps in a MetaPathways run. It needs to be updated with the exact names of your protein and nucleotide databases in the blastDB/ folder (Figure 5).

The template_param.txt file.

Figure 5 - The template_param.txt file. The exact names of the BLAST databases need to be listed in the above highlighted lines. These must be the exact names of the database sequence files in the blastDB/ folder.
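
For example, if the bundled databases from Section 4 were placed in blastDB/, the relevant lines would name them exactly as the files appear in that folder (the names below are illustrative; use the actual file names you downloaded):

annotation:dbs metacyc-v5-2011-10-21,COG_2013-02-05
rRNA:refdbs LSURef_111_tax_silva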

7. Connecting with the Grid (optional)

MetaPathways has the capability to externalize computationally heavy tasks like protein BLAST searches to supercomputing facilities, provided they use the Sun Grid Engine. This is an optional, but highly recommended, step. However, it requires having ssh access and sufficient user permissions to set up password-less access on a compute server. This might be a good time to check with your local system administrator and ask if this kind of setup is permissible. We've outlined the basic steps of this process:

  1. Test to see if you can connect to your account via ssh:
$ ssh username@server.address.com
  2. You should be asked for your password.
  3. Check to see whether there is a .ssh/ folder in your remote home directory:
$ ls ~/.ssh/
authorized_keys known_hosts
  4. If not, you should create it:
$ mkdir ~/.ssh/
  5. Return to your local computer (control + d).
  6. Navigate to the local ~/.ssh/ directory:
$ cd ~/.ssh/
  7. Run ssh-keygen to create an RSA public and private key pair:
$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/username/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in id_rsa.
Your public key has been saved in id_rsa.pub.
  8. Copy your public key to your grid .ssh/ folder with scp:
$ scp id_rsa.pub username@server.address.com:~/.ssh/
  9. Log back in to your external server account using ssh:
$ ssh username@server.address.com
  10. Navigate to the ~/.ssh/ directory again:
$ cd ~/.ssh/
  11. Append the public key to a file called authorized_keys:
$ cat id_rsa.pub >> authorized_keys
  12. Change the permissions of the authorized_keys file and the .ssh/ directory so that only your username can read/write them:
$ chmod 600 ~/.ssh/authorized_keys
$ chmod 700 ~/.ssh/
  13. Log out to your local computer by pressing (control + d).
  14. Try to log in using ssh again; you should not need to type in your password this time:
$ ssh username@server.address.com

If the above procedure did not work then you likely have a more complicated setup on your hands. At this point it would be good to speak with a local system administrator to help you set up keyless login. If this is not possible, a useful Google search term would be "ssh keyless login".
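
As a convenience, many systems ship an ssh-copy-id utility that automates the key-copying and permission steps above (an optional shortcut; it may not be installed on older OSX releases):

# copy your local public key into the remote authorized_keys file
$ ssh-copy-id username@server.address.com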

Congratulations! You have completed what is in some cases a convoluted and unintuitive setup, and with some luck the MetaPathways pipeline is ready for action. Now that you have come this far you will likely want to use it. You can now proceed to obtain some .fasta files full of sample sequences and let the analysis commence. Its use is simple if you are familiar with the Unix command line, and we have provided some basic examples and use cases below.

8. MetaPathways Use and Setup

Running MetaPathways

  1. Setting Parameters - Preparing for your MetaPathways run

Before we start our first run of the pipeline we will again take a look at the parameters contained in template_param.txt. This file gives all the instructions and settings to be used for each step of the pipeline. Many of the default settings found in template_param.txt are general and should be adequate for many metagenomic analyses. However, you will often need to change them to reflect the questions and goals of your specific dataset.

Settings in this file are in the form of parameter/value separated by spaces; multiple values are separated by commas:

parameter value
parameter value1,value2,...
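
For example, a single-valued setting and a multi-valued setting might look like the following (the values shown are only illustrative; use whatever thresholds and database names fit your analysis):

quality_control:min_length 180
annotation:dbs metacyc-v5-2011-10-21,COG_2013-02-05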

INPUT:format - specifies the type of input file. Possible values include: fasta, gbk-annotated, and gbk-unannotated. Annotated and unannotated refer to whether existing gene annotations are contained within the GenBank (.gbk) input files.

QC parameters

quality_control:min_length - specifies the minimum number of nucleotides a sequence must have during the QC phase

quality_control:delete_replicates - removes duplicate sequences from input

ORF prediction parameters

orf_prediction:algorithm - specifies the ORF prediction algorithm that is used. Currently only prodigal is available

orf_prediction:min_length - specifies the minimum number of amino acids in a predicted ORF

Annotation parameters

annotation:algorithm - specifies which homology search algorithm to use for ORF annotation. Current options are blast and last; LAST is a more efficient implementation of the seed-and-extend approximation algorithm

annotation:dbs - specifies which protein databases and in what order they will be used for annotation. Database names are separated by commas, and the names must exactly match the naming convention in the database folder blastDB/

annotation:min_bsr - specifies the minimum blast-score ratio threshold. Only hits greater than the threshold will be kept.

annotation:max_evalue - specifies the maximum e-value threshold. Only e-values smaller (more statistically significant) than this threshold will be kept.

annotation:min_score - specifies the minimum bit-score threshold. Only hits greater than this score will be kept.

annotation:min_length - specifies the minimum length threshold. Only annotations with a greater length will be kept.

annotation:max_length - specifies the maximum number of annotations to be kept for each search. Usually the top-5 or top-10 homology hits are sufficient for most purposes.

RNA parameters

Analogous to the protein homology search settings above:

rRNA:refdbs - specifies the databases to be searched against. These database names must match the names of the nucleotide BLAST databases found in the blastDB/ folder specified in the pipeline configuration file

rRNA:max_evalue - sets the 16S rRNA maximum expect value (e-value) threshold. Only hits with e-values less than (more statistically significant than) this threshold will be kept

rRNA:min_identity - sets the minimum percent identity threshold. Only annotations with a greater percent identity with the query sequence will be kept

rRNA:min_bitscore - only annotations with bit-scores greater than this minimum threshold will be kept.

Grid settings

Settings associated with running protein homology searches on the grid:

grid_engine:batch_size - specifies the number of sequences to be included in each grid job. This should be set to respect the memory and CPU time requirements of the grid you are using

grid_engine:max_concurrent_batches - sets the maximum number of jobs to be submitted to a grid at one time. MetaPathways will maintain a job queue of this size waiting to be scheduled

grid_engine:walltime - sets the maximum amount of time an individual job can take. Setting this value too high affects your scheduling by the Sun Grid Engine scheduler; setting it too low lets your job be scheduled quickly, but it may be stopped before completion.

grid_engine:RAM - the maximum RAM usage for the job. This can also affect the scheduling of your jobs and becomes an issue for larger databases such as RefSeq

grid_engine:user - the username used to access the grid via ssh

grid_engine:server - the address of the compute grid accessed via ssh

Pathway Tools parameters

ptools_settings:taxonomic_pruning - specifies if the ePGDB in Pathway Tools should be built with taxonomic pruning enabled (yes) or disabled (no). Disabled is recommended for metagenomic samples. Single-cell analyses may want to consider enabling it.

  2. Pipeline Execution Flags - yes, skip, stop, redo:

For each step of the pipeline one must specify one of the following actions:

  • yes - perform the operation with the above settings
  • skip - do not perform this operation (note that this could cause later dependent steps in the pipeline to fail)
  • stop - stop the pipeline run after completing the previous step
  • redo - recompute a specific step of the pipeline (after incomplete execution or error may have corrupted the output)
  • grid - perform this step on the grid (currently only available for the BLAST/LAST annotation step)
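
Each step of the pipeline has its own flag line in template_param.txt; for instance, a run that externalizes the homology search to the grid and skips ePGDB construction might contain lines roughly like the following (the step names shown are purely illustrative; match them against the step names actually listed in your own template_param.txt):

metapaths_steps:BLAST_REFDB grid
metapaths_steps:PATHOLOGIC skip
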
  3. Starting a Run - The MetaPathways pipeline is run using the MetaPathways.py script from the command line:
$ ./MetaPathways.py -i [input file/folder] -o [output directory] -c [config file] -p [parameter file] -r [overwrite/overlay]

e.g.

 $ ./MetaPathways.py -i testdata/ -o ~/MetaPathways/output -c ~/MetaPathways/template_config.txt -p ~/MetaPathways/template_param.txt -r overlay -v

where,

  • -i specifies the input file directory or a specific .fasta file
  • -o specifies the output directory
  • -c the configuration file to be used for this run
  • -p the parameter file to be used for this run
  • -r the run-style to be used for this run:
  • overlay - checks for an existing run in place and uses existing files as it finds them, except where a pipeline step is set to redo
  • overwrite - overwrites existing output
  • -v verbose output; displays the exact commands being run for each step

The script testMetaPathways.sh will do a simple run on sequences in the testdata/ folder:

$ ./testMetaPathways.sh