
Precision-Recall Assessment of Protein Function Prediction


The Critical Assessment of Function Annotation (CAFA) is a community-wide challenge designed to provide a large-scale assessment of computational methods dedicated to predicting protein function.

More information can be found in the CAFA2 paper (Jiang et al., 2016).

This toolset provides an assessment for CAFA submissions based on precision and recall.
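As background for how such an assessment works, the sketch below computes protein-centric precision and recall at a single score threshold, in the style used by CAFA evaluations. It is an illustrative simplification, not this tool's implementation: among other things it omits propagating each GO term to its ontology ancestors, and all function and variable names are ours.

```python
def precision_recall(predictions, groundtruth, tau):
    """Protein-centric precision/recall at score threshold tau.

    predictions: {protein: {go_term: score}}
    groundtruth: {protein: set of true go_terms} (assumed non-empty per protein)

    Precision averages over proteins with at least one prediction >= tau;
    recall averages over all benchmark proteins (the CAFA convention).
    """
    prec_sum, n_predicted, rec_sum = 0.0, 0, 0.0
    for protein, truth in groundtruth.items():
        # Keep only the terms predicted at or above the threshold.
        predicted = {t for t, s in predictions.get(protein, {}).items() if s >= tau}
        if predicted:
            prec_sum += len(predicted & truth) / len(predicted)
            n_predicted += 1
        rec_sum += len(predicted & truth) / len(truth)
    precision = prec_sum / n_predicted if n_predicted else 0.0
    recall = rec_sum / len(groundtruth)
    return precision, recall
```

Sweeping tau from 0 to 1 and collecting the resulting (recall, precision) pairs traces the precision-recall curve that the plot function draws.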

For bug reports, comments, or questions, please email nzhou[AT]


Dependencies

This tool requires Biopython, PyYAML, matplotlib, and seaborn. On Debian/Ubuntu:

$ sudo apt install python-biopython python-yaml python-matplotlib python-seaborn

Main Functions

We provide two main functions to assist in evaluating GO-term predictions within the scope of CAFA: the main assessment function and the plot function.

Assessment function

    • The only input needed is the configuration file config.yaml, where the following four parameters are specified in the first section, assess.
    • First parameter file: the prediction file, formatted according to the CAFA3 submission format.
    • Second parameter obo: path to the Gene Ontology obo file. The latest version can be downloaded from the Gene Ontology website. Note that the obo file used here should not be older than the one used to make the predictions.
    • Third parameter benchmark: path of the benchmark folder. A specific structure is required for this folder, including two sub-directories: groundtruth and lists. Please refer to the auxiliary functions for the creation of this folder, as well as the general creation of benchmarks. An example benchmark folder is given in this repository at ./precrec/benchmark.
    • Fourth parameter results: folder where results are saved. A pr_rc folder will be created within the results folder.
    • Note that only the first section, assess, of the configuration file is used by this function; the rest of the configuration file can be ignored.
Plot function

    • The only input needed is the configuration file config.yaml, where the following parameters are specified in the second section, plot.
    • First parameter results: the results folder from the assessment function.
    • Second parameter title: title of the plot. Optional.
    • Third parameter smooth: whether the precision-recall curves should be smoothed. Input 'Y' or 'N'.
    • Fourth parameter(s) fileN: name(s) of the result file(s) to be plotted. Up to 12 files can be added; these results will be drawn on the same plot.
    • Example: if the prediction file is ZZZ_1_9606.txt, the result file in the results folder will be ZZZ_1_9606_results.txt. Enter only ZZZ_1_9606 in the above parameter for plotting.
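Putting the two sections together, the parameter descriptions above imply a config.yaml along these lines. The exact keys match the parameter names listed above, but the paths and file names are illustrative assumptions of ours, so check the example configuration shipped with the repository:

```yaml
assess:
  file: ./predictions/ZZZ_1_9606.txt   # prediction file in CAFA3 format
  obo: ./go.obo                        # Gene Ontology obo file
  benchmark: ./precrec/benchmark       # folder containing groundtruth/ and lists/
  results: ./results                   # a pr_rc folder is created inside

plot:
  results: ./results                   # same folder the assessment wrote to
  title: CAFA3 precision-recall        # optional plot title
  smooth: N                            # 'Y' or 'N'
  file1: ZZZ_1_9606                    # result file name(s), up to file12
```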

Auxiliary Functions

CAFA3 released its protein targets in September 2016. Each protein target has a unique CAFA3 ID. To run the assessment function above, each protein should be represented by its CAFA3 ID. However, the benchmark proteins generated by the benchmark creation tool are identified by UniProt accessions. We therefore provide functions to convert between UniProt accessions and CAFA3 IDs, as well as a function that converts benchmark files generated by the benchmark creation tool into a benchmark folder that can be fed into this program.


    • Refer to python -h for the syntax of using this function by itself.
    • If using our benchmark creation tool, then the file is a good example of how to generate a benchmark folder from the raw benchmarks.
    • Input your own folder names and gaf file names in the blanks left in ./ID_conversion/

    • Two functions are provided in this Python script: one converts UniProt accessions to CAFA3 IDs, and the other converts CAFA3 IDs back to UniProt accessions.
    • First function: uniprotac_to_cafaid(taxon, uniprotacs).
    • Second function: cafaid_to_uniprot(taxon, cafaids).
    • Refer to the comments in the script ./ID_conversion/ and the third example below for usage.
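As a rough illustration of what such a conversion involves, the sketch below maps IDs through a two-column lookup table. The mapping-file format, column order, and helper names are assumptions of ours for illustration, not the actual script's API (which takes a taxon argument and locates the mapping file itself):

```python
def load_mapping(path):
    """Parse a two-column tab-separated file (CAFA3 ID, UniProt accession)
    into a {cafaid: uniprot_accession} dict. The file layout is assumed."""
    mapping = {}
    with open(path) as handle:
        for line in handle:
            cafaid, accession = line.strip().split("\t")
            mapping[cafaid] = accession
    return mapping

def uniprotac_to_cafaid(mapping, uniprotacs):
    """Convert UniProt accessions to CAFA3 IDs; None for unmapped entries."""
    reverse = {ac: cid for cid, ac in mapping.items()}
    return [reverse.get(ac) for ac in uniprotacs]

def cafaid_to_uniprotac(mapping, cafaids):
    """Convert CAFA3 IDs back to UniProt accessions; None for unmapped entries."""
    return [mapping.get(cid) for cid in cafaids]
```

Proteins with no entry in the mapping come back as None, so callers can detect targets that were not part of the CAFA3 release.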


Examples

  • ./ config.yaml
  • ./ config.yaml
  • ./ID_conversion/ ./ID_conversion/example_uniprot_accession_8355.txt 8355 ./ID_conversion/example_output.txt
  • ./


Reference

Jiang, Yuxiang, et al. "An expanded evaluation of protein function prediction methods shows an improvement in accuracy." Genome Biology 17.1 (2016): 184.