BioKEEN

BioKEEN (Biological KnowlEdge EmbeddiNgs) is a package for training and evaluating biological knowledge graph embeddings built on PyKEEN.

Because BioKEEN uses PyKEEN as its underlying software package, implementations of 10 knowledge graph embedding models are currently available. Furthermore, BioKEEN can be run in training mode, in which users provide their own hyper-parameter values, or in hyper-parameter optimization mode, which finds suitable hyper-parameter values from a set of user-defined values.
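As a rough illustration of the hyper-parameter optimization mode, the grid enumeration below sketches how sets of user-defined values can be expanded into candidate configurations. The hyper-parameter names are illustrative assumptions, and BioKEEN's actual optimizer may sample the space rather than enumerate it exhaustively:

```python
from itertools import product

# Hypothetical user-defined sets of hyper-parameter values; the key names
# here are illustrative, not BioKEEN's actual configuration schema.
hpo_space = {
    "embedding_dim": [50, 100, 200],
    "learning_rate": [0.01, 0.001],
}

# Expand the sets into all candidate combinations (a plain grid search;
# BioKEEN's optimizer may explore the space differently).
keys = list(hpo_space)
candidates = [dict(zip(keys, values)) for values in product(*hpo_space.values())]

print(len(candidates))  # 3 embedding sizes x 2 learning rates = 6 candidates
```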

Through the integration of the Bio2BEL [2] software, numerous biomedical databases are directly accessible within BioKEEN.

BioKEEN can also be used without programming experience through its interactive command line interface, which can be started with the command ``biokeen`` from a terminal.

Share Your Experimental Artifacts

You can share your trained KGE models along with the other experimental artifacts through the KEEN-Model-Zoo.


A brief tutorial on how to get started with BioKEEN is available here.

Further tutorials can be found in the notebooks directory and in our documentation.


If you find BioKEEN useful in your work, please consider citing:

[1] Ali, M., et al. (2019). BioKEEN: A library for learning and evaluating biological knowledge graph embeddings. Bioinformatics, btz117.

Note: ComPath has been updated; for this reason, we have uploaded the dataset version that we used for our experiments: dataset

Installation

To install biokeen, Python 3.6+ is required, and we recommend installing it on Linux or Mac OS systems. Please run the following command:

$ pip install git+

Alternatively, it can be installed from source for development with:

$ git clone biokeen
$ cd biokeen
$ pip install -e .


Contributions, whether filing an issue, making a pull request, or forking, are appreciated. See CONTRIBUTING.rst for more information on getting involved.

CLI Usage

To show BioKEEN's available commands, please run the following command:


Starting the Training/HPO Pipeline - Set Up Your Experiment within 60 seconds

To configure an experiment via the CLI, please run the following command:

biokeen start

To start BioKEEN with an existing configuration file, please run the following command:

biokeen start -f /path/to/config.json
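For orientation, such a configuration file is plain JSON. The keys below are purely illustrative placeholders, not BioKEEN's actual schema, which is defined by the underlying PyKEEN package and documented there:

```json
{
    "training_set_path": "/path/to/triples.tsv",
    "kg_embedding_model_name": "TransE",
    "embedding_dim": 50,
    "learning_rate": 0.01,
    "num_epochs": 100
}
```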

Starting the Prediction Pipeline

To make predictions based on a trained model, please run the following command:

biokeen predict -m /path/to/model/directory -d /path/to/data/directory

where the value for the argument -m is the directory containing the model. In more detail, the following files must be contained in the directory:

  • configuration.json
  • entities_to_embeddings.json
  • relations_to_embeddings.json
  • trained_model.pkl

These files are created automatically after the model is trained (and evaluated) and are exported to your specified output directory.
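As a sketch of the expected directory layout, the snippet below writes placeholder versions of the four files and checks that they are all present. The file contents here are dummies; the real files hold the full training configuration and the learned embeddings:

```python
import json
import pickle
import tempfile
from pathlib import Path

# Placeholder model directory; a real one is produced by BioKEEN training.
model_dir = Path(tempfile.mkdtemp())

# Dummy stand-ins for the four required files (contents are illustrative only).
(model_dir / "configuration.json").write_text(json.dumps({"note": "training configuration"}))
(model_dir / "entities_to_embeddings.json").write_text(json.dumps({"entity_a": [0.1, 0.2]}))
(model_dir / "relations_to_embeddings.json").write_text(json.dumps({"relation_x": [0.3, 0.4]}))
with (model_dir / "trained_model.pkl").open("wb") as f:
    pickle.dump({"note": "serialized model"}, f)

required = {
    "configuration.json",
    "entities_to_embeddings.json",
    "relations_to_embeddings.json",
    "trained_model.pkl",
}
assert {p.name for p in model_dir.iterdir()} == required
```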

The value for the argument -d is the directory containing the data to which inference should be applied, and it needs to contain the following files:

  • entities.tsv
  • relations.tsv

where entities.tsv contains all entities of interest and relations.tsv all relations. Both files should contain a single column listing the entities/relations. Based on these files, PyKEEN creates all triple permutations, computes predictions for them, and saves them in the data directory as predictions.tsv.
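To make the triple permutations concrete, here is a minimal sketch of the enumeration with toy stand-ins for the two single-column files. Whether self-referencing triples (same head and tail) are filtered in practice is not specified here, so none are filtered in this sketch:

```python
from itertools import product

# Toy stand-ins for the single-column entities.tsv and relations.tsv files.
entities = ["HGNC:APP", "HGNC:MAPT"]
relations = ["increases", "decreases"]

# Every (head, relation, tail) combination, as the prediction pipeline
# would enumerate them: |entities|^2 * |relations| triples in total.
triples = list(product(entities, relations, entities))

print(len(triples))  # 2 * 2 * 2 = 8 candidate triples
```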

Summarize the Results of All Experiments

To summarize the results of all experiments, please run the following command:

biokeen summarize -d /path/to/experiments/directory -o /path/to/output/file.csv

Getting Bio2BEL Data

To download and structure the data from a Bio2BEL repository, run:

biokeen data get <name>

where <name> can be the name of any Bio2BEL repository, such as hippie or mirtarbase.


[2] Hoyt, C., et al. (2019). Integration of Structured Biological Data Sources using Biological Expression Language. bioRxiv, 631812.