REL: Radboud Entity Linker


REL is a modular Entity Linking package that is provided both as a Python package and as a web API. REL has various meanings: one might first notice that it stands for relation, a fitting name for the problems this package tackles. Additionally, in Dutch a 'rel' means a disturbance of the public order, which is exactly what we aim to achieve with the release of this package.

REL utilizes English Wikipedia as a knowledge base and can be used for the following tasks:

  • Entity linking (EL): Given a text, the system outputs a list of mention-entity pairs, where each mention is an n-gram from the text and each entity is an entity in the knowledge base.
  • Entity Disambiguation (ED): Given a text and a list of mentions, the system assigns an entity (or NIL) to each mention.
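
To make the definition of a mention concrete, the sketch below enumerates the word n-grams of a short text. Note that REL's actual mention detection is NER-based (see the variants below), so this enumeration is purely illustrative; the `ngrams` helper is our own, not part of the REL API.

```python
# Enumerate candidate word n-grams (up to length 3). REL's real mention
# detection uses a FLAIR NER model; this only illustrates the definition.
def ngrams(text, max_n=3):
    words = text.split()
    return [" ".join(words[i:i + n])
            for n in range(1, max_n + 1)
            for i in range(len(words) - n + 1)]

candidates = ngrams("Charles Bukowski was a German-American poet")
print("Charles Bukowski" in candidates)  # True
```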

REL variants

REL comes in two variants for identifying entity mentions:

  • Case-sensitive: This setup is suitable for properly written texts (e.g., news articles) and is the default setup of the REL package. In this setup, we use the ner-fast FLAIR model, which is case-sensitive. The results reported in the REL paper are based on this model.

  • Case-insensitive: This setup is well suited for noisy texts (e.g., queries), where entity mentions are often lowercased. In this setup, we use the ner-fast-with-lowercase model, which is the ner-fast FLAIR architecture trained on randomly cased and uncased text. This variant is the default setup of our API.

Below is a comparison of these two models on the CoNLL-2003 NER dataset.

| Model                   | CoNLL-2003 test | F1    |
|-------------------------|-----------------|-------|
| ner-fast                | original        | 92.78 |
| ner-fast                | lower-cased     | 58.42 |
| ner-fast                | random          | 70.64 |
| ner-fast-with-lowercase | original        | 91.53 |
| ner-fast-with-lowercase | lower-cased     | 89.73 |
| ner-fast-with-lowercase | random          | 89.66 |

See Notes on using custom models for further information on switching between these variants.

Calling our API

Users may access our API by using the example script below. For EL, the spans field needs to be set to an empty list. For ED, however, the spans field should consist of a list of tuples, where each tuple refers to the start position and length of a mention.

```python
import requests

API_URL = ""  # set to the REL API endpoint
text_doc = "If you're going to try, go all the way - Charles Bukowski"

# Example EL.
el_result = requests.post(API_URL, json={
    "text": text_doc,
    "spans": []
}).json()

# Example ED.
ed_result = requests.post(API_URL, json={
    "text": text_doc,
    "spans": [(41, 16)]
}).json()
```
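
As a sanity check on the (start, length) convention, the span from the ED example can be verified by slicing the text, and spans for other mentions can be derived the same way with `str.find` (the snippet below is illustrative and not part of the REL API):

```python
# Verify the (41, 16) span from the ED example by slicing the document text.
text_doc = "If you're going to try, go all the way - Charles Bukowski"
start, length = 41, 16
assert text_doc[start:start + length] == "Charles Bukowski"

# Spans for other mentions can be built the same way:
mention = "Charles Bukowski"
span = (text_doc.find(mention), len(mention))
print(span)  # (41, 16)
```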

Setup package

This section describes how to deploy REL on a local machine and set up the API. If you want to do anything more than simply run our API locally, you can skip the Docker steps and continue with installation from source.

Option 1: Installation using Docker

First, download the necessary data: you need the generic files and a Wikipedia version (2014 or 2019) (see Download). Extract them anywhere; we will bind the directories to the Docker container as volumes.

```shell
./scripts/ ./data generic wiki_2019
```

Prebuilt images

To use our prebuilt default image, run:

```shell
docker pull informagi/rel
```

To run the API locally:

```shell
# Map container port 5555 to local port 5555, and use Wikipedia 2019
# Also map the generic and wiki_2019 folders to directories in Docker container
docker run \
    -p 5555:5555 \
    -v $PWD/data/:/workspace/data \
    --rm -it informagi/rel \
    python -m REL.server --bind --port 5555 /workspace/data wiki_2019
```

Now you can make requests to http://localhost:5555 (or another port if you use a different mapping) in the format described in the example above.

Build your own Docker image

To build the Docker image yourself, run:

```shell
# Clone the repository
git clone && cd REL
# Build the Docker image
docker build . -t informagi/rel
```

To run the API locally, use the same commands as mentioned in the previous section.

Option 2: Installation from source code

Run the following command in a terminal to install REL:

```shell
pip install git+
```

You will also need to manually download the files described in the next section.

Download

The files used for this project fall into three categories. The first is a generic set of documents and embeddings used throughout the project; this folder includes the GloVe embeddings and the unprocessed datasets used to train the ED model. The second and third categories contain Wikipedia-corpus-related files, which in our case originate from either a 2014 or a 2019 corpus. Alternatively, users may use their own corpus, for which we refer to the tutorials.

Tutorials

To promote usage of this package, we developed various tutorials. If you simply want to use our API, we refer to the section above. The first two tutorials are for users who simply want to use our package for EL/ED with the data files we provide. The remaining tutorials are optional, for users who wish to e.g. train their own embeddings. If you feel a tutorial is missing or unclear, please create an issue; it is much appreciated :)!

  1. How to get started (project folder and structure).
  2. End-to-End Entity Linking.
  3. Evaluate on GERBIL.
  4. Deploy REL for a new Wikipedia corpus:
    1. Extracting a new Wikipedia corpus and creating a p(e|m) index.
    2. Training your own Embeddings.
    3. Generating training, validation and test files.
    4. Training your own Entity Disambiguation model.
  5. Reproducing our results
  6. REL as systemd service
  7. Notes on using custom models

Efficiency of REL

We measured the efficiency of REL on a per-document basis. We ran our API on 50 documents from AIDA-B containing more than 200 words each, averaging 323 (±105) words and 42 (±19) mentions per document. The results are shown in the table below.

| Model       | Time MD     | Time ED     |
|-------------|-------------|-------------|
| With GPU    | 0.44 ± 0.22 | 0.24 ± 0.08 |
| Without GPU | 2.41 ± 1.24 | 0.18 ± 0.09 |
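
For readers who want a feel for the measurement itself, a minimal per-document timing harness looks like the following. Here `annotate` is a trivial stand-in for the REL mention-detection/disambiguation calls, not part of the package API:

```python
import statistics
import time

def annotate(doc):
    # Stand-in for a REL MD/ED call on a single document.
    return [w for w in doc.split() if w.istitle()]

docs = ["Charles Bukowski was born in Andernach ."] * 5
times = []
for doc in docs:
    t0 = time.perf_counter()
    annotate(doc)
    times.append(time.perf_counter() - t0)

print(f"{statistics.mean(times):.6f} s/doc (n={len(times)})")
```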

As our package has changed over time, we refer to one of our earlier commits for reproducing the results in the table above. To reproduce them, perform the following steps:

  1. Start the server. We added checkpoints in our server calls to measure the time taken per call.
  2. Once the server is started, run the efficiency test. Do not forget to update the base_url to specify where the data is located in the filesystem. This directory refers to where all project-related data is stored (see our tutorial on how to get started).
  3. Finally, process the efficiency results.

Cite

If you are using REL, please cite the following paper:

```bibtex
@inproceedings{vanhulst2020rel,
 author =    {van Hulst, Johannes M. and Hasibi, Faegheh and Dercksen, Koen and Balog, Krisztian and de Vries, Arjen P.},
 title =     {REL: An Entity Linker Standing on the Shoulders of Giants},
 booktitle = {Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval},
 series =    {SIGIR '20},
 year =      {2020},
 publisher = {ACM}
}
```

Contact

If you find any bugs or experience difficulties when using REL, please create an issue on this GitHub page. If you have any specific questions with respect to our research with REL, please email Mick van Hulst.

Acknowledgements

Our thanks go out to the authors that open-sourced their code, enabling us to create this package that can hopefully be of service to many.

