
Medical Concept Annotation Tool

MedCAT can be used to extract information from Electronic Health Records (EHRs) and link it to biomedical ontologies like SNOMED-CT and UMLS. Paper on arXiv.



A demo application is available at MedCAT. This was trained on MIMIC-III and all of SNOMED-CT.


A guide on how to use MedCAT is available in the tutorial folder. Read more about MedCAT on Towards Data Science.

Related Projects

  • MedCATtrainer - an interface for building, improving and customising a given Named Entity Recognition and Linking (NER+L) model (MedCAT) for biomedical domain text.
  • MedCATservice - implements the MedCAT NLP application as a service behind a REST API.
  • iCAT - A docker container for CogStack/MedCAT/HuggingFace development in isolated environments.

Install using PIP (Requires Python 3.6.1+)

  1. Upgrade pip: pip install --upgrade pip
  2. Install MedCAT:
  • For macOS/Linux: pip install --upgrade medcat
  • For Windows (see the PyTorch documentation): pip install --upgrade medcat -f
  3. Get the scispacy models:

pip install

pip install

  4. Download the Vocabulary and CDB from the Models section below.

  5. Quickstart:

from medcat.vocab import Vocab
from medcat.cdb import CDB
from medcat.cat import CAT

# Load the vocab model you downloaded
vocab = Vocab.load('<path to the vocab file>')
# Load the cdb model you downloaded
cdb = CDB.load('<path to the cdb file>')

# Create cat - each cdb comes with a config that was used
# to train it. You can change that config in any way you want, before or after creating cat.
cat = CAT(cdb=cdb, config=cdb.config, vocab=vocab)

# Test it
text = "My simple document with kidney failure"
doc_spacy = cat(text)
# Print detected entities
print(doc_spacy.ents)

# Or to get an array of entities - this will return much more information
# and is usually easier to use unless you know a lot about spaCy
doc = cat.get_entities(text)

# To train on one example
_ = cat(text, do_train=True)

# To train on an iterator over documents
data_iterator = <your iterator>
cat.train(data_iterator)

# Once done, save the new CDB
cat.cdb.save('<save path>')
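The entities returned by get_entities can be post-processed with plain Python. A minimal sketch of summarising them follows; the dict structure (an "entities" mapping with "pretty_name", "cui" and "acc" fields) is an assumption about the output schema, so verify it against the output of your installed MedCAT version.

```python
# Example get_entities-style output (assumed structure, values are illustrative)
doc = {
    "entities": {
        0: {"pretty_name": "Kidney Failure", "cui": "C0035078",
            "start": 24, "end": 38, "acc": 0.99},
    }
}

def summarise_entities(doc):
    """Collect (name, cui, confidence) triples from a get_entities-style dict."""
    return [(e["pretty_name"], e["cui"], e["acc"])
            for e in doc["entities"].values()]

for name, cui, acc in summarise_entities(doc):
    print(f"{name} ({cui}): {acc:.2f}")
```

Working with the plain dict like this avoids needing any spaCy knowledge downstream.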

MetaCAT example

from medcat.meta_cat import MetaCAT
# Assume we have a CDB and Vocab object from before
# Download the mc_status model from the models section below and unzip it

mc_status = MetaCAT.load("<path to the unzipped mc_status directory>")
cat = CAT(cdb=cdb, config=cdb.config, vocab=vocab, meta_cats=[mc_status])

# Now annotate a document, it will have the meta annotation 'status'
doc = cat.get_entities(text)
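Once a MetaCAT is attached, each detected entity carries its meta-annotations. A small sketch of reading the 'Status' value is below; the field names ("meta_anns", "value", "confidence") are assumptions about the MetaCAT output schema, so check them against your model's actual output.

```python
# Example entity with a meta-annotation (assumed structure, illustrative values)
entity = {
    "pretty_name": "Kidney Failure",
    "meta_anns": {
        "Status": {"value": "Affirmed", "confidence": 0.97, "name": "Status"},
    },
}

def get_status(entity, default="Other"):
    """Return the Status meta-annotation value, or a default if it is absent."""
    return entity.get("meta_anns", {}).get("Status", {}).get("value", default)

print(get_status(entity))  # prints 'Affirmed'
```

Defaulting to "Other" when the annotation is missing keeps downstream filtering simple.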


A basic trained model is made public for the vocabulary and CDB. It is trained for the ~ 35K concepts available in MedMentions.

Vocabulary Download - Built from MedMentions

CDB Download - Built from MedMentions

MetaCAT Status Download - Built from a sample of MIMIC-III; detects whether an annotation is Affirmed (Positive) or Other (Negated or Hypothetical)

(Note: This was compiled from MedMentions and does not have any data from NLM, as that data is not publicly available.)


If you have access to UMLS or SNOMED-CT and can provide some proof (a screenshot of the UMLS profile page is perfect, feel free to redact all information you do not want to share), contact us - we are happy to share the pre-built CDB and Vocab for those databases.


  • Switch to spaCy version 3+
  • Enable automatic download of pre-built UMLS/SNOMED databases
  • Enable spaCy serialization of documents (problem with doc._.ents)
  • Update webapp to v1 and enable UMLS and SNOMED
  • Fix logging, make sure the config options are respected
  • Relation extraction
  • Implement replace_center in the call function for meta_cat
  • Fix parallelization for MedCAT alone + Try to solve how to run this when we have MetaCATs also
  • How to continue training after unsupervised training (without resetting annealing) - how not to overfit if we have very few annotations, but also not underfit (not learn anything)
  • Make MetaCAT config part of


Entity extraction was trained on MedMentions. In total it has ~ 35K entities from UMLS.

The vocabulary was compiled from Wiktionary. In total ~ 800K unique words.
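To illustrate what compiling such a vocabulary involves, here is a toy sketch that counts unique lowercased words across a corpus. This is only an illustration of the idea, not MedCAT's actual Vocab format or build process.

```python
from collections import Counter
import re

def build_vocab(texts):
    """Count lowercased word occurrences across a corpus (toy illustration)."""
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"[a-z]+", text.lower()))
    return counts

# Tiny corpus for demonstration
vocab = build_vocab(["Kidney failure", "failure to thrive"])
```

The real Vocab also stores word vectors alongside the counts.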

Powered By

A big thank you goes to spaCy and Hugging Face - who made life a million times easier.


Citation

@article{kraljevic2021medcat,
      title={Multi-domain Clinical Natural Language Processing with MedCAT: the Medical Concept Annotation Toolkit},
      author={Zeljko Kraljevic and Thomas Searle and Anthony Shek and Lukasz Roguski and Kawsar Noor and Daniel Bean and Aurelie Mascio and Leilei Zhu and Amos A Folarin and Angus Roberts and Rebecca Bendayan and Mark P Richardson and Robert Stewart and Anoop D Shah and Wai Keong Wong and Zina Ibrahim and James T Teo and Richard JB Dobson},
}