DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations


This repository contains the corresponding code for our paper, DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations. Results on SentEval are presented below (as averaged scores on the downstream and probing task test sets), along with existing state-of-the-art methods.

| Model | Requires labelled data? | Parameters | Embed. dim. | Downstream (-SNLI) | Probing | Δ |
| --- | --- | --- | --- | --- | --- | --- |
| InferSent V2 | Yes | 38M | 4096 | 76.00 | 72.58 | -3.10 |
| Universal Sentence Encoder | Yes | 147M | 512 | 78.89 | 66.70 | -0.21 |
| Sentence Transformers ("roberta-base-nli-mean-tokens") | Yes | 125M | 768 | 77.19 | 63.22 | -1.91 |
| Transformer-small (DistilRoBERTa-base) | No | 82M | 768 | 72.58 | 74.57 | -6.52 |
| Transformer-base (RoBERTa-base) | No | 125M | 768 | 72.70 | 74.19 | -6.40 |
| DeCLUTR-small (DistilRoBERTa-base) | No | 82M | 768 | 77.50 | 74.71 | -1.60 |
| DeCLUTR-base (RoBERTa-base) | No | 125M | 768 | 79.10 | 74.65 | -- |

Transformer-* denotes the same underlying architecture and pretrained weights as DeCLUTR-* before continued pretraining with our contrastive objective. Both Transformer-* and DeCLUTR-* use mean pooling over their token-level embeddings to produce a fixed-length sentence representation. Downstream scores are computed without considering performance on SNLI (denoted "Downstream (-SNLI)"), as InferSent, USE and Sentence Transformers all train on SNLI. Δ: difference to DeCLUTR-base's downstream score.

Notebooks


The easiest way to get started is to follow along with one of our notebooks:

  • Training your own model
  • Embedding text with a pretrained model
  • Evaluating a model with SentEval


Installation

This repository requires Python 3.6.1 or later.

Setting up a virtual environment

Before installing, you should create and activate a Python virtual environment. See here for detailed instructions.
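
For example, using the venv module from the standard library (conda or virtualenv work just as well):

python3 -m venv .venv
source .venv/bin/activate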

Installing the library and dependencies

If you don't plan on modifying the source code, install from git using pip

pip install git+https://github.com/JohnGiorgi/DeCLUTR.git

Otherwise, clone the repository locally and then install

git clone https://github.com/JohnGiorgi/DeCLUTR.git
cd DeCLUTR
pip install --editable .


  • If you plan on training your own model, you should also install PyTorch with CUDA support by following the instructions for your system here.


Preparing a dataset

A dataset is simply a file containing one item of text (a document, a scientific paper, etc.) per line. For demonstration purposes, we have provided a script that will download the WikiText-103 dataset and apply our minimal preprocessing

python scripts/preprocess_wikitext_103.py path/to/output/wikitext-103/train.txt --min-length 2048

See scripts/preprocess_openwebtext.py for a script that can be used to recreate the (much larger) dataset used in our paper.

You can specify the train set path in the configs under "train_data_path".


  • A training dataset should contain documents with a minimum of num_anchors * max_span_len * 2 whitespace tokens. This is required to sample spans according to our sampling procedure. See the dataset reader and/or our paper for more details on these hyperparameters.
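
As a quick sanity check, you can count whitespace tokens yourself. Below is a minimal sketch, assuming the illustrative values num_anchors = 2 and max_span_len = 512 (which yield the 2048-token minimum used in the WikiText-103 example above; see the dataset reader for the values actually used):

num_anchors = 2     # assumed for illustration
max_span_len = 512  # assumed for illustration
min_tokens = num_anchors * max_span_len * 2  # == 2048

# Flag any documents that are too short to sample spans from.
with open("path/to/your/dataset/train.txt") as f:
    for line_number, document in enumerate(f, start=1):
        num_tokens = len(document.split())
        if num_tokens < min_tokens:
            print(f"Document on line {line_number} is too short ({num_tokens} tokens).")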


Training

To train the model, use the allennlp train command with our declutr.jsonnet config. For example, to train DeCLUTR-small, run the following

# This can be (almost) any model from the Hugging Face model hub that supports masked language modelling.

allennlp train "training_config/declutr.jsonnet" \
    --serialization-dir "output" \
    --overrides "{'train_data_path': 'path/to/your/dataset/train.txt'}" \
    --include-package "declutr"

The --overrides flag allows you to override any field in the config with a JSON-formatted string, but you can equivalently update the config itself if you prefer. During training, models, vocabulary, configuration, and log files will be saved to the directory provided by --serialization-dir. This can be changed to any directory you like.


  • There was a small bug in the original implementation that caused gradients derived from the contrastive loss to be scaled by 1/N, where N is the number of GPUs used during training. This has been fixed. To reproduce results from the paper, set model.scale_fix to False in your config. Note that this will have no effect if you are not using distributed training with more than 1 GPU.
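
For example, using the same --overrides mechanism described above (note the lowercase, jsonnet-style boolean):

--overrides "{'model.scale_fix': false}"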

Exporting a trained model to HuggingFace Transformers

We have provided a simple script to export a trained model so that it can be loaded with Hugging Face Transformers

wget -nc https://raw.githubusercontent.com/JohnGiorgi/DeCLUTR/master/scripts/save_pretrained_hf.py
python save_pretrained_hf.py --archive-file "output" --save-directory "output_transformers"

The model, saved to --save-directory, can then be loaded using the Hugging Face Transformers library (see Embedding for more details)

from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("output_transformers")
model = AutoModel.from_pretrained("output_transformers")

If you would like to upload your model to the Hugging Face model repository, follow the instructions here.

Multi-GPU training

To train on more than one GPU, provide a list of CUDA devices in your call to allennlp train. For example, to train with four CUDA devices with IDs 0, 1, 2, 3

--overrides "{'distributed.cuda_devices': [0, 1, 2, 3]}"

Training with mixed-precision

If your GPU supports it, mixed precision will be used automatically during training and inference.
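
If you need to control this explicitly, recent versions of AllenNLP expose mixed precision via the trainer's use_amp flag; assuming your config uses the default trainer, it can be toggled with the usual --overrides mechanism:

--overrides "{'trainer.use_amp': true}"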


Embedding

You can embed text with a trained model in one of four ways:

  1. Sentence Transformers: load our pretrained models with the SentenceTransformers library (recommended).
  2. Hugging Face Transformers: load our pretrained models with the Hugging Face Transformers library.
  3. From this repo: import and initialize the Encoder object from this repo, which can be used to embed sentences/paragraphs.
  4. Bulk embed: embed all text in a given text file with a simple command-line interface.

The following pretrained models are available:

  • declutr-small
  • declutr-base

Sentence Transformers

Our pretrained models are hosted with Hugging Face Transformers, so they can easily be loaded in SentenceTransformers. Just make sure to install the SentenceTransformers library first. Here is a simple example

from sentence_transformers import SentenceTransformer

# Load the model
model = SentenceTransformer("johngiorgi/declutr-small")

# Prepare some text to embed
texts = [
    "A smiling costumed woman is holding an umbrella.",
    "A happy woman in a fairy costume holds an umbrella.",

# Embed the text
embeddings = model.encode(texts)

These embeddings can then be used, for example, to compute the semantic similarity between some number of sentences or paragraphs

from scipy.spatial.distance import cosine

semantic_sim = 1 - cosine(embeddings[0], embeddings[1])

Hugging Face Transformers

Alternatively, you can use the models straight from Hugging Face Transformers. This just requires a few extra steps. Here is a simple example

import torch
from transformers import AutoModel, AutoTokenizer

# Load the model
tokenizer = AutoTokenizer.from_pretrained("johngiorgi/declutr-small")
model = AutoModel.from_pretrained("johngiorgi/declutr-small")

# Prepare some text to embed
texts = [
    "A smiling costumed woman is holding an umbrella.",
    "A happy woman in a fairy costume holds an umbrella.",
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# Embed the text
with torch.no_grad():
    sequence_output = model(**inputs)[0]

# Mean pool the token-level embeddings to get sentence-level embeddings
embeddings = torch.sum(
    sequence_output * inputs["attention_mask"].unsqueeze(-1), dim=1
) / torch.clamp(torch.sum(inputs["attention_mask"], dim=1, keepdim=True), min=1e-9)
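
As in the SentenceTransformers example above, these embeddings can then be compared directly, e.g. with PyTorch's built-in cosine similarity:

# Semantic similarity between the first two texts.
semantic_sim = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)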

From this repo

To use the model directly from this repo, import Encoder and pass it some text (it accepts both strings and lists of strings)

from declutr import Encoder

# This can be a path on disk to a model you have trained yourself OR
# the name of one of our pretrained models.
pretrained_model_or_path = "declutr-small"

encoder = Encoder(pretrained_model_or_path)
embeddings = encoder([
    "A smiling costumed woman is holding an umbrella.",
    "A happy woman in a fairy costume holds an umbrella."

See the list of available PRETRAINED_MODELS in declutr/encoder.py

python -c "from declutr.encoder import PRETRAINED_MODELS ; print(list(PRETRAINED_MODELS.keys()))"

Bulk embed a file

To embed all text in a given file with a trained model, run the following command

allennlp predict "output" "path/to/input.txt" \
 --output-file "output/embeddings.jsonl" \
 --batch-size 32 \
 --cuda-device 0 \
 --use-dataset-reader \
 --overrides "{'dataset_reader.num_anchors': null}" \
 --include-package "declutr"

This will:

  1. Load the model serialized to "output" with the "best" weights (i.e. the ones that achieved the lowest loss during training).
  2. Use that model to embed the text in the provided input file ("path/to/input.txt").
  3. Save the embeddings to disk as a JSON lines file ("output/embeddings.jsonl").

The text embeddings are stored in the field "embeddings" in "output/embeddings.jsonl".
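
For downstream use, the embeddings can be loaded back in with a few lines of Python. A minimal sketch, assuming (as described above) one JSON object per line with an "embeddings" field:

import json

import numpy as np

# Read one JSON object per line and stack the "embeddings" fields into an array.
with open("output/embeddings.jsonl") as f:
    embeddings = np.array([json.loads(line)["embeddings"] for line in f])

print(embeddings.shape)  # (number of texts, embedding dimension)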

Evaluating with SentEval

SentEval is a library for evaluating the quality of sentence embeddings. We provide a script for evaluating our models against SentEval, along with a notebook that documents the process. Broadly, the steps are the following:

First, clone the SentEval repository and download the transfer task datasets (you only need to do this once)

# Clone our fork which has several bug fixes merged
git clone https://github.com/JohnGiorgi/SentEval.git
cd SentEval/data/downstream/
./get_transfer_data.bash
cd ../../../

See the SentEval repository for full details.

Then you can run our script to evaluate a trained model against SentEval

python scripts/run_senteval.py allennlp "SentEval" "output" \
 --output-filepath "output/senteval_results.json" \
 --cuda-device 0 \
 --include-package "declutr"

The results will be saved to "output/senteval_results.json". This can be changed to any path you like.

Pass the flag --prototyping-config to get a proxy of the results while dramatically reducing computation time.

For a list of commands, run

python scripts/run_senteval.py --help

For help with a specific command, e.g. allennlp, run

python scripts/run_senteval.py allennlp --help

Reproducing results

To reproduce results from the paper, first follow the instructions to set up SentEval in Evaluating with SentEval. Then, run

python scripts/run_senteval.py transformers "SentEval" "johngiorgi/declutr-base" \
	--output-filepath "senteval_results.json" \
	--cuda-device 0

"johngiorgi/declutr-base" can be replaced with (almost) any model on the HuggingFace model hub. Evaluation takes approximately 10-12 hours on a NVIDIA V100 Tesla GPU.


Citing

If you use DeCLUTR in your work, please consider citing our paper

@inproceedings{giorgi-etal-2021-declutr,
    title = "{D}e{CLUTR}: Deep Contrastive Learning for Unsupervised Textual Representations",
    author = "Giorgi, John  and
      Nitski, Osvald  and
      Wang, Bo  and
      Bader, Gary",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.72",
    doi = "10.18653/v1/2021.acl-long.72",
    pages = "879--895",
    abstract = "Sentence embeddings are an important component of many natural language processing (NLP) systems. Like word embeddings, sentence embeddings are typically learned on large text corpora and then transferred to various downstream tasks, such as clustering and retrieval. Unlike word embeddings, the highest performing solutions for learning sentence embeddings require labelled data, limiting their usefulness to languages and domains where labelled data is abundant. In this paper, we present DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations. Inspired by recent advances in deep metric learning (DML), we carefully design a self-supervised objective for learning universal sentence embeddings that does not require labelled training data. When used to extend the pretraining of transformer-based language models, our approach closes the performance gap between unsupervised and supervised pretraining for universal sentence encoders. Importantly, our experiments suggest that the quality of the learned embeddings scale with both the number of trainable parameters and the amount of unlabelled training data. Our code and pretrained models are publicly available and can be easily adapted to new domains or used to embed unseen text.",
}