Code accompanying the ACL 2019 paper "Quantifying the Similarity between Relations with Fact Distributions".

Relation Similarity

Using fact distribution to quantify the similarity between relations in knowledge bases and real world.
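As a rough illustration of the idea (not the repository's actual code), a similarity score between two relations can be derived from the KL divergences between their fact distributions in both directions. The helpers below are a minimal sketch over small discrete distributions, assuming a similarity of the form exp of the negated averaged KL:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) for discrete distributions given as lists of probabilities.
    # eps avoids log(0); both inputs are renormalized after smoothing.
    p = [x + eps for x in p]
    q = [x + eps for x in q]
    zp, zq = sum(p), sum(q)
    return sum((x / zp) * math.log((x / zp) / (y / zq)) for x, y in zip(p, q))

def relation_similarity(p, q):
    # Hypothetical score: exp(-(KL(p||q) + KL(q||p)) / 2), so identical
    # distributions score 1.0 and the score decays toward 0 as they diverge.
    return math.exp(-0.5 * (kl_divergence(p, q) + kl_divergence(q, p)))
```

Identical distributions score 1.0; the further apart the fact distributions, the lower the score.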

If you use the code, please cite the following paper:

  @inproceedings{chen-etal-2019-quantifying,
    title={Quantifying Similarity between Relations with Fact Distribution},
    author={Chen, Weize and Zhu, Hao and Han, Xu and Liu, Zhiyuan and Sun, Maosong},
    booktitle={Proceedings of the 57th Conference of the Association for Computational Linguistics},
    year={2019}
  }


Requirements

  • Python 3 (tested on 3.6.7)
  • PyTorch (tested on 1.0.0)
  • Numpy (tested on 1.16.0)
  • Tqdm (tested on 4.30.0)
  • TensorboardX (tested on 1.6)


Install the dependencies with:

pip install -r requirements.txt


Training

You can use the following command to train a model of the fact distribution.

python --input $Directory_to_your_own_dataset --output ./checkpoint -ent_pretrain -rel_pretrain 

If you want to train the model on the provided Wikipedia dataset, you have to decompress the pretrained entity embedding file first.

Note that if you pass "-ent_pretrain" and "-rel_pretrain", make sure the pretrained embedding files "entity2vec.vec" and "relation2vec.vec" are in your input directory. In our paper, these two files are produced by running TransE on the dataset; we use the TransE implementation in OpenKE.
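If you need to inspect those embedding files, a minimal loader might look like the sketch below. It assumes one whitespace-separated vector of floats per line, which is the layout OpenKE's TransE typically writes; check your own files before relying on it:

```python
def load_embeddings(path):
    # Parse a pretrained embedding file such as "entity2vec.vec",
    # assuming one whitespace-separated vector of floats per line
    # (assumed layout, not verified against every OpenKE version).
    vectors = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if fields:  # skip blank lines
                vectors.append([float(x) for x in fields])
    return vectors
```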

Yield relation similarity

After training the model, you can use the following command to compute the similarity between relations.

python --model_path ./checkpoint --input ./data/wikipedia/ --output ./result

Two files will be produced, "kl_prob.txt" and "kl_prob.json"; they contain the same content in different formats. The i-th line holds the KL divergences between the i-th relation and all other relations.
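As a sketch of how that output might be consumed (assuming each line of "kl_prob.txt" is a whitespace-separated row of KL divergences), the hypothetical helpers below rank the relations closest to a given one:

```python
def parse_kl_file(path):
    # Read "kl_prob.txt", assuming one whitespace-separated row of
    # KL divergences per relation (assumed layout, not verified).
    with open(path) as f:
        return [[float(x) for x in line.split()] for line in f if line.strip()]

def nearest_relations(rows, i, k=3):
    # Indices of the k relations with the smallest divergence from
    # relation i, excluding i itself (smaller KL = more similar).
    order = sorted(range(len(rows[i])), key=lambda j: rows[i][j])
    return [j for j in order if j != i][:k]
```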


All the default hyper-parameters are the ones used in the paper.

Relation prediction and relation extraction

We perform relation prediction using the TransE implementation in OpenKE. You can run "" to reproduce the result.

As for relation extraction, the code is in the "tacred" directory. Due to copyright issues, we do not publish the dataset.
