A Large-Scale Few-Shot Relation Extraction Dataset


FewRel Dataset, Toolkits and Baseline Models

FewRel is a large-scale few-shot relation extraction dataset, which contains 70,000 natural language sentences expressing 100 different relations. This dataset is presented in our EMNLP 2018 paper FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation.
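In the few-shot setting, a model is evaluated on N-way K-shot episodes: each episode samples N relations, K labeled support sentences per relation, and query sentences to classify. A minimal sketch of such episode sampling (the function and data layout below are illustrative assumptions, not the repository's actual API):

```python
import random

def sample_episode(data, n_way=5, k_shot=1):
    """Sample an N-way K-shot episode from a dict mapping
    relation name -> list of example sentences."""
    relations = random.sample(sorted(data), n_way)
    support, queries = {}, []
    for rel in relations:
        # Draw K support examples plus one query example per relation
        examples = random.sample(data[rel], k_shot + 1)
        support[rel] = examples[:k_shot]
        queries.append((examples[k_shot], rel))
    return support, queries

# Toy data standing in for FewRel's 100 relations x 700 sentences each
toy = {f"relation_{i}": [f"sent_{i}_{j}" for j in range(10)] for i in range(8)}
support, queries = sample_episode(toy, n_way=5, k_shot=1)
print(len(support), len(queries))  # 5 support classes, 5 queries
```

The model then classifies each query sentence into one of the N sampled relations using only the K support sentences per relation.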

More info at http://zhuhao.me/fewrel.html.


If you use our data, toolkits or baseline models, please kindly cite our paper:

    @inproceedings{han2018fewrel,
        title={FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation},
        author={Han, Xu and Zhu, Hao and Yu, Pengfei and Wang, Ziyun and Yao, Yuan and Liu, Zhiyuan and Sun, Maosong},
        booktitle={Proceedings of EMNLP},
        year={2018}
    }

If you have questions about any part of the paper, submission, leaderboard, code, or data, please e-mail zhuhao15@mails.tsinghua.edu.cn.


Hao Zhu first proposed this problem and the approach for building the dataset and the baseline system; Ziyun Wang built and maintained the crowdsourcing website; Yuan Yao helped download the original data and preprocessed it; Xu Han, Hao Zhu, Pengfei Yu and Ziyun Wang implemented the baselines and wrote the paper together; Zhiyuan Liu provided thoughtful advice and funding throughout the whole project. The order of the first four authors was determined by dice rolling.

Dataset and Word Embedding

The dataset is already included in this GitHub repo. However, due to their large size, the GloVe files (pre-trained word embeddings) are not included. Please download glove.6B.50d.json from Tsinghua Cloud or Google Drive and put it under the data/ folder.
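Once downloaded, the embedding file can be loaded with the standard json module. The snippet below writes and reads a tiny stand-in file; the word-to-vector layout is an assumption for illustration and may differ from the actual glove.6B.50d.json format:

```python
import json
import os
import tempfile

# Tiny stand-in for glove.6B.50d.json; the real file holds a
# 50-dimensional vector per vocabulary word (layout assumed here).
toy_embeddings = {
    "relation": [0.1] * 50,
    "extraction": [0.2] * 50,
}

path = os.path.join(tempfile.mkdtemp(), "glove.6B.50d.json")
with open(path, "w") as f:
    json.dump(toy_embeddings, f)

# Load the embeddings back as a word -> vector lookup table
with open(path) as f:
    word_vec = json.load(f)

print(len(word_vec["relation"]))  # prints 50 (embedding dimension)
```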


To run our baseline models, use the command

python train_demo.py {MODEL_NAME}

replacing {MODEL_NAME} with proto, metanet, gnn or snail.