GAME — a global-local graph convolutional network for multi-relation extraction that jointly models sentence-level global structure and entity-level local context.
Harry Cheng1, Lizi Liao2, Linmei Hu3, Liqiang Nie1*
1 Shandong University, 2 Singapore Management University, 3 Beijing University of Posts and Telecommunications. * Corresponding author
- Paper: IEEE Xplore
- Dataset (NYT train): Google Drive
- Dataset (NYT test): Google Drive
- Updates
- Introduction
- Highlights
- Method / Framework
- Project Structure
- Installation
- Dataset / Benchmark
- Usage
- TODO
- Citation
- Acknowledgement
- License
- [04/2026] Transfer repos to iLearn-Lab
- [2022] Paper published in IEEE Transactions on Big Data, Vol. 8, No. 6, pp. 1716–1728
This repository is the official implementation of Multi-Relation Extraction via A Global-Local Graph Convolutional Network, published in IEEE Transactions on Big Data.
Relation extraction identifies semantic relations between entity pairs in text. Many sentences contain multiple overlapping entity pairs and relations simultaneously — a challenge that sequential models handle poorly.
GAME (Global-local grAph convolutional network for Multi-relation Extraction) addresses this with two GCN layers of different structures:
- A global layer capturing sentence-level syntactic dependencies
- A local layer capturing entity-centric relational context
These complementary representations are combined for joint multi-relation classification.
- Jointly models global (sentence-level) and local (entity-level) graph structure
- Handles overlapping triplets within the same sentence
- Evaluated on NYT (public) and TACRED benchmarks
- Graph layers implemented with both vanilla GCN and graph attention network (GAT) variants for ablation
GAME constructs a dependency-based graph over sentence tokens and augments it with entity-centric local edges. Two GCN layers with different structures encode complementary global and local representations. The resulting entity embeddings are fed into a relation classifier for multi-relation prediction.
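The global-local propagation described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the toy graphs, weight matrices, and the fusion-by-concatenation step are all assumptions for exposition.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: symmetrically normalized propagation ReLU(A_hat @ H @ W)."""
    A_tilde = A + np.eye(A.shape[0])            # add self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)             # D^{-1/2}
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # normalized adjacency
    return np.maximum(A_hat @ H @ W, 0.0)       # ReLU activation

# Toy sentence of 5 tokens with 4-dim embeddings
n, d_in, d_out = 5, 4, 3
rng = np.random.default_rng(0)
H = rng.normal(size=(n, d_in))

# Global graph: sentence-level syntactic dependency arcs (toy edges)
A_global = np.zeros((n, n))
for i, j in [(2, 0), (2, 1), (2, 4), (4, 3)]:
    A_global[i, j] = A_global[j, i] = 1.0

# Local graph: entity-centric edges around the entity spans (toy edges)
A_local = np.zeros((n, n))
for i, j in [(0, 1), (3, 4), (1, 3)]:
    A_local[i, j] = A_local[j, i] = 1.0

Wg = rng.normal(size=(d_in, d_out))
Wl = rng.normal(size=(d_in, d_out))

H_global = gcn_layer(A_global, H, Wg)   # sentence-level view
H_local = gcn_layer(A_local, H, Wl)     # entity-level view

# Fuse the complementary views (here: concatenation) before relation classification
H_fused = np.concatenate([H_global, H_local], axis=-1)
print(H_fused.shape)  # (5, 6)
```

The key point is that the two layers share the same token embeddings but propagate over different graph structures, so each token ends up with both a sentence-level and an entity-level representation.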
```
.
├── GCN_RE/                 # Core GCN-based relation extraction module
│   └── utils/              # Preprocessing and data utilities
├── Spacy_preprocess.py     # Spacy-based syntactic preprocessing
├── eval.py                 # Evaluation script
├── test_dataset.py         # Dataset testing utility
├── train.py                # Training script
└── README.md
```

Data should be placed in:

```
data/
└── nyt/
    ├── train.json
    └── test.json
```
```shell
git clone https://github.com/iLearn-Lab/IEEETBD22-GAME.git
cd IEEETBD22-GAME

python -m venv .venv
source .venv/bin/activate    # Linux / macOS
# .venv\Scripts\activate     # Windows

pip install -U pip setuptools wheel
pip install -U spacy
python -m spacy download en_core_web_md
```

For GPU-accelerated Spacy, see the official installation guide.
- NYT train.json: Google Drive
- NYT test.json: Google Drive
Place data files in ./data/nyt/ and update paths in train.py and eval.py.
TACRED requires a paid license from LDC and is not redistributed here.
```json
{
  "sentence": ["token1", "token2", "..."],
  "subj_start": 27,
  "subj_end": 28,
  "obj_start": 10,
  "obj_end": 11,
  "relation": "/people/person/children",
  "edges": [[2, 0], [2, 1], "..."]
}
```

Sentences with multiple relations appear multiple times (one entry per relation). To customize preprocessing, modify the files in GCN_RE/utils/.
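Given this schema, an entry's `edges` field can be turned into the token-level adjacency matrix that a GCN layer consumes. The helper below is a hypothetical illustration, not part of the repository:

```python
import numpy as np

def build_adjacency(example):
    """Build a symmetric token adjacency matrix from an entry's 'edges' field."""
    n = len(example["sentence"])
    A = np.zeros((n, n))
    for i, j in example["edges"]:
        A[i, j] = A[j, i] = 1.0   # treat dependency arcs as undirected
    return A

# Toy entry following the schema above
example = {
    "sentence": ["John", "met", "Mary"],
    "subj_start": 0, "subj_end": 0,
    "obj_start": 2, "obj_end": 2,
    "relation": "/people/person/children",
    "edges": [[1, 0], [1, 2]],    # arcs headed by "met"
}

A = build_adjacency(example)
print(A.shape)  # (3, 3)
```

Because a sentence with k relations appears k times in the data, the adjacency matrix is identical across those entries; only the entity spans and relation label differ.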
```shell
# train
python train.py

# evaluate
python eval.py
```

Update the data and model paths in the respective scripts before running.
If you find our work useful, please cite:
```bibtex
@article{cheng2022game,
  title   = {Multi-Relation Extraction via A Global-Local Graph Convolutional Network},
  author  = {Cheng, Harry and Liao, Lizi and Hu, Linmei and Nie, Liqiang},
  journal = {IEEE Transactions on Big Data},
  volume  = {8},
  number  = {6},
  pages   = {1716--1728},
  year    = {2022},
}
```

- Thanks to the creators of the NYT and TACRED datasets.
- Thanks to the Spacy team for their excellent NLP library.
This project is released under the Apache License 2.0.