
Unified Named Entity Recognition as Word-Word Relation Classification

Source code for AAAI 2022 paper: Unified Named Entity Recognition as Word-Word Relation Classification

So far, named entity recognition (NER) has involved three major types, including flat, overlapped (aka. nested), and discontinuous NER, which have mostly been studied individually. Recently, growing interest has emerged in unified NER, tackling the above three tasks concurrently with one single model. Current best-performing methods mainly include span-based and sequence-to-sequence models, where unfortunately the former merely focus on boundary identification and the latter may suffer from exposure bias. In this work, we present a novel alternative by modeling unified NER as word-word relation classification, namely W2NER. The architecture resolves the kernel bottleneck of unified NER by effectively modeling the neighboring relations between entity words with Next-Neighboring-Word (NNW) and Tail-Head-Word-* (THW-*) relations. Based on the W2NER scheme, we develop a neural framework in which unified NER is modeled as a 2D grid of word pairs. We then propose multi-granularity 2D convolutions for better refining the grid representations. Finally, a co-predictor is used to sufficiently reason over the word-word relations. We perform extensive experiments on 14 widely-used benchmark datasets for flat, overlapped, and discontinuous NER (8 English and 6 Chinese datasets), where our model beats all the current top-performing baselines, pushing the state of the art of unified NER.

Label Scheme
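
As a rough, unofficial sketch of the scheme described above (the example sentence, word indices, and the entity type "symptom" are purely illustrative, not from the released code), entities map onto the word-pair grid with NNW and THW-* labels roughly as follows:

```python
# Minimal sketch: NNW marks that word j follows word i inside an entity;
# THW-* marks the cell (tail word, head word) and carries the entity type.
def build_grid(num_words, entities):
    """entities: list of (word_indices, entity_type); returns a num_words x num_words label grid."""
    grid = [["NONE"] * num_words for _ in range(num_words)]
    for indices, etype in entities:
        for a, b in zip(indices, indices[1:]):          # consecutive entity words -> NNW
            grid[a][b] = "NNW"
        grid[indices[-1]][indices[0]] = f"THW-{etype}"  # tail word -> head word -> THW-*
    return grid

words = "I am having aching in legs and shoulders".split()
entities = [([3, 4, 5], "symptom"),   # "aching in legs"
            ([3, 4, 7], "symptom")]   # "aching in shoulders" (discontinuous)
grid = build_grid(len(words), entities)
# e.g. grid[3][4] == "NNW", grid[4][7] == "NNW",
#      grid[5][3] == "THW-symptom", grid[7][3] == "THW-symptom"
```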

Architecture

1. Environments

- python (3.8.12)
- cuda (11.4)

2. Dependencies

- numpy (1.21.4)
- torch (1.10.0)
- gensim (4.1.2)
- transformers (4.13.0)
- pandas (1.3.4)
- scikit-learn (1.0.1)
- prettytable (2.4.0)
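
If convenient, the pinned versions above can be installed in one step (an illustrative command; for a CUDA 11.4 setup you may need the PyTorch wheel index matching your CUDA version):

>> pip install numpy==1.21.4 torch==1.10.0 gensim==4.1.2 transformers==4.13.0 pandas==1.3.4 scikit-learn==1.0.1 prettytable==2.4.0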

3. Dataset

We provide some processed datasets at this link.

4. Preparation

- Download the dataset
- Process it into the same format as the example in data/ (a hedged sketch of the assumed format follows this list)
- Put the processed data into the directory data/
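
The processed files are plain JSON; below is a minimal sketch of the format we assume from the example shipped in data/ (the sentence, entity type, and output path are illustrative; treat the file in data/ as the authoritative reference):

```python
# Hedged sketch (not the authors' preprocessing script) of writing one record:
# each record holds the tokenized sentence and its entities, where "index" lists
# the word positions of an entity (possibly discontinuous) and "type" is its label.
import json

records = [
    {
        "sentence": ["I", "am", "having", "aching", "in", "legs", "and", "shoulders"],
        "ner": [
            {"index": [3, 4, 5], "type": "symptom"},   # "aching in legs"
            {"index": [3, 4, 7], "type": "symptom"},   # "aching in shoulders" (discontinuous)
        ],
    }
]

# Hypothetical output path; mirror whatever layout the example in data/ uses.
with open("data/my_dataset/train.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)
```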

5. Training

>> python main.py --config ./config/example.json
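
config/example.json defines the run configuration; to train on your own data, one option is to copy it, adjust the fields it defines, and pass the new file via --config. A tiny, hypothetical helper (not part of the repo) for doing that programmatically:

```python
# Load the shipped example config, tweak it, and save it under a new name.
import copy
import json

with open("config/example.json", encoding="utf-8") as f:
    base = json.load(f)

cfg = copy.deepcopy(base)
# Adjust whatever fields config/example.json actually defines (dataset name,
# batch size, learning rates, ...); the exact keys depend on the shipped file.

cfg_path = "config/my_dataset.json"   # hypothetical file name
with open(cfg_path, "w", encoding="utf-8") as f:
    json.dump(cfg, f, indent=2)

# then: python main.py --config ./config/my_dataset.json
```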

6. License

This project is licensed under the MIT License - see the LICENSE file for details.

7. Citation

If you use this work or code, please cite the following paper:

@inproceedings{li2022unified,
  title={Unified named entity recognition as word-word relation classification},
  author={Li, Jingye and Fei, Hao and Liu, Jiang and Wu, Shengqiong and Zhang, Meishan and Teng, Chong and Ji, Donghong and Li, Fei},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={36},
  number={10},
  pages={10965--10973},
  year={2022}
}
