Gary-code/ADWE-CNN
CLE4ATE

Code for the paper [Context-Aware Dynamic Word Embeddings for Aspect Term Extraction], submitted to IEEE Transactions on Affective Computing and Affective Language Resources.

Data

Datasets: [Laptop] [Restaurant 16]

Requirements

  • pytorch=1.3.1
  • python=3.7.5
  • transformers=2.3.0
  • dgl=0.5
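A quick stdlib-only sanity check of the pins above can be useful before running anything. Note two assumptions in this sketch: the pip name for pytorch is `torch`, and `importlib.metadata` requires Python 3.8+, so run it in whichever interpreter you actually use.

```python
# Sketch: compare installed packages against the pins listed above.
# Assumption: pytorch is installed under the pip name "torch".
# Requires Python 3.8+ for importlib.metadata.
from importlib.metadata import version, PackageNotFoundError

PINS = {"torch": "1.3.1", "transformers": "2.3.0", "dgl": "0.5"}

def check_pins(pins):
    """Return a list of human-readable problems; empty means all pins match."""
    problems = []
    for name, want in pins.items():
        try:
            got = version(name)
        except PackageNotFoundError:
            problems.append(f"{name}: not installed")
            continue
        if not got.startswith(want):
            problems.append(f"{name}: installed {got}, pinned {want}")
    return problems

if __name__ == "__main__":
    for problem in check_pins(PINS):
        print(problem)
```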

Steps to Run Code

  • Step 1:

Download the official datasets and evaluation scripts. We assume the following file names. SemEval 2014 Laptop (http://alt.qcri.org/semeval2014/task4/):

semeval/Laptops_Test_Data_PhaseA.xml
semeval/Laptops_Test_Gold.xml
semeval/eval.jar

SemEval 2016 Restaurant (http://alt.qcri.org/semeval2016/task5/):

semeval/EN_REST_SB1_TEST.xml.A
semeval/EN_REST_SB1_TEST.xml.gold
semeval/A.jar

Pre-trained embeddings: [data]
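Before moving on to training, it can help to confirm that the Step 1 layout is in place. A minimal sketch (file names taken from this README; the `semeval/` prefix is assumed to be relative to the repository root):

```python
# Sketch: verify the Step 1 files exist before training.
from pathlib import Path

EXPECTED = [
    "semeval/Laptops_Test_Data_PhaseA.xml",
    "semeval/Laptops_Test_Gold.xml",
    "semeval/eval.jar",
    "semeval/EN_REST_SB1_TEST.xml.A",
    "semeval/EN_REST_SB1_TEST.xml.gold",
    "semeval/A.jar",
]

def missing(paths, root="."):
    """Return the expected paths that are not present under root."""
    return [p for p in paths if not (Path(root) / p).exists()]

if __name__ == "__main__":
    print(missing(EXPECTED) or "all Step 1 files found")
```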

  • Step 2:

Train:

python train_laptop.py
python train_res.py

  • Step 3:

Evaluate:

python evaluation_laptop.py [checkpoints]

python evaluation_res.py [checkpoints]
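The train-then-evaluate flow in Steps 2 and 3 can be driven from a single script. A sketch, assuming only the script names and the single checkpoints-directory argument shown above:

```python
# Sketch: run training then evaluation for one domain via subprocess.
# Script names ("laptop" / "res") and the checkpoints argument follow
# the commands in Steps 2 and 3 above.
import subprocess
import sys

def command(script, *args):
    """Build the argv list for invoking one of the repository's scripts."""
    return [sys.executable, script, *args]

def train_and_evaluate(domain="laptop", checkpoints="checkpoints"):
    subprocess.run(command(f"train_{domain}.py"), check=True)
    subprocess.run(command(f"evaluation_{domain}.py", checkpoints), check=True)
```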

Baselines

Note that to reproduce the results of the following baselines, we used Anaconda to create a separate environment for each paper, following the README of the corresponding codebase.

  1. DE-CNN [paper] [code] [checkpoints]
  2. Seq4Seq [paper] [code] [checkpoints]
  3. MT-TSMSA [paper] [code] [checkpoints]
  4. CL-BERT [paper] [code] [checkpoints]

Besides, we also modified the CL-BERT model by adding a domain embedding to each word's representation. The code is in [CL-BERT-new].

Step 1: Download the datasets and pre-trained model weights from [code], and place the weight files as:

bert-pt/bert-laptop/
bert-pt/bert-rest/

Step 2: Train and evaluate:

python main.py
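Conceptually, the CL-BERT-new change appends a domain-specific embedding to each word's contextual vector. A NumPy sketch of that idea; all shapes, table sizes, and names here are illustrative and not taken from the actual code:

```python
# Sketch of concatenating a domain embedding onto word representations.
# Shapes are illustrative: 768 mimics a BERT hidden size, 100 a small
# domain-embedding size; neither is taken from the CL-BERT-new code.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HIDDEN, DOMAIN_DIM = 1000, 768, 100
contextual = rng.normal(size=(VOCAB, HIDDEN))         # stand-in for BERT outputs
domain_table = rng.normal(size=(VOCAB, DOMAIN_DIM))   # e.g. a laptop-domain table

def with_domain(token_ids):
    """Concatenate the domain vector onto each token's representation."""
    return np.concatenate(
        [contextual[token_ids], domain_table[token_ids]], axis=-1
    )

print(with_domain(np.array([5, 17, 42])).shape)  # (3, 868)
```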
