Adversarial Training for Aspect-Based Sentiment Analysis with BERT

Code for "Adversarial Training for Aspect-Based Sentiment Analysis with BERT".

We build on the codebase of the NAACL 2019 paper "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis" and improve on its results by applying adversarial training.
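The core idea is to perturb the word embeddings in the direction of the loss gradient and train on the perturbed inputs as well. The sketch below illustrates the standard FGM-style perturbation (a fixed-L2-norm step along the gradient); it is a generic illustration of the technique, not this repository's actual training code, and the function name and epsilon value are assumptions.

```python
import numpy as np

def adversarial_perturbation(grad, epsilon=1.0):
    """FGM-style adversarial step: scale the embedding gradient to L2 norm epsilon.

    Generic sketch of the adversarial-training idea (perturb embeddings along
    the gradient direction), not the code used in this repository.
    """
    norm = np.linalg.norm(grad)
    if norm == 0:
        return np.zeros_like(grad)
    return epsilon * grad / norm

# Example: the perturbation always has L2 norm epsilon (for a nonzero gradient).
delta = adversarial_perturbation(np.array([3.0, 4.0]), epsilon=0.5)
```

The perturbed embeddings (embedding + delta) are then fed through the model a second time, and the adversarial loss is added to the clean loss before the optimizer step.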

ABSA Tasks

We focus on two major tasks in Aspect-Based Sentiment Analysis (ABSA).

Aspect Extraction (AE): given a review sentence ("The retina display is great."), find the aspects ("retina display");

Aspect Sentiment Classification (ASC): given an aspect ("retina display") and a review sentence ("The retina display is great."), detect the polarity of that aspect (positive).
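Concretely, AE is usually cast as token-level sequence labeling and ASC as sentence-pair classification. The helper functions below are hypothetical illustrations of the two input/output formats, not the repository's actual data-loading code.

```python
# Illustrative formats for the two ABSA tasks (hypothetical helpers).

def ae_example():
    """Aspect Extraction as BIO tagging: label each token in the sentence,
    marking aspect spans with B (begin) and I (inside)."""
    tokens = ["The", "retina", "display", "is", "great", "."]
    labels = ["O", "B", "I", "O", "O", "O"]  # "retina display" is the aspect
    return list(zip(tokens, labels))

def asc_example():
    """Aspect Sentiment Classification: an (aspect, sentence) pair is mapped
    to a polarity label."""
    aspect, sentence = "retina display", "The retina display is great."
    polarity = "positive"  # one of {positive, negative, neutral}
    return (aspect, sentence), polarity
```

With BERT, the ASC pair is typically encoded as `[CLS] aspect [SEP] sentence [SEP]`, while AE predicts one tag per token of the single input sentence.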

Running

Place the laptop and restaurant post-trained BERT models into pt_model/laptop_pt and pt_model/rest_pt, respectively. The post-trained laptop weights can be downloaded here and the restaurant weights here.

Execute the following command to run the model for the Aspect Extraction task:

bash run_absa.sh ae laptop_pt laptop pt_ae 9 0

Here, laptop_pt is the folder containing the post-trained weights for the laptop domain, laptop is the domain, pt_ae is the output folder for the fine-tuned model in run/, 9 means run the model 9 times, and 0 means use GPU 0.

Similarly,

bash run_absa.sh ae rest_pt rest pt_ae 9 0
bash run_absa.sh asc laptop_pt laptop pt_asc 9 0
bash run_absa.sh asc rest_pt rest pt_asc 9 0

Evaluation

The evaluation wrapper code is in the Jupyter notebook eval/eval.ipynb. The AE evaluation script eval/evaluate_ae.py additionally requires a Java JRE/JDK to be installed.

Open result.ipynb and check the results.
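For reference, AE results are conventionally scored with exact-match F1 over extracted aspect spans. The function below is a generic sketch of that metric under this assumption; it is not eval/evaluate_ae.py itself.

```python
def span_f1(pred_spans, gold_spans):
    """Exact-match precision/recall/F1 over aspect spans, as typically
    reported for Aspect Extraction (generic sketch, not the repo's scorer).

    Spans are hashable identifiers, e.g. (start, end) token offsets.
    """
    pred, gold = set(pred_spans), set(gold_spans)
    tp = len(pred & gold)                       # exact-match true positives
    p = tp / len(pred) if pred else 0.0         # precision
    r = tp / len(gold) if gold else 0.0         # recall
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Example: one correct span and one spurious span.
precision, recall, f1 = span_f1({(1, 3), (5, 6)}, {(1, 3)})
```

ASC is instead scored with accuracy and macro-F1 over the three polarity classes.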

Citation

@misc{karimi2020adversarial,
    title={Adversarial Training for Aspect-Based Sentiment Analysis with BERT},
    author={Akbar Karimi and Leonardo Rossi and Andrea Prati and Katharina Full},
    year={2020},
    eprint={2001.11316},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
