
2019-ACL-Context-aware Embedding for Targeted Aspect-based Sentiment Analysis #20

Open
farinamhz opened this issue Jan 11, 2023 · 2 comments
Labels: literature-review (Summary of the paper related to the work)

@farinamhz
Member

Context-aware Embedding for Targeted Aspect-based Sentiment Analysis

This issue is for the summary of the paper above.

@farinamhz farinamhz added the literature-review Summary of the paper related to the work label Jan 11, 2023
@farinamhz farinamhz self-assigned this Jan 11, 2023
@farinamhz
Member Author

Main problem

This paper addresses the automated analysis of customer reviews: understanding reviewers' attitudes toward different aspects of a product, such as "price," "service," or "safety." The authors propose a novel embedding refinement method to obtain context-aware embeddings for targeted aspect-based sentiment analysis (TABSA). The motivation is the lack of context awareness in previous works, which assign a word the same embedding even when its context changes.

Existing work

Attention-based neural networks have made remarkable progress on the TABSA task, but the authors note that existing approaches usually use context-independent or randomly initialized vectors to represent targets and aspects. As a result, semantic information is lost, and the interdependence among specific targets, their corresponding aspects, and the context is not considered.

Inputs

  • A collection of sentences

Outputs

  • Aspects and Sentiments

Example

The goal of TABSA is, given an input sentence, to extract the sentiment toward each aspect of each target.
For example, "location1 is your best bet for secure although expensive and location2 is too far."
Target: location1, Aspect: SAFETY, Sentiment: Positive
Target: location1, Aspect: PRICE, Sentiment: Negative
Target: location2, Aspect: TRANSIT, Sentiment: Negative
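The example above can be written as (target, aspect, sentiment) triples. This is a minimal illustrative sketch of the task's input/output, not the paper's actual data format:

```python
# Illustrative TABSA input/output for the example sentence above.
sentence = ("location1 is your best bet for secure although expensive "
            "and location2 is too far.")

# Each prediction is a (target, aspect, sentiment) triple, as in the example.
predictions = [
    ("location1", "SAFETY", "Positive"),
    ("location1", "PRICE", "Negative"),
    ("location2", "TRANSIT", "Negative"),
]

# Group sentiments by target for inspection.
by_target = {}
for target, aspect, sentiment in predictions:
    by_target.setdefault(target, {})[aspect] = sentiment

print(by_target["location1"]["SAFETY"])  # Positive
```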

Proposed Method

They present a novel embedding refinement approach to obtain context-aware embeddings for the TABSA task rather than context-independent or randomly initialized embeddings:

  • A sparse coefficient vector is leveraged to select highly correlated words from the sentence.
  • The target and aspect representations are adjusted so that these highly correlated words carry more weight.
  • The aspect embedding is fine-tuned to be closer to the highly correlated target and further away from the irrelevant targets.

The model framework has the following steps, which are provided as a schema in the figure after the steps:

  1. The sentence embedding matrix X is fed into a fully connected layer followed by a step function to produce the sparse coefficient vector u'.
  2. The sparse vector u' is then used to refine the target and aspect embeddings.
  3. The squared Euclidean distance between the refined embeddings is computed, and the model is trained to minimize this distance to obtain the final refined target and aspect embeddings.

[Figure: schema of the model framework]
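The three steps above can be loosely sketched in NumPy. This is a toy forward pass under stated assumptions, not the paper's exact parameterization: the weights W and b, the zero threshold in the step function, and the shift-toward-selected-words refinement rule are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a sentence of n words with d-dimensional embeddings.
n, d = 8, 50
X = rng.normal(size=(n, d))   # sentence embedding matrix
t = rng.normal(size=(d,))     # initial target embedding
a = rng.normal(size=(d,))     # initial aspect embedding

# Step 1: fully connected layer + step function -> sparse coefficient vector u'.
# W, b, and the 0 threshold are illustrative choices.
W, b = rng.normal(size=(d,)), rng.normal()
u = (X @ W + b > 0).astype(float)   # u[i] = 1 selects word i as highly correlated

# Step 2: refine target and aspect embeddings using the selected words
# (here: shift each embedding toward the mean of the selected word vectors).
selected_mean = u @ X / max(u.sum(), 1.0)
t_refined = t + selected_mean
a_refined = a + selected_mean

# Step 3: squared Euclidean distance between the refined embeddings;
# training would minimize this to pull the aspect toward the correlated target.
loss = float(np.sum((a_refined - t_refined) ** 2))
```

In the actual model the refinement and the distance loss are trained jointly; this sketch only shows the shapes and the flow of one pass.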

Experimental Setup

Dataset

  • SentiHood: annotated sentences containing one or two location target mentions.
  • SemEval-2015 Task 12: sentences containing no targets or only NULL targets are removed.

They use GloVe to initialize the word embeddings in the experiments.

Evaluation and Metrics

They use the metrics below:

  • Macro average F1, Strict accuracy (Acc.), and AUC for aspect detection
  • Acc. and AUC for sentiment classification
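As a reference for the first metric, macro-averaged F1 can be computed per aspect class and then averaged. This is a generic pure-Python sketch with made-up labels, not the paper's evaluation script:

```python
from collections import defaultdict

# Made-up gold and predicted aspect labels for illustration.
gold = ["SAFETY", "PRICE", "TRANSIT", "PRICE"]
pred = ["SAFETY", "PRICE", "PRICE", "PRICE"]

# Per-class true positives, false positives, false negatives.
counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0})
for g, p in zip(gold, pred):
    if g == p:
        counts[g]["tp"] += 1
    else:
        counts[p]["fp"] += 1
        counts[g]["fn"] += 1

# F1 per class, then unweighted (macro) average over classes.
f1s = []
for c in counts.values():
    prec = c["tp"] / (c["tp"] + c["fp"]) if c["tp"] + c["fp"] else 0.0
    rec = c["tp"] / (c["tp"] + c["fn"]) if c["tp"] + c["fn"] else 0.0
    f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)

macro_f1 = sum(f1s) / len(f1s)
print(round(macro_f1, 2))  # 0.6
```

Strict accuracy on SentiHood additionally requires all aspects of a sentence to be predicted correctly for the sentence to count as correct.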

Baselines

  • LSTM-Final: bidirectional LSTM that only uses the final hidden states.
  • LSTM-Loc: bidirectional LSTM that uses the hidden states at the location target's position.
  • SenticLSTM: bidirectional LSTM that incorporates external knowledge.
  • Delayed-Memory: delayed memory mechanism.

Results

The experimental results show that incorporating context-aware embeddings of targets and aspects into the neural models improves:

  • Aspect detection by 2.9% in strict accuracy
  • Sentiment classification by 1.8% in strict accuracy

Code

https://github.com/BinLiang-NLP/CAER-TABSA (Official)

Presentation

No presentation was provided.

Criticism

The method does not consider latent or implicit aspects.

@hosseinfani
Member

@farinamhz nice summary. thanks.
