Commit

docs(implementation): refine methodology detail
entelecheia committed Sep 10, 2023
1 parent 79de893 commit 685f3e3
Showing 2 changed files with 117 additions and 40 deletions.
78 changes: 38 additions & 40 deletions book/paper/implementation/index.md
# Implementation

Aspect-Based Sentiment Analysis (ABSA) has emerged as a critical tool for understanding public sentiment at a granular level. Given its applicability in domains such as finance, healthcare, and reputation management, the need for robust ABSA frameworks is evident. This section delineates the steps for constructing an ABSA pipeline that leverages the generative pre-trained model GPT-4, providing a detailed, step-by-step methodology for analyzing sentiment toward distinct aspects of companies and their owners.

## Phase 1: Data Preparation

### Data Collection

For a robust ABSA model, the quality of the data is paramount. Web scraping algorithms target the Daum news website to harvest articles specifically discussing companies and their owners. The chosen website provides a rich corpus covering relevant aspects such as "management," "financial health," and "corporate social responsibility," among others.
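
The crawler itself is not specified in the text; the following is a minimal collection sketch under assumed page markup. The listing URL, CSS selectors, and the `collect_articles` helper are hypothetical and would need to be adapted to the actual Daum news pages.

```python
# Hypothetical collection sketch; the selectors and URL structure are assumptions,
# not the real Daum markup.
import requests
from bs4 import BeautifulSoup

def collect_articles(listing_url: str) -> list[dict]:
    """Fetch a news listing page and return the linked articles' titles and bodies."""
    listing = requests.get(listing_url, headers={"User-Agent": "absa-crawler"})
    listing.raise_for_status()
    soup = BeautifulSoup(listing.text, "html.parser")

    articles = []
    for link in soup.select("a.article-link"):  # assumed selector for article links
        page = requests.get(link["href"])
        page.raise_for_status()
        body = BeautifulSoup(page.text, "html.parser")
        articles.append({
            "url": link["href"],
            "title": body.select_one("h3.title").get_text(strip=True),            # assumed
            "text": body.select_one("div.article-body").get_text(" ", strip=True),  # assumed
        })
    return articles
```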

### Data Annotation

After data acquisition, a team of domain experts annotates the text corpus, assisted by a prompt-guided labeling system. Aspects such as "management" and "financials" are labeled according to pre-defined prompts. This guided annotation produces a structured, standardized dataset, reducing labeling errors and simplifying downstream modeling.
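
The prompts themselves are not reproduced here; the snippet below sketches a hypothetical prompt template and the record it yields, purely to make the intended annotation structure concrete. The aspect inventory and field names are assumptions drawn from the aspects and the sample output mentioned elsewhere in this section.

```python
# Hypothetical prompt template and annotation record; labels and field names
# are assumptions, not the project's actual guideline.
ASPECT_LABELS = ["management", "financials", "workplace", "product & service", "social"]

PROMPT_TEMPLATE = (
    "Read the sentence below and label each aspect it discusses "
    f"(choose from: {', '.join(ASPECT_LABELS)}), the terms that evoke the aspect, "
    "the opinion terms, and the sentiment expressed toward it.\n"
    "Sentence: {sentence}"  # filled in later via PROMPT_TEMPLATE.format(sentence=...)
)

example_record = {
    "sentence": "...",           # original article sentence
    "aspect": "management",      # one of ASPECT_LABELS
    "aspect_terms": ["..."],     # spans that evoke the aspect
    "opinion_terms": ["..."],    # spans that carry the opinion
    "sentiment": "negative",     # polarity toward the aspect
}
```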

### Data Cleaning

Text normalization techniques convert the corpus to a standard form, including lowercasing, stripping special characters, and normalizing whitespace. Data points that do not align with the study objectives are removed to keep the dataset focused.
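
A minimal normalization helper covering the steps named above; the character whitelist is an assumption, Hangul is preserved, and lowercasing affects only embedded Latin text.

```python
import re

def normalize_text(text: str) -> str:
    """Lowercase, strip special characters, and collapse whitespace."""
    text = text.lower()
    text = re.sub(r"[^0-9a-z가-힣\s.,!?%]", " ", text)  # assumed character whitelist
    return re.sub(r"\s+", " ", text).strip()
```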

### Train-Test Split

Stratified sampling ensures that each of the training, validation, and test subsets is a representative microcosm of the overall dataset. Stratifying on the aspect and sentiment labels balances their distribution across the splits, enabling a fair evaluation of model performance.
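
A sketch of the split using scikit-learn, assuming each annotated record carries a single aspect and sentiment to stratify on; the 80/10/10 ratio is illustrative, not prescribed by the text.

```python
# Stratified 80/10/10 split; field names and ratios are assumptions.
from sklearn.model_selection import train_test_split

def stratified_split(records: list[dict], seed: int = 42):
    strata = [f"{r['aspect']}|{r['sentiment']}" for r in records]
    train_val, test = train_test_split(
        records, test_size=0.10, stratify=strata, random_state=seed
    )
    strata_tv = [f"{r['aspect']}|{r['sentiment']}" for r in train_val]
    train, val = train_test_split(
        train_val, test_size=0.10 / 0.90, stratify=strata_tv, random_state=seed
    )
    return train, val, test
```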

## Phase 2: Aspect Extraction

### Model Choice and Fine-Tuning

GPT-4 is selected for its proven capabilities in complex NLP tasks. The fine-tuning objective is customized to extend the model from pure text generation to aspect identification, adapting it to predict, alongside each token $x_i$, a label $y_i$ indicating whether that word or phrase is an aspect:

$$
\max_{\theta} \sum_{i=1}^{N} \log P(x_i, y_i | x_{<i}; \theta)
$$
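
The serialization of training examples is not specified in the text; one plausible format, shown purely as an illustration, pairs each annotated sentence with its aspect annotation as the completion target. The field names and prompt wording are assumptions.

```python
# Illustrative only: turning annotated records into prompt/completion pairs
# for supervised fine-tuning. The format is an assumption, not the paper's.
import json

def to_finetune_example(record: dict) -> dict:
    prompt = (
        "Identify the aspects and aspect terms in the following sentence.\n"
        f"Sentence: {record['sentence']}\nAnswer:"
    )
    completion = " " + json.dumps(
        {"aspect": record["aspect"], "aspect_terms": record["aspect_terms"]},
        ensure_ascii=False,
    )
    return {"prompt": prompt, "completion": completion}
```
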
### Training and Validation

The model is trained on the annotated training set using Large Language Model (LLM) embeddings. These embeddings enable classifiers to capture nuanced language features that simple word embeddings might miss. Performance metrics such as F1-score, precision, and recall are monitored during validation to tune the model parameters.
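
A sketch of the validation metrics, assuming gold and predicted aspect labels have already been aligned (alignment code not shown):

```python
from sklearn.metrics import precision_recall_fscore_support

def aspect_metrics(y_true: list[str], y_pred: list[str]) -> dict:
    """Macro-averaged precision, recall, and F1 over aligned aspect labels."""
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0
    )
    return {"precision": precision, "recall": recall, "f1": f1}
```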

## Phase 3: Aspect Sentiment Analysis

### Additional Fine-Tuning

The model undergoes a second fine-tuning stage to predict the sentiment polarity of each identified aspect. With $z_i$ denoting the polarity assigned to aspect $y_i$, the objective function for this phase is:

$$
\max_{\theta} \sum_{i=1}^{N} \log P(x_i, y_i, z_i | x_{<i}, y_{<i}; \theta)
$$

### Training and Validation

Here, LLM embeddings remain instrumental in training classifiers that accurately predict sentiment polarity for the identified aspects.
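
As an illustration of this step, a lightweight classifier trained on LLM embeddings might look as follows; `get_embedding` stands in for whichever embedding model or endpoint is used and is an assumed helper, not part of the described pipeline.

```python
# Sketch of a polarity classifier over LLM embeddings; get_embedding() is an
# assumed helper returning one vector per aspect-bearing text span.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_polarity_classifier(spans: list[str], labels: list[str], get_embedding):
    X = np.vstack([get_embedding(s) for s in spans])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, labels)  # labels such as "positive" / "negative"
    return clf
```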

## Phase 4: Post-Processing

### Sentiment Aggregation

For recurring aspects, sentiments are statistically aggregated, for instance by taking the mode of the predicted labels or the mean of numeric polarity scores, to produce a single consolidated sentiment score per aspect. This aggregation summarizes the overall sentiment toward each aspect.
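
A minimal aggregation sketch using a majority vote per (company, aspect) pair; a mean over numeric scores would work analogously. Field names follow the sample output below.

```python
from collections import Counter, defaultdict

def aggregate_sentiments(predictions: list[dict]) -> dict:
    """Majority-vote sentiment per (company, aspect) pair."""
    grouped = defaultdict(list)
    for p in predictions:
        grouped[(p["company"], p["aspect"])].append(p["sentiment"])
    return {key: Counter(votes).most_common(1)[0][0] for key, votes in grouped.items()}
```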

### Output Formatting

Finally, the extracted insights are serialized as JSON Lines (JSONL), a format that is both human-readable and machine-parseable.
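
A minimal writer for this step; `ensure_ascii=False` keeps the Korean fields readable in the output file.

```python
import json

def write_jsonl(records: list[dict], path: str) -> None:
    """Write one JSON object per line, preserving non-ASCII (Korean) text."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```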

## Sample Output

```json
[
  {"company": "카카오", "aspect": "product & service", "aspect_terms": ["데이터센터 화재에 따른 먹통 사태", "데이터센터 화재에 따른 서비스 장애 피해지원 계획", "이모티콘 총 종 제공", "매출 손실 규모액에 따른 지원금 지급"], "opinion_terms": ["보상안 발표", "다양한 단체와 협의체를 구성하고 논의를 지속", "직접적인 피해가 큰 경우만 별도 과정을 거쳐 개별 지원", "사회적 책임 차원의 일괄 지원을 결정", "안정적인 서비스를 제공하겠다는 약속"], "sentiment": "positive"},
  {"company": "넥슨", "aspect": "social", "aspect_terms": ["김정주 전 회장의 가상자산 계좌가 해킹", "억원어치 가상자산이 도난당한 사실"], "opinion_terms": ["사망자의 가상자산은 제도가 없어 보호받지 못한다는 지적", "논란이 일었다"], "sentiment": "negative"}
]
```

## Conclusion and Alternative Solutions

This section has described an ABSA implementation built on GPT-4, providing a nuanced understanding of sentiment toward different aspects of companies and their owners. Future iterations could explore other generative models or add multilingual support for greater scalability, and automated annotation techniques could be integrated to expedite the data preparation phase. Incorporating these elements could offer even more comprehensive insights, further enriching reputation management tactics.
79 changes: 79 additions & 0 deletions book/paper/implementation/outline.md
# Implementation

The task aims to implement an Aspect-Based Sentiment Analysis (ABSA) pipeline using generative language models like GPT-4. This implementation covers four crucial phases: Data Preparation, Aspect Extraction, Sentiment Analysis, and Post-Processing. Each phase has its specialized sub-tasks that include data cleaning, model fine-tuning, prediction, and output formatting. The objective is to obtain an in-depth understanding of the sentiment related to multiple aspects of a company and its owner for more nuanced reputation management.

## Streamlined Procedure

### Phase 1: Data Preparation

1. **Data Collection**

- **Technique**: Web scraping algorithms would crawl the Daum news website for articles specifically related to companies and their owners. The focus would be on aspects outlined in the prompts, such as "management" or "financial."

2. **Data Annotation**

- **Procedure**: The acquired corpus would undergo a manual annotation process where experts identify and label aspects and sentiments. The aspects would be categorized as per the guideline, such as "management," "workplace," etc.
- **Prompt-Guided Labeling**: Ensure that the aspects and sentiments are labeled according to pre-defined prompts, to maintain a standardized data set.

3. **Data Cleaning**

- **Text Normalization**: This involves transforming all text to a standard form—converting to lowercase and removing special characters or extra spaces.
- **Irrelevant Data Removal**: Any data points that are extraneous or not aligned with the objectives should be discarded.

4. **Train-Test Split**
- **Stratification**: The data set would be divided into training, validation, and test subsets. Stratified sampling ensures each subset is representative of the overall data composition in terms of the aspects and sentiments.

### Phase 2: Aspect Extraction (AE)

1. **Model Choice**

- **Rationale**: Generative models like GPT-4 have proven efficacy in complex language tasks, making them apt for the Aspect Extraction task.

2. **Fine-Tuning Objective Function**

- Adapt the model to predict not just the next word but also whether a word or phrase is an aspect.

$$
\max_{\theta} \sum_{i=1}^{N} \log P(x_i, y_i | x_{<i}; \theta)
$$

3. **Training**

- **Data Input**: Use the labeled training set acquired in Phase 1.
- **LLM Embeddings**: As specified, LLM embeddings will be used to train classifiers, capitalizing on their ability to capture nuanced language features.

4. **Validation**
- **Model Tuning**: The validation set will be used for hyperparameter tuning and to validate the model's performance metrics such as F1-score, precision, and recall.

### Phase 3: Aspect Sentiment Analysis (ASA)

1. **Additional Fine-Tuning**

- Further fine-tune the model to predict sentiment polarity for each extracted aspect.

$$
\max_{\theta} \sum_{i=1}^{N} \log P(x_i, y_i, z_i | x_{<i}, y_{<i}; \theta)
$$

2. **Training and Validation**

- **LLM Embeddings**: The LLM embeddings will be instrumental again here, to train classifiers that predict sentiment polarity for the aspects identified.

### Phase 4: Post-Processing

1. **Sentiment Aggregation**

- **Methodology**: The identified sentiments for recurring aspects are statistically aggregated using methods like mode or mean to give a consolidated sentiment score.

2. **Output Formatting**
- **JSONlines**: The final output would be formatted into JSONlines, ensuring that it is both human-readable and machine-parseable.

## Sample Output

```json
[
{"company": "카카오", "aspect": "product & service", "aspect_terms": ["데이터센터 화재에 따른 먹통 사태", "데이터센터 화재에 따른 서비스 장애 피해지원 계획", "이모티콘 총 종 제공", "매출 손실 규모액에 따른 지원금 지급"], "opinion_terms": ["보상안 발표", "다양한 단체와 협의체를 구성하고 논의를 지속", "직접적인 피해가 큰 경우만 별도 과정을 거쳐 개별 지원", "사회적 책임 차원의 일괄 지원을 결정", "안정적인 서비스를 제공하겠다는 약속"], "sentiment": "positive"},
{"company": "넥슨", "aspect": "social", "aspect_terms": ["김정주 전 회장의 가상자산 계좌가 해킹", "억원어치 가상자산이 도난당한 사실"], "opinion_terms": ["사망자의 가상자산은 제도가 없어 보호받지 못한다는 지적", "논란이 일었다"], "sentiment": "negative"}
]
```
