
Dialogue-adaptive Pre-training from Quality Estimation

This is the code for the paper Dialogue-adaptive Language Model Pre-training from Quality Estimation.

1. Requirements

(Our experimental environment, for reference)

Python 3.7+

PyTorch (1.0.0)

NLTK (3.4.5)

2. Datasets

2.1 Datasets for constructing pre-training corpus

DailyDialog ----> ./datasets/dialog/ijcnlp_daillydialog/

PERSONA-CHAT ----> ./datasets/dialog/convai2_personachat/

Topical-Chat ----> ./datasets/dialog/topicalchat/

BlendedSkillTalk ----> ./datasets/dialog/blended_skill_talk/

After downloading these datasets, extract them to the corresponding directories.

2.2 Datasets for Downstream Tasks

MuTual & MuTual^plus

DailyDialog & PERSONA-CHAT (annotated)

We lightly pre-process the datasets so that they share a uniform format; the pre-processed data can be found in ./datasets/.
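Before moving on, it can help to confirm the layout. The following snippet is not part of the repository; it simply checks that the four pre-training corpora have been extracted into the directories listed above:

```python
# Sanity check: verify that the four pre-training corpora have been
# extracted into the directories expected by the scripts below.
import os

EXPECTED_DIRS = [
    "./datasets/dialog/ijcnlp_daillydialog/",
    "./datasets/dialog/convai2_personachat/",
    "./datasets/dialog/topicalchat/",
    "./datasets/dialog/blended_skill_talk/",
]

for path in EXPECTED_DIRS:
    status = "ok" if os.path.isdir(path) else "MISSING"
    print(f"{status:7s} {path}")
```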

3. Instructions

3.1 Construct Pre-training Corpus

Get the raw text of the dialogues

python ./codes/dialog_dapo/process_rawtext.py
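process_rawtext.py flattens the downloaded dialogues into plain text. Purely as an illustration of what this step involves, here is a minimal sketch for DailyDialog, assuming the official release's dialogues_text.txt file, where each line holds one dialogue with utterances separated by __eou__; the other corpora need their own readers, and the actual logic is in the script above.

```python
# Illustrative only: flatten DailyDialog-style raw files into plain text.
# The repository's implementation is process_rawtext.py.
from pathlib import Path

# Assumed location/filename of the official DailyDialog release.
RAW_FILE = Path("./datasets/dialog/ijcnlp_daillydialog/dialogues_text.txt")

def read_dialogues(path):
    """Yield each dialogue as a list of utterance strings."""
    for line in path.read_text(encoding="utf-8").splitlines():
        utterances = [u.strip() for u in line.split("__eou__") if u.strip()]
        if utterances:
            yield utterances

if __name__ == "__main__":
    for dialogue in read_dialogues(RAW_FILE):
        print(" ".join(dialogue))
```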

Count the n-grams

python ./codes/dialog_dapo/countngram.py
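countngram.py collects the n-gram statistics used later for the NIDF scores. A minimal sketch of such counting with NLTK (listed in the requirements) is shown below; the maximum order of 3 is an assumption suggested by the _3_NIDF suffix in the script names of Section 3.2.

```python
# Illustrative sketch of the n-gram counting step; the repository's
# version is countngram.py. Requires the NLTK "punkt" tokenizer data.
from collections import Counter

from nltk import word_tokenize
from nltk.util import ngrams

def count_ngrams(lines, max_n=3):
    """Return {n: Counter of n-gram tuples} over an iterable of strings."""
    counters = {n: Counter() for n in range(1, max_n + 1)}
    for line in lines:
        tokens = word_tokenize(line.lower())
        for n in range(1, max_n + 1):
            counters[n].update(ngrams(tokens, n))
    return counters
```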

Get n-NIDFs

python ./codes/dialog_dapo/get_nidf.py
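get_nidf.py turns the n-gram counts into n-NIDF scores. Assuming NIDF here denotes the usual normalized inverse document frequency (an IDF rescaled to [0, 1] over the corpus vocabulary), a sketch could look like this; the exact formulation may differ from the repository's.

```python
# Illustrative sketch of computing normalized IDF (NIDF) scores for
# n-grams; the repository's implementation is get_nidf.py.
import math

def ngram_nidf(doc_counts, num_docs):
    """Map each n-gram to a score in [0, 1].

    doc_counts: {ngram: number of dialogues containing it}
    num_docs:   total number of dialogues
    """
    idf = {g: math.log(num_docs / c) for g, c in doc_counts.items()}
    lo, hi = min(idf.values()), max(idf.values())
    span = (hi - lo) or 1.0  # guard against a degenerate corpus
    return {g: (v - lo) / span for g, v in idf.items()}
```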

Build the pre-training corpus

python ./codes/dialog_dapo/dialog_text_preeval.py
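dialog_text_preeval.py attaches a quality-estimation signal to each dialogue and writes the scored corpus as CSV. The snippet below is a hypothetical illustration only: it scores a dialogue by the mean NIDF of its tokens and writes (text, score) rows. The paper's actual construction criteria live in the script above, and the column names here are assumptions.

```python
# Hypothetical illustration of building a (dialogue text, score) CSV.
# The real construction criteria are implemented in dialog_text_preeval.py.
import csv

from nltk import word_tokenize

def write_scored_corpus(dialogues, nidf, out_path):
    """dialogues: iterable of strings; nidf: {token: score in [0, 1]}."""
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["text", "score"])  # assumed column names
        for text in dialogues:
            tokens = word_tokenize(text.lower())
            scores = [nidf.get(t, 0.0) for t in tokens]
            score = sum(scores) / len(scores) if scores else 0.0
            writer.writerow([text, round(score, 4)])
```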

Split the pre-training corpus

python ./codes/dialog_dapo/split_rawtext.py
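split_rawtext.py divides the scored corpus into the train and dev files moved in the next step. A simple random split along those lines might look as follows; the 95/5 ratio, the random seed, and the combined input file name are assumptions.

```python
# Illustrative train/dev split of the scored corpus; the repository's
# version is split_rawtext.py. The 5% dev ratio and seed are assumptions.
import csv
import random

def split_csv(in_path, train_path, dev_path, dev_ratio=0.05, seed=42):
    """Shuffle the rows of in_path and write dev/train CSVs."""
    with open(in_path, newline="", encoding="utf-8") as f:
        rows = list(csv.reader(f))
    header, body = rows[0], rows[1:]
    random.Random(seed).shuffle(body)
    cut = int(len(body) * dev_ratio)
    for path, subset in [(dev_path, body[:cut]), (train_path, body[cut:])]:
        with open(path, "w", newline="", encoding="utf-8") as f:
            writer = csv.writer(f)
            writer.writerow(header)
            writer.writerows(subset)
```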

Move the pre-training corpus to the target directory

mv ./datasets/dialog/rawtext_dialog_score_train.csv ./datasets/dialog_eval_pretrain/rawtext_pretrain/train.csv
mv ./datasets/dialog/rawtext_dialog_score_dev.csv ./datasets/dialog_eval_pretrain/rawtext_pretrain/dev.csv

3.2 Pre-training ELECTRA with DAPO and fine-tuning on downstream tasks

We provide the scripts used for pre-training and fine-tuning.

Pre-training

sh ./codes/dialog_dapo/scripts/pretrain_myptALL_3_NIDF.sh

Fine-tuning

sh ./codes/dialog_dapo/scripts/downstream_myptALL_3_NIDF.sh

The results can be found in ./results/electraDAPO_myptALL_3_NIDF/electraDAPO_myptALL_3_NIDF_downstream_log_results.txt.
