# PACIFIC: Towards Proactive Conversational Question Answering over Tabular and Textual Data in Finance

PACIFIC (ProActive Conversational Question Answering in FInanCe) contains 2,757 dialogues with 19,008 QA turns.

You can download the data from the PACIFIC dataset release.

For more information, please refer to the PACIFIC website or read our EMNLP 2022 paper (PDF).
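
As a quick way to get oriented after downloading, the sketch below loads and inspects the dialogues in Python. It is only an illustration: the file name (`train.json`) and the assumption of a JSON list of dialogue records are hypothetical, so adjust them to match the actual files in the release.

```python
# Hypothetical sketch: inspect the downloaded PACIFIC dialogues,
# assuming a JSON release. The file name and structure are assumptions;
# check the actual dataset files for the real schema.
import json

with open("train.json", encoding="utf-8") as f:
    dialogues = json.load(f)

print(f"{len(dialogues)} dialogue records loaded")

first = dialogues[0]
# Print the top-level keys (if records are dicts) to discover the schema.
print(list(first.keys()) if isinstance(first, dict) else first)
```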

## UniPCQA Model

### Training & Testing

```bash
python train.py --do_train --do_eval \
    --max_seq_length=1280 --max_target_length=128 \
    --gpu=<your_gpu_ids> \
    --overwrite_output_dir \
    --per_gpu_train_batch_size=<your_batch_size> \
    --per_gpu_eval_batch_size=<your_batch_size> \
    --model_name_or_path="Salesforce/codet5-base" \
    --data_name='pacific' \
    --model_name='codet5'
```
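
The command above fine-tunes `Salesforce/codet5-base` as a sequence-to-sequence generator. For orientation only (this is not the repository's own code), the backbone can be loaded with Hugging Face `transformers` as follows; the example input string is made up, and the length limits simply mirror `--max_seq_length` and `--max_target_length` from the command above.

```python
# Minimal sketch of the CodeT5 backbone that train.py fine-tunes.
# Not the repo's code path; just shows the seq2seq generation setup.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/codet5-base")

# Encode a (hypothetical) flattened question + table/text context.
inputs = tokenizer(
    "question: What was the revenue change? context: ...",
    return_tensors="pt",
    truncation=True,
    max_length=1280,   # mirrors --max_seq_length
)
outputs = model.generate(**inputs, max_length=128)  # mirrors --max_target_length
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```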

Please cite our work if you use our dataset or code. Thank you!

```bibtex
@inproceedings{emnlp22-pacific,
  author    = {Yang Deng and
               Wenqiang Lei and
               Wenxuan Zhang and
               Wai Lam and
               Tat{-}Seng Chua},
  title     = {{PACIFIC:} Towards Proactive Conversational Question Answering over
               Tabular and Textual Data in Finance},
  booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural
               Language Processing, {EMNLP} 2022},
  year      = {2022},
}
```