Inspecting the Factuality of Hallucinations in Abstractive Summarization

This repository contains the code needed to replicate the training and evaluation for the ACL 2022 paper "Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization" by Meng Cao, Yue Dong, and Jackie Chi Kit Cheung.

Dependencies and Setup

The code is based on Hugging Face's Transformers library.

```shell
git clone https://github.com/mcao516/EntFA.git
cd ./EntFA
pip install -r requirements.txt
python setup.py install
```

How to Run

The conditional masked language model (CMLM) checkpoint can be found here. For the masked language model (MLM), download bart.large from Fairseq's BART repository. Download both the CMLM and MLM checkpoints and put them in the models directory.
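Before training, it can help to confirm that everything landed where the commands below expect it. A minimal sanity-check sketch (the exact paths are assumptions based on the arguments used in this README; the CMLM checkpoint filename inside models/ depends on what you downloaded):

```python
import os

def check_missing(paths):
    """Return the subset of expected paths that do not exist yet."""
    return [p for p in paths if not os.path.exists(p)]

# Assumed layout, inferred from the --cmlm-model-path, --data-name-or-path,
# and --mlm-path arguments used elsewhere in this README.
expected = [
    "models",             # CMLM checkpoint and dictionaries
    "models/xsum-bin",    # binarized data for --data-name-or-path
    "models/bart.large",  # MLM directory from Fairseq's BART release
]
print(check_missing(expected))  # empty list means the layout looks right
```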

Train KNN Classifier

```shell
OUTPUT_DIR=knn_checkpoint
mkdir $OUTPUT_DIR

python examples/train_knn.py \
  --train-path data/train.json \
  --test-path data/test.json \
  --cmlm-model-path models \
  --data-name-or-path models/xsum-bin \
  --mlm-path models/bart.large \
  --output-dir $OUTPUT_DIR;
```

You can also find an example at examples/train_knn_classifier.ipynb.
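The classification step itself is a standard KNN fit over entity features; the repo-specific part is extracting those features with the CMLM and MLM. A minimal sketch of the classifier stage, assuming two probability-like features per entity (the toy data, feature layout, and labels here are illustrative, not the repo's actual training data):

```python
import pickle
from sklearn.neighbors import KNeighborsClassifier

# Toy stand-ins for per-entity features; in EntFA these would come from
# the CMLM/MLM models, not hand-written values.
X_train = [[0.90, 0.90], [0.80, 0.70], [0.10, 0.10], [0.20, 0.05]]
y_train = [1, 1, 0, 0]  # 1 = factual entity, 0 = non-factual (illustrative)

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

# Persist under the filename the evaluation command in this README expects.
with open("knn_classifier.pkl", "wb") as f:
    pickle.dump(knn, f)

print(knn.predict([[0.85, 0.80]]))  # prints [1]
```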

Evaluation

Evaluate the entity-level factuality of generated summaries. Input file format: one document (or summary) per line.
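A small sketch of producing files in that format (the filenames match the evaluation command below; the helper itself and the sample texts are hypothetical):

```python
docs = ["First source document.", "Second source document."]
hyps = ["First generated summary.", "Second generated summary."]

def write_lines(path, lines):
    """Write one text per line; line i of the source file must pair with
    line i of the hypothesis file."""
    with open(path, "w") as f:
        for line in lines:
            # Embedded newlines would break the one-per-line format.
            f.write(line.replace("\n", " ") + "\n")

write_lines("test.source", docs)
write_lines("test.hypothesis", hyps)
```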

```shell
SOURCE_PATH=test.source
TARGET_PATH=test.hypothesis

python examples/evaluation.py \
    --source-path $SOURCE_PATH \
    --target-path $TARGET_PATH \
    --cmlm-model-path models \
    --data-name-or-path models/xsum-bin \
    --mlm-path models/bart.large \
    --knn-model-path models/knn_classifier.pkl;
```

Also check examples/evaluation.ipynb.
