Text Hallucination Detection and Reduction: CoNLI

We are working on releasing the code and datasets.

Introduction

Python implementation of our paper: Chain of Natural Language Inference for Reducing Large Language Model Ungrounded Hallucinations.

We propose a generic post-editing framework based on OpenAI GPT that effectively detects and mitigates hallucinations.

Large language models (LLMs) can generate fluent natural language texts when given relevant documents as background context. This ability has attracted considerable interest in developing industry applications of LLMs. However, LLMs are prone to generating hallucinations that are not supported by the provided sources. In this paper, we propose a hierarchical framework to detect and mitigate such ungrounded hallucinations. Our framework uses Chain of Natural Language Inference (CoNLI) for hallucination detection and reduces hallucinations via post-editing. Our approach achieves state-of-the-art performance on hallucination detection and enhances text quality through rewriting, using LLMs without any fine-tuning or domain-specific prompt engineering. We show that this simple plug-and-play framework can serve as an effective choice for hallucination detection and reduction, achieving competitive performance across various contexts.
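To make the detect-then-rewrite flow concrete, below is a minimal, hypothetical sketch of a CoNLI-style loop: sentence-level and entity-level entailment checks against the source document, followed by a post-edit that drops ungrounded sentences. The names (conli_post_edit, nli_entails, extract_entities) and the toy entailment judge are illustrative assumptions only, not this repository's actual API; in the real framework the entailment decisions come from an LLM prompted as an NLI verifier, and detected hallucinations are rewritten rather than simply removed.

# Minimal sketch of a CoNLI-style hierarchical detect-then-rewrite loop.
# All names below are illustrative placeholders, not this repository's API.
import re
from typing import Callable, List


def split_sentences(text: str) -> List[str]:
    # Naive splitter; the real pipeline would use a proper sentence tokenizer.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def extract_entities(sentence: str) -> List[str]:
    # Placeholder entity extractor: capitalized tokens and numbers.
    return re.findall(r"\b(?:[A-Z][\w-]*|\d[\d.,%]*)\b", sentence)


def conli_post_edit(source: str, response: str,
                    nli_entails: Callable[[str, str], bool]) -> str:
    # Keep only response sentences that pass both levels of entailment checks.
    kept = []
    for sent in split_sentences(response):
        # Sentence-level check: is the whole claim entailed by the source?
        sentence_grounded = nli_entails(source, sent)
        # Entity-level check: every salient entity must also be grounded.
        entities_grounded = all(nli_entails(source, ent)
                                for ent in extract_entities(sent))
        if sentence_grounded and entities_grounded:
            kept.append(sent)
    return " ".join(kept)


if __name__ == "__main__":
    # Toy entailment judge for demonstration only: a hypothesis counts as
    # "entailed" if all of its entities appear in the source. In CoNLI this
    # judgment is produced by an LLM prompted as an NLI verifier.
    def toy_nli(source: str, hypothesis: str) -> bool:
        ents = extract_entities(hypothesis) or [hypothesis]
        return all(e.lower() in source.lower() for e in ents)

    src = "The meeting was held in Berlin on March 3 and lasted two hours."
    resp = ("The meeting took place in Berlin on March 3. "
            "It was chaired by Dr. Smith.")
    print(conli_post_edit(src, resp, toy_nli))
    # -> "The meeting took place in Berlin on March 3."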

Environment Requirements

We used Python 3.8.10. Please use the following command to install all the required Python packages:

pip install -r ./CoNLI/CoNLI/requirements.txt

Citation

If you find the repository or CoNLI helpful, please cite the following paper:

@article{lei2023chain,
  title={Chain of Natural Language Inference for Reducing Large Language Model Ungrounded Hallucinations},
  author={Lei, Deren and Li, Yaxi and Hu, Mengya and Wang, Mingyu and Yun, Vincent and Ching, Emily and Kamal, Eslam and others},
  journal={arXiv preprint arXiv:2310.03951},
  year={2023}
}
