Hallucination in Large Foundation Models

This repository accompanies A Survey of Hallucination in Large Foundation Models and is continuously updated with contemporary papers on hallucination in large foundation models. We broadly group the papers into four categories by modality: text, image, video, and audio.

Text

LLMs

  1. SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
  2. Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators
  3. HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models
  4. Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation
  5. PURR: Efficiently Editing Language Model Hallucinations by Denoising Language Model Corruptions
  6. Mitigating Language Model Hallucination with Interactive Question-Knowledge Alignment
  7. How Language Model Hallucinations Can Snowball
  8. Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback
  9. The Internal State of an LLM Knows When It's Lying
  10. Chain of Knowledge: A Framework for Grounding Large Language Models with Structured Knowledge Bases
  11. HALO: Estimation and Reduction of Hallucinations in Open-Source Weak Large Language Models
  12. A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation
  13. Dehallucinating Large Language Models Using Formal Methods Guided Iterative Prompting
  14. Sources of Hallucination by Large Language Models on Inference Tasks
  15. Citation: A Key to Building Responsible and Accountable Large Language Models
  16. Zero-Resource Hallucination Prevention for Large Language Models
  17. RARR: Researching and Revising What Language Models Say, Using Language Models

Multilingual LLMs

  1. Hallucinations in Large Multilingual Translation Models

Domain-specific LLMs

  1. Med-HALT: Medical Domain Hallucination Test for Large Language Models
  2. ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases

Image

  1. Evaluating Object Hallucination in Large Vision-Language Models
  2. Detecting and Preventing Hallucinations in Large Vision Language Models
  3. Plausible May Not Be Faithful: Probing Object Hallucination in Vision-Language Pre-training
  4. Hallucination Improves the Performance of Unsupervised Visual Representation Learning

Video

  1. Let’s Think Frame by Frame: Evaluating Video Chain of Thought with Video Infilling and Prediction
  2. Putting People in Their Place: Affordance-Aware Human Insertion into Scenes
  3. VideoChat: Chat-Centric Video Understanding

Audio

  1. LP-MusicCaps: LLM-Based Pseudo Music Captioning
  2. Audio-Journey: Efficient Visual+LLM-aided Audio Encodec Diffusion
