Internal Consistency and Self-Feedback in Large Language Models: A Survey

*Equal contribution.
Corresponding author: Zhiyu Li (lizy@iaar.ac.cn).

News

Introduction

Welcome to the GitHub repository for our survey paper titled "Internal Consistency and Self-Feedback in Large Language Models: A Survey." This repository contains all the resources, code, and references associated with the paper. Our goal is to provide a unified perspective on the self-evaluation and self-updating mechanisms in LLMs, encapsulated within the frameworks of Internal Consistency and Self-Feedback.

Article Framework

Our survey includes:

  • Theoretical Framework:
    • Internal Consistency: A framework that offers a unified explanation for phenomena such as the lack of reasoning capability and the presence of hallucinations in LLMs. It assesses the coherence among an LLM's latent layer, decoding layer, and response layer, based on sampling methodologies.
    • Self-Feedback: Building on Internal Consistency, this framework comprises two modules, Self-Evaluation and Self-Update, which are used to improve either the model's responses or the model itself (a toy sketch of this loop appears after this list).
  • Systematic Classification: Studies are categorized by tasks and lines of work related to Self-Feedback mechanisms.
  • Evaluation Methods and Benchmarks: Summarizes the evaluation methods and benchmarks used in the field to assess how well Self-Feedback strategies work.
  • Critical Viewpoints: Explores significant questions such as "Does Self-Feedback Really Work?" and proposes hypotheses like the "Hourglass Evolution of Internal Consistency," "Consistency Is (Almost) Correctness," and "The Paradox of Latent and Explicit Reasoning."
  • Future Research Directions: Outlines promising directions for further exploration in the realm of Internal Consistency and Self-Feedback in LLMs.
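
To make the Self-Evaluation and Self-Update loop above concrete, the sketch below is a minimal, hypothetical illustration rather than code from the paper or this repository. It samples several candidate answers from a stub generate function (standing in for an LLM call), uses agreement among the samples as a consistency-based Self-Evaluation score, and performs a simple Self-Update by folding the disagreement back into the prompt until the answers are consistent enough. The function names, thresholds, and the stub itself are illustrative assumptions.

```python
import random
from collections import Counter


def generate(prompt: str) -> str:
    """Stub standing in for an LLM call; a real implementation would query a model."""
    return random.choice(["Paris", "Paris", "Paris", "Lyon"])  # toy, noisy "answers"


def self_evaluate(samples: list[str]) -> tuple[str, float]:
    """Consistency-based Self-Evaluation: share of samples agreeing with the majority answer."""
    answer, count = Counter(samples).most_common(1)[0]
    return answer, count / len(samples)


def self_feedback(prompt: str, n_samples: int = 5, threshold: float = 0.8, max_rounds: int = 3) -> str:
    """Minimal Self-Feedback loop: sample -> Self-Evaluate -> Self-Update, then repeat."""
    answer = ""
    for _ in range(max_rounds):
        samples = [generate(prompt) for _ in range(n_samples)]
        answer, consistency = self_evaluate(samples)
        if consistency >= threshold:
            return answer  # consistent enough: accept the majority answer
        # Self-Update (of the response): fold the disagreement back into the prompt and retry
        prompt = f"{prompt}\nEarlier samples disagreed ({sorted(set(samples))}); reconsider step by step."
    return answer  # fall back to the last majority answer


if __name__ == "__main__":
    print(self_feedback("What is the capital of France?"))
```

In practice, the Self-Evaluation signal could come from a verifier model, uncertainty estimates, or external tools rather than simple majority agreement; the loop structure stays the same.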

Project Structure

  • code/: Contains the experimental code used in our survey.
  • data/: Includes the statistical data referenced in our survey.
  • figures/: Contains the figures used in this repository.
  • latex/: The LaTeX source files for our survey.
  • papers/: A comprehensive list of relevant papers.
  • README.md: This file, providing an overview of the repository.

Contribution

We welcome and appreciate contributions that enhance this repository. You can:

  • add new papers relevant to Internal Consistency or Self-Feedback, or
  • suggest modifications to improve the survey.

Please submit an issue or a pull request with a brief description of your contribution, and we will review it promptly. Significant contributions may be acknowledged with your name included in the survey. Thank you for your support and collaboration.

Paper List

We provide a spreadsheet containing all the papers we reviewed: Literature. A more readable table format is a work in progress.

To-Do List

  • Create the Page.
  • Improve the paper list.