Sylviaming/RM-CW3

Literature Review on Retrieval-Augmented Generation (RAG) Research

This SurVis interface compiles ten representative papers from the active research field of Retrieval-Augmented Generation (RAG). The papers cover key aspects such as basic architectures, retrieval strategies, application domains, and evaluation methods, and are intended as a comprehensive reference for researchers in the RAG field.

I. Papers Related to Basic Architectures

  1. "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks"
    • Authors: Patrick Lewis et al.
    • Published in: NeurIPS 2020
    • Core Contribution: Pioneered the RAG framework, combining pre-trained parametric and non-parametric memories for language generation. This laid the foundation for subsequent research and demonstrated significant effectiveness in knowledge-intensive natural language processing tasks.
    • DOI: 10.48550/arXiv.2005.11401
  2. "RARR: Researching and Revising What Language Models Say, Using Language Models"
    • Authors: Luyu Gao et al.
    • Published in: ACL 2023
    • Core Contribution: Proposed the RARR system, which uses language models to automatically find justifications and edit the outputs of text generation models, addressing the limitations of single-shot retrieval and enhancing the reliability of generated content.
    • DOI: 10.48550/arXiv.2210.08726
  3. "Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection"
    • Authors: Akari Asai et al.
    • Published in: ICLR 2024
    • Core Contribution: Introduced a self-reflection mechanism, enabling the model to autonomously determine when to retrieve knowledge and evaluate the quality of generated content. This enhanced the model's capabilities in knowledge utilization and generation accuracy.
    • DOI: 10.48550/arXiv.2310.11511
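
As context for the architectures above, the retrieve-then-generate pattern they all build on can be sketched in a few lines of Python. The toy corpus, the word-overlap scoring, and the `generate` stub below are illustrative stand-ins of my own, not any paper's actual implementation:

```python
# Minimal sketch of the retrieve-then-generate pattern shared by the
# architectures above. Corpus, scoring, and generate() are toy stand-ins.

CORPUS = [
    "The Eiffel Tower is located in Paris, France.",
    "RAG combines parametric and non-parametric memory.",
    "NeurIPS 2020 took place virtually.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    ranked = sorted(CORPUS,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    """Stand-in for a language-model call."""
    return f"Answer based on: {prompt[:60]}..."

def rag_answer(query: str) -> str:
    """Concatenate retrieved documents into the prompt, then generate."""
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)
```

In a real system the retriever would be a dense or sparse index and `generate` a language model; the control flow, however, is the same.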

II. Papers Related to Retrieval Strategies

  1. "In-Context Retrieval-Augmented Language Models"
    • Authors: Ori Ram et al.
    • Published in: TACL 2023
    • Core Contribution: Explored effective methods for integrating retrieved information in context. By preprocessing documents, it improved the performance of language models, providing new ideas for RAG in context understanding and information utilization.
    • DOI: 10.48550/arXiv.2302.00083
  2. "Retrieval Augmented Generation and Understanding in Vision: A Survey and New Outlook"
    • Authors: Xu Zheng et al.
    • Published in: arXiv preprint, 2025
    • Core Contribution: Provided a comprehensive review of retrieval-augmented techniques in the computer vision field, covering visual understanding and visual generation tasks. It also explored their applications in embodied AI, pointed out current limitations, and looked ahead to future directions.
    • DOI: 10.48550/arXiv.2503.18016
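
One common preprocessing step behind in-context integration is splitting documents into overlapping fixed-size chunks before indexing, so that each retrieved passage fits in the model's context window. The sketch below is a generic illustration with assumed parameters (`size`, `overlap`), not the pipeline from either paper:

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character windows with overlap, so a
    fact near a chunk boundary still appears whole in some chunk."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # advance by the non-overlapping part
    return chunks
```

Production systems usually chunk on token or sentence boundaries instead of raw characters, but the sliding-window idea is the same.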

III. Papers Related to Application Domains

  1. "FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation"
    • Authors: Tu Vu et al.
    • Published in: EMNLP 2023
    • Core Contribution: Utilized search engines to address the issue of knowledge timeliness in large language models. The proposed FreshPrompt method effectively improved the model's performance in dynamic question-answering benchmarks.
    • DOI: 10.48550/arXiv.2310.03214
  2. "Enhancing Retrieval-Augmented Large Language Models with Iterative Retrieval-Generation Synergy"
    • Authors: Zhihong Shao et al.
    • Published in: Findings of EMNLP 2023
    • Core Contribution: Proposed the iterative retrieval-generation synergy method ITER-RETGEN. It performed outstandingly in tasks such as multi-hop question answering, fact verification, and commonsense reasoning, outperforming some baseline methods.
    • DOI: 10.48550/arXiv.2305.15294
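
The iterative synergy can be paraphrased as a loop in which each round's draft answer is fed back into the next retrieval query, so later retrievals benefit from intermediate reasoning. This is a hedged sketch of that idea, not the authors' code; `toy_retrieve` and `toy_generate` are placeholders for a real retriever and generator:

```python
def iterative_rag(question, retrieve, generate, rounds=2):
    """Retrieve and generate repeatedly; each round's draft answer is
    appended to the retrieval query (sketch of the iterative idea)."""
    answer = ""
    for _ in range(rounds):
        query = f"{question} {answer}".strip()
        docs = retrieve(query)
        answer = generate(question, docs)
    return answer

# Toy stand-ins so the loop can be exercised end to end:
def toy_retrieve(query):
    return [f"doc matching: {query}"]

def toy_generate(question, docs):
    return f"answer using {len(docs)} doc(s)"
```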

IV. Papers Related to Evaluation Methods

  1. "Benchmarking Large Language Models in Retrieval-Augmented Generation"
    • Authors: Jiawei Chen et al.
    • Published in: AAAI 2024
    • Core Contribution: Established a benchmark for evaluating RAG, analyzing the noise robustness, negative sample rejection, information integration, and counterfactual robustness of different large language models in RAG tasks. This provided important references for model evaluation.
    • DOI: 10.48550/arXiv.2309.01431
  2. "ARES: An Automated Evaluation Framework for Retrieval-Augmented Generation Systems"
    • Authors: Jon Saad-Falcon et al.
    • Published in: NAACL 2024
    • Core Contribution: Introduced the ARES framework for automatically evaluating RAG systems. It evaluated RAG systems from dimensions such as context relevance, answer faithfulness, and answer relevance, reducing reliance on manual annotations and showing accurate performance in cross-domain evaluations.
    • DOI: 10.48550/arXiv.2311.09476
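
As a rough illustration of one of these evaluation dimensions, a noise-robustness check can be approximated by replacing part of each question's context with irrelevant documents and measuring how accuracy holds up. This toy harness (made-up data, containment-based scoring) is my own sketch, not the benchmark's actual code:

```python
import random

def noise_robustness(answer_fn, examples, noise_docs, noise_ratio=0.5, seed=0):
    """Accuracy when `noise_ratio` of each question's context documents
    are replaced by irrelevant ones (sketch of a noise-robustness test).

    examples: list of (question, gold_answer, relevant_docs) triples.
    answer_fn: callable (question, context_docs) -> answer string.
    """
    rng = random.Random(seed)
    correct = 0
    for question, gold, docs in examples:
        n_noise = int(len(docs) * noise_ratio)
        context = docs[: len(docs) - n_noise] + rng.sample(noise_docs, n_noise)
        rng.shuffle(context)  # don't let position reveal relevance
        if gold.lower() in answer_fn(question, context).lower():
            correct += 1
    return correct / len(examples)
```

Sweeping `noise_ratio` from 0 to 1 then yields a degradation curve per model, which is the kind of comparison such benchmarks report.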

V. Comprehensive Review Paper

  1. "Retrieval-Augmented Generation for AI-Generated Content: A Survey"
    • Authors: Penghao Zhao et al.
    • Published in: 2024
    • Core Contribution: Provided a comprehensive review of the application of RAG in the AI-Generated Content (AIGC) scenario, covering basic classification, enhancement methods, practical applications, benchmark testing, limitations, and future research directions.
    • DOI: 10.48550/arXiv.2402.19473

VI. Instructions for Use

  1. This SurVis interface is designed to provide researchers in the RAG field with a convenient tool for literature review and analysis. Users can quickly browse the core points, research contributions, and key conclusions of each paper through the interface.
  2. For in-depth research, users can click on the DOI link of the paper to obtain the full text for detailed reading and study.
  3. It is recommended that users consult the papers in each category according to their research interests and needs, trace the development of RAG technology, and explore potential research directions.
