Skip to content
New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Remove mentions of RAG from the docs #7376

Merged
merged 2 commits into from Sep 24, 2020
Merged
Show file tree
Hide file tree
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Jump to
Jump to file
Failed to load files.
Diff view
Diff view
1 change: 0 additions & 1 deletion docs/source/index.rst
Expand Up @@ -231,7 +231,6 @@ conversion utilities for the following models:
model_doc/lxmert
model_doc/bertgeneration
model_doc/layoutlm
model_doc/rag
internal/modeling_utils
internal/tokenization_utils
internal/pipelines_utils
91 changes: 0 additions & 91 deletions docs/source/model_doc/rag.rst

This file was deleted.

21 changes: 0 additions & 21 deletions docs/source/model_summary.rst
Expand Up @@ -672,27 +672,6 @@ DPR consists in three models:

DPR's pipeline (not implemented yet) uses a retrieval step to find the top k contexts given a certain question, and then it calls the reader with the question and the retrieved documents to get the answer.

RAG
-----------------------------------------------------------------------------------------------------------------------

.. raw:: html

<a href="https://huggingface.co/models?filter=rag">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-rag-blueviolet">
</a>
<a href="model_doc/rag.html">
<img alt="Doc" src="https://img.shields.io/badge/Model_documentation-rag-blueviolet">
</a>

`Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks <https://arxiv.org/abs/2005.11401>`_,
Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela

Retrieval-augmented generation ("RAG") models combine the powers of pretrained dense retrieval (DPR) and Seq2Seq models.
RAG models retrieve docs, pass them to a seq2seq model, then marginalize to generate outputs.
The retriever and seq2seq modules are initialized from pretrained models, and fine-tuned jointly, allowing both retrieval and generation to adapt to downstream tasks.

The two models RAG-Token and RAG-Sequence are available for generation.

More technical aspects
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Expand Down
5 changes: 3 additions & 2 deletions utils/check_repo.py
Expand Up @@ -314,8 +314,9 @@ def check_repo_quality():
print("Checking all models are properly tested.")
check_all_decorator_order()
check_all_models_are_tested()
print("Checking all models are properly documented.")
check_all_models_are_documented()
# Uncomment me when RAG is back
# print("Checking all models are properly documented.")
# check_all_models_are_documented()


if __name__ == "__main__":
Expand Down