diff --git a/notebooks/llm-rag-llamaindex/llm-rag-llamaindex.ipynb b/notebooks/llm-rag-llamaindex/llm-rag-llamaindex.ipynb
index 237c542ddd1..1b039f9daa1 100644
--- a/notebooks/llm-rag-llamaindex/llm-rag-llamaindex.ipynb
+++ b/notebooks/llm-rag-llamaindex/llm-rag-llamaindex.ipynb
@@ -186,6 +186,7 @@
     "- [**bge-reranker-v2-m3**](https://huggingface.co/BAAI/bge-reranker-v2-m3)\n",
     "- [**bge-reranker-large**](https://huggingface.co/BAAI/bge-reranker-large)\n",
     "- [**bge-reranker-base**](https://huggingface.co/BAAI/bge-reranker-base)\n",
+    "\n",
     "Reranker model with cross-encoder will perform full-attention over the input pair, which is more accurate than embedding model (i.e., bi-encoder) but more time-consuming than embedding model. Therefore, it can be used to re-rank the top-k documents returned by embedding model.\n",
     "\n",
     "You can also find available LLM model options in [llm-chatbot](../llm-chatbot/README.md) notebook.\n"
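
For reference, the cross-encoder reranking described in the changed cell can be sketched in plain Python; this is a minimal illustration, not the notebook's LlamaIndex pipeline, using `sentence-transformers` with the `BAAI/bge-reranker-base` model listed above, and a made-up query plus candidate documents standing in for the top-k results of the bi-encoder retriever.

```python
# Minimal sketch: re-rank bi-encoder retrieval results with a cross-encoder reranker.
# Assumptions: sentence-transformers is installed; the query and documents below are
# illustrative placeholders, not data from the notebook.
from sentence_transformers import CrossEncoder

query = "How does retrieval-augmented generation work?"
top_k_docs = [
    "RAG retrieves relevant documents and passes them to an LLM as extra context.",
    "Bi-encoders embed the query and documents independently for fast retrieval.",
    "Cross-encoders score each query-document pair jointly with full attention.",
]

# The cross-encoder sees the (query, document) pair together, which is slower but
# more accurate than comparing precomputed embeddings.
reranker = CrossEncoder("BAAI/bge-reranker-base")
scores = reranker.predict([(query, doc) for doc in top_k_docs])

# Reorder the candidates by relevance score, highest first.
reranked = [doc for _, doc in sorted(zip(scores, top_k_docs), key=lambda x: x[0], reverse=True)]
print(reranked[0])
```

In a LlamaIndex-based pipeline this step is typically applied to the retrieved nodes before they are passed to the LLM, which matches the cell's description of re-ranking the top-k documents returned by the embedding model.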