This issue was moved to a discussion.

How to load a vectorstore and docstore locally and build a retriever for RAG #21012

Closed
5 tasks done
Scottie-tech opened this issue Apr 29, 2024 · 0 comments
Labels
Ɑ: retriever Related to retriever module Ɑ: vector store Related to vector store module

Comments

@Scottie-tech

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

# Imports assumed for the snippet below (module paths per langchain 0.1.x):
from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import LocalFileStore
from langchain.storage._lc_store import create_kv_docstore
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel, RunnablePassthrough

# load_embeddings, child_splitter, parent_splitter, format_docs, prompt,
# llm, and query are defined elsewhere in the application.

# Load the persisted FAISS index of child chunks and the file-backed
# key/value store of parent documents.
vectorstore = FAISS.load_local(
    "./faiss_index", load_embeddings(), allow_dangerous_deserialization=True
)
store = LocalFileStore(root_path="./store")
store = create_kv_docstore(store)

retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=store,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter,
    search_kwargs={"k": 5},
)

rag_chain_from_docs = (
    RunnablePassthrough.assign(context=(lambda x: format_docs(x["context"])))
    | prompt
    | llm
    | StrOutputParser()
)

rag_chain_with_source = RunnableParallel(
    {"context": retriever, "question": RunnablePassthrough()}
).assign(answer=rag_chain_from_docs)
llm_response = rag_chain_with_source.invoke(query)
print(llm_response)

The code above loads the local vectorstore and docstore and builds the retriever. When I run a query through it, the retriever finds no documents, so the large language model answers purely from its own knowledge without receiving any chunks from the knowledge base.
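To see why an empty (or wrongly located) docstore produces exactly this symptom, here is a minimal, dependency-free sketch of the parent/child lookup that a ParentDocumentRetriever-style setup performs. The names (`child_index`, `docstore`, `retrieve`) are illustrative, not the LangChain API: child chunks retrieved from the vector index carry a `doc_id` that must resolve against the persisted parent store.

```python
# Stand-in for the FAISS child-chunk index: each hit points at a parent doc.
child_index = {
    "chunk-1": {"text": "...", "doc_id": "parent-A"},
    "chunk-2": {"text": "...", "doc_id": "parent-B"},
}

# Stand-in for the on-disk docstore. If the indexing run never persisted
# parent documents here, it loads empty -- as appears to happen above.
docstore = {}

def retrieve(query_hits):
    """Map retrieved child chunks back to their parent documents."""
    parent_ids = [child_index[c]["doc_id"] for c in query_hits]
    return [docstore[p] for p in parent_ids if p in docstore]

print(retrieve(["chunk-1", "chunk-2"]))  # [] -- empty store means no context
```

Even when the vector search itself finds child chunks, the final mapping step returns nothing if the parent store has no entries, and the chain then calls the LLM with empty context.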

Error Message and Stack Trace (if applicable)

No response

Description

As described above: the retriever returns no documents for any query, so the model never receives context from the knowledge base and falls back on its own knowledge.
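A likely cause is that the indexing run and the query run do not share the same persisted state: the parent documents must be written into the file store when the index is built, and the later session must reopen the same directory. The sketch below illustrates that contract with a minimal file-backed key/value store; `FileKVStore` and its methods are illustrative stand-ins, not the LangChain `LocalFileStore` API.

```python
import os
import pickle
import tempfile

class FileKVStore:
    """A minimal LocalFileStore-style key/value store, one file per key."""

    def __init__(self, root):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def mset(self, pairs):
        for key, value in pairs:
            with open(os.path.join(self.root, key), "wb") as f:
                pickle.dump(value, f)

    def mget(self, keys):
        out = []
        for key in keys:
            path = os.path.join(self.root, key)
            if os.path.exists(path):
                with open(path, "rb") as f:
                    out.append(pickle.load(f))
            else:
                out.append(None)
        return out

    def yield_keys(self):
        yield from os.listdir(self.root)

root = os.path.join(tempfile.mkdtemp(), "store")

# Indexing run: persist the parent documents alongside the vector index.
FileKVStore(root).mset([("parent-A", "full parent document text")])

# Later query run: reopening the SAME directory finds the parents again.
reloaded = FileKVStore(root)
print(sorted(reloaded.yield_keys()))  # ['parent-A']
```

If the query run points at a different directory (or the parents were never written), `yield_keys()` comes back empty, which matches the behavior reported here; checking the key count of the reloaded store is a quick way to confirm which side is at fault.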

System Info

python==3.10
langchain 0.1.16
langchain-community 0.0.34
langchain-core 0.1.46
langchain-openai 0.1.4
langchain-text-splitters 0.0.1
langchainhub 0.1.15
langdetect 1.0.9
langsmith 0.1.33

@dosubot dosubot bot added Ɑ: retriever Related to retriever module Ɑ: vector store Related to vector store module 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature labels Apr 29, 2024
@eyurtsev eyurtsev removed the 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature label Apr 29, 2024
@langchain-ai langchain-ai locked and limited conversation to collaborators Apr 29, 2024
@eyurtsev eyurtsev converted this issue into discussion #21031 Apr 29, 2024
