How can Document metadata be passed into prompts?
#1136
Comments
Another format for retrieving text with metadata could be:
Or maybe even:
This way, when asking questions, I can ask things like:
I have a number of different use cases where this would also be helpful. I considered just adding the metadata directly to the text before embedding, but that's not ideal.
Not 100% sure whether it's applicable to your case, but if you are using the stuff chain, you can do this by adjusting the `document_prompt` kwarg.
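As a rough illustration of what such a per-document prompt does, here is a plain-Python sketch (these are not LangChain's actual classes; the `guest` metadata key and both templates are illustrative assumptions):

```python
# Plain-Python sketch of a per-document prompt: each retrieved document is
# rendered together with its metadata before being joined into the final
# {context} string. The "guest" key and templates are illustrative.

DOC_TEMPLATE = "Guest: {guest}\n{page_content}"

def format_doc(doc):
    """Render one document together with its metadata fields."""
    return DOC_TEMPLATE.format(page_content=doc["page_content"], **doc["metadata"])

def build_context(docs):
    """Join the rendered documents into the context passed to the LLM."""
    return "\n\n".join(format_doc(d) for d in docs)

docs = [
    {"page_content": "We talked about embeddings.", "metadata": {"guest": "Greg"}},
    {"page_content": "We talked about agents.", "metadata": {"guest": "Ada"}},
]
print(build_context(docs))
```

With metadata rendered into the context this way, a question like "what did Greg say?" at least has the guest names visible to the model.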
Wow that's cool, didn't know about that kwarg! Thanks, will try this 😃
This won't change the docs grabbed by the retriever, right? For example, if I have a guest (Greg) stored in the metadata and I ask "what did Greg say", the retriever won't take the guest into account when grabbing the sources, since it matches on something like similarity.
No, that's just for the refinement of the context documents by the LLM part.
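To make retrieval itself metadata-aware, the metadata has to be filtered on explicitly before similarity scoring. A toy sketch of that pre-filter step (the word-overlap "similarity" and the `metadata_filter` argument are illustrative, not any specific vector store's API):

```python
# Toy retriever: similarity scoring alone ignores metadata, but an explicit
# metadata filter can narrow the candidate set first. The word-overlap
# scoring and the metadata_filter argument are illustrative assumptions.

def retrieve(docs, query, k=2, metadata_filter=None):
    if metadata_filter:
        docs = [d for d in docs
                if all(d["metadata"].get(key) == val
                       for key, val in metadata_filter.items())]
    query_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: -len(query_words & set(d["page_content"].lower().split())),
    )
    return scored[:k]

docs = [
    {"page_content": "Greg on prompt design.", "metadata": {"guest": "Greg"}},
    {"page_content": "Ada on prompt design.", "metadata": {"guest": "Ada"}},
]
hits = retrieve(docs, "what did Greg say about prompt design",
                k=1, metadata_filter={"guest": "Greg"})
print(hits[0]["metadata"]["guest"])
```

Several real vector stores expose a similar metadata filter on their search calls; without one, both documents above would rank identically on text similarity alone.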
Is there a way I could do the same with a ConversationalRetrievalChain?
@joe-barhouch Did you solve this? I want to use metadata as an input_variable but it only seems to allow 'context', which is page_content.
@Robs-Git-Hub I had to step back from Conversational Agents. The layer of abstraction helps with prototypes but hurts full-fledged apps. I ended up implementing my own version with an LLMChain with memory; all of the document retrieval is taken care of by immediately calling the retriever myself. At the end of the day the RAG application just copy-pastes the retrieved results into the prompt, so I handled it on my own without the abstraction layer of Conversational Agents.
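The do-it-yourself approach described above can be sketched as a minimal RAG loop without chain abstractions (the `retriever` and `llm` callables here are hypothetical stubs standing in for real components, and the prompt layout is illustrative):

```python
# Hypothetical minimal RAG loop without chain abstractions: retrieve,
# paste the results into the prompt yourself, then call the model.
# The retriever/llm callables are stubs, not LangChain objects.

def rag_answer(question, history, retriever, llm):
    docs = retriever(question)
    context = "\n\n".join(
        f"[{d['metadata'].get('guest', 'unknown')}] {d['page_content']}"
        for d in docs
    )
    prompt = (
        f"Chat history:\n{history}\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return llm(prompt)

# Stub components for demonstration only.
fake_retriever = lambda q: [
    {"page_content": "We covered vector stores.", "metadata": {"guest": "Greg"}}
]
fake_llm = lambda prompt: f"(model saw {len(prompt)} chars)"

print(rag_answer("what did Greg cover?", "", fake_retriever, fake_llm))
```

Because the prompt is assembled by hand, any metadata field can be interpolated wherever it is needed, which is exactly what the chain abstraction made awkward.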
Thanks for the quick reply. Very helpful, and I was reaching a similar conclusion.
for ConversationalRetrievalChain
@theekshanamadumal
What is the difference between "prompt" and "document_prompt"?
Yes. You should know what the metadata fields in the document are before creating the document prompt.
The document prompt is the prompt template used to organize the content of the retrieved documents.
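The distinction between the two prompt levels can be sketched in plain Python (both templates below are illustrative assumptions, not LangChain's defaults): the document prompt formats each retrieved document, while the main prompt formats the overall request once.

```python
# Two prompt levels, sketched without LangChain classes: the document
# prompt renders EACH retrieved document (and can reference metadata
# fields like "source"), while the main prompt wraps the joined context
# and the question. Both templates are illustrative assumptions.

document_prompt = "Source: {source}\nContent: {page_content}"
main_prompt = "Answer from the context below.\n\n{context}\n\nQuestion: {question}"

docs = [
    {"page_content": "Retrievers return Documents.", "source": "docs/retrievers.md"},
    {"page_content": "Documents carry metadata.", "source": "docs/documents.md"},
]
context = "\n\n".join(document_prompt.format(**d) for d in docs)
final = main_prompt.format(context=context, question="What do Documents carry?")
print(final)
```

This is why the document prompt is where metadata fields become usable: its input variables can include any metadata key, whereas the main prompt only ever sees the already-joined context string.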
Hi, @batmanscode! I'm helping the LangChain team manage their backlog and am marking this issue as stale. It looks like you opened this issue to discuss passing Document metadata into prompts.

Could you please confirm whether this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you for your understanding and contribution to LangChain!
Hello, I am looking at a similar use case. I am extracting some metadata using 'similarity_search'. Now I want to use this in another QA chain. Can you show me the code snippet you used?
Here is an example:

```python
metadata = {"guest": guest_name}
question = "which guests have talked about <topic>?"
```

Using VectorDBQA, this could be possible if {context} contained text + metadata.