How do you create compatibility with ConversationalRetrievalChain? #58
I guess my question is: how do you change the inputs and outputs to use Lanarky and FastAPI...
Figured out the issue.
@Haste171 it would be great if you posted the solution too.
@talhaanwarch I managed to get this working with the following. I am using Weaviate as a vector store. I struggled quite a lot trying to get this working, and I'm still not 100% sure why it works with this library instead of just using LangChain; I'll need to dig into their codebase a bit more. In the meantime you can refer to this example:

```python
from typing import Any

import weaviate
from dotenv import load_dotenv
from fastapi import FastAPI
from langchain.chains import ConversationalRetrievalChain, LLMChain
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT
from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
from langchain.vectorstores.weaviate import Weaviate
from lanarky import LangchainRouter

load_dotenv()

app = FastAPI(title="ConversationalRetrievalChainDemo")

chatTemplate = """
Answer the question based on the chat history (delimited by <hs></hs>) and context (delimited by <ctx></ctx>) below.
-----------
<ctx>
{context}
</ctx>
-----------
<hs>
{chat_history}
</hs>
-----------
Question: {question}
Answer:
"""

PROMPT = PromptTemplate(
    input_variables=["context", "question", "chat_history"], template=chatTemplate
)


def create_chain():
    # Connect to a local Weaviate instance and wrap it as a LangChain vector store.
    weaviate_client = weaviate.Client("http://localhost:8080")
    vectorstore: Any = Weaviate(weaviate_client, "Idx_664773d4e6", "text")

    # Note: from_llm() below builds its own question generator and doc chain,
    # so these two are not actually wired into the final chain.
    question_generator = LLMChain(
        llm=ChatOpenAI(temperature=0, streaming=True),
        prompt=CONDENSE_QUESTION_PROMPT,
    )
    doc_chain = load_qa_chain(
        llm=ChatOpenAI(temperature=0, streaming=True),
        chain_type="stuff",
    )

    # Buffer memory that stores the running conversation under the
    # "chat_history" key expected by the prompt above.
    # (ConversationBufferMemory takes no prompt or token limit.)
    memory = ConversationBufferMemory(
        return_messages=True,
        memory_key="chat_history",
    )

    # The custom prompt is passed to the combine-docs chain via
    # combine_docs_chain_kwargs.
    chain = ConversationalRetrievalChain.from_llm(
        llm=ChatOpenAI(temperature=0, streaming=True),
        retriever=vectorstore.as_retriever(),
        memory=memory,
        combine_docs_chain_kwargs={"prompt": PROMPT},
        verbose=True,
    )
    return chain


chain = create_chain()

langchain_router = LangchainRouter(
    langchain_url="/chat", langchain_object=chain, streaming_mode=0
)

app.include_router(langchain_router)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(host="0.0.0.0", port=8000, app=app)
```
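For reference, here is a minimal sketch of calling the resulting endpoint. This is not from the thread: it assumes the Lanarky router derives the request body from the chain's input keys, so the JSON carries a `question` field, and that `streaming_mode=0` returns a plain JSON response.

```python
# Hypothetical client call against the app above; the question text is a placeholder.
import httpx

response = httpx.post(
    "http://localhost:8000/chat",
    json={"question": "What does the indexed document say about pricing?"},
    timeout=60.0,
)
print(response.status_code)
print(response.json())
```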
The following code snippet is an example of how you can stream a response from LangChain's ConversationalRetrievalChain into the console, but I don't understand how you can add compatibility with Lanarky. This documentation doesn't make a whole lot of sense to me: https://lanarky.readthedocs.io/en/latest/advanced/custom_callbacks.html
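The snippet referred to above was not preserved in the thread. For context, a console-streaming setup for ConversationalRetrievalChain along the lines the comment describes would typically use LangChain's StreamingStdOutCallbackHandler; the sketch below is an assumption based on that standard pattern, and the FAISS store, document text, and question are placeholders.

```python
# Minimal sketch: stream a ConversationalRetrievalChain's answer to stdout.
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Placeholder vector store with a single document.
vectorstore = FAISS.from_texts(
    ["Lanarky wraps LangChain chains behind FastAPI endpoints."],
    embedding=OpenAIEmbeddings(),
)

llm = ChatOpenAI(
    temperature=0,
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],  # prints tokens as they arrive
)

chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(),
)

# Tokens stream to the console while the chain runs.
result = chain({"question": "What does Lanarky do?", "chat_history": []})
```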