
how can i use it in langchain? #31

Closed
whm233 opened this issue Dec 21, 2023 · 2 comments
Labels: question (Further information is requested)

Comments
whm233 commented Dec 21, 2023

My code looks like this:

```python
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(template=prompt_template, input_variables=["context", "question"])
kc = RetrievalQA.from_llm(llm=qwllm, retriever=compression_retriever, prompt=prompt)
```

iofu728 (Contributor) commented Dec 21, 2023

Hi @whm233, thank you for your support and interest in LLMLingua. Although I'm not an expert in LangChain, based on my experience, I believe its usage in LangChain should be similar to that in LlamaIndex, i.e., operating at the postprocessor or reranker level.

I briefly reviewed the LangChain pipeline and think you'll need to extend the BaseDocumentCompressor to implement a class similar to CohereRerank, as seen here: https://github.com/langchain-ai/langchain/tree/master/libs/langchain/langchain/retrievers/document_compressors/cohere_rerank.py.

Afterward, wrap that compressor in ContextualCompressionRetriever() and use the result as your compression_retriever.
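To illustrate the shape of what that subclass would look like, here is a minimal, self-contained sketch. It does not import LangChain or llmlingua: the `Document` dataclass stands in for LangChain's `Document`, `LLMLinguaCompressorSketch` is a hypothetical name, and a naive character truncation stands in for the real `PromptCompressor.compress_prompt()` call. In a real integration you would subclass `BaseDocumentCompressor` and implement the same `compress_documents()` method.

```python
# Sketch only: stand-ins for langchain's Document and BaseDocumentCompressor,
# with truncation in place of llmlingua's actual compression.
from dataclasses import dataclass, field
from typing import Sequence


@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)


class LLMLinguaCompressorSketch:
    """Mimics the BaseDocumentCompressor.compress_documents() interface."""

    def __init__(self, max_chars: int = 50):
        # A real implementation would construct llmlingua.PromptCompressor here.
        self.max_chars = max_chars

    def compress_documents(
        self, documents: Sequence[Document], query: str
    ) -> Sequence[Document]:
        # A real implementation would hand the documents and query to
        # PromptCompressor.compress_prompt(); here we just truncate.
        return [
            Document(
                page_content=d.page_content[: self.max_chars],
                metadata={**d.metadata, "query": query},
            )
            for d in documents
        ]


docs = [Document("x" * 200), Document("short context")]
compressed = LLMLinguaCompressorSketch().compress_documents(docs, "what is LLMLingua?")
print(len(compressed[0].page_content))  # 50
```

The key design point is that the compressor only sees already-retrieved documents plus the query, which is why it slots cleanly behind ContextualCompressionRetriever without touching the base retriever.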

Thank you again for your interest.

@iofu728 iofu728 self-assigned this Dec 21, 2023
@iofu728 iofu728 added the question Further information is requested label Dec 21, 2023
hwchase17 added a commit to langchain-ai/langchain that referenced this issue Feb 28, 2024
**Description**: This PR adds support for using the [LLMLingua project](https://github.com/microsoft/LLMLingua), especially LongLLMLingua (Enhancing Large Language Model Inference via Prompt Compression), as a document compressor / transformer.

The LLMLingua project is an interesting project that can greatly improve RAG systems by compressing prompts and contexts while keeping their semantic relevance.

**Issue**: microsoft/LLMLingua#31
**Dependencies**: [llmlingua](https://pypi.org/project/llmlingua/)

@baskaryan

---------

Co-authored-by: Ayodeji Ayibiowu <ayodeji.ayibiowu@getinge.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
iofu728 (Contributor) commented Feb 28, 2024

Thanks to @thehapyone's contribution, LLMLingua is now available in LangChain. You can follow this notebook for guidance.

@iofu728 iofu728 closed this as completed Feb 28, 2024
gkorland pushed a commit to FalkorDB/langchain that referenced this issue Mar 30, 2024
…i#17711)