
Using VLLM with Langchain for RAG purposes #22947

Answered by dosubot bot
sadrafh asked this question in Q&A

Hey there, @sadrafh! I'm here to help you out with any issues or questions you have. Let's squash those bugs together! 😊

To resolve the errors and properly integrate vLLM with LangChain for Retrieval-Augmented Generation (RAG), follow these steps:

  1. Install the vllm package:
    Ensure that the vllm package is installed in your environment:

    pip install vllm
  2. Use the VLLM class from LangChain:
    Instead of directly using the LLM class from vllm, use the VLLM class provided by LangChain. Here is an example (a fuller runnable sketch follows after this list):

    from langchain_community.llms.vllm import VLLM
    from langchain_core.prompts import PromptTemplate
    from langchain_core.runnables import RunnablePassthrough
    from langchain_core.output_parsers import StrOutputParser
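
The original answer is truncated in the preview above, so here is a minimal end-to-end sketch of the same approach. The Chroma vector store, HuggingFace embedding model, sample text, and model name below are illustrative assumptions rather than details from the original answer, and StrOutputParser is the assumed completion of the truncated import:

    from langchain_community.embeddings import HuggingFaceEmbeddings
    from langchain_community.llms.vllm import VLLM
    from langchain_community.vectorstores import Chroma
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import PromptTemplate
    from langchain_core.runnables import RunnablePassthrough

    # Build a tiny in-memory vector store (toy corpus; embedding model is a hypothetical choice).
    embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
    vectorstore = Chroma.from_texts(
        ["vLLM is a high-throughput, memory-efficient inference engine for LLMs."],
        embedding=embeddings,
    )
    retriever = vectorstore.as_retriever()

    # Load the model through LangChain's VLLM wrapper (model name is illustrative).
    llm = VLLM(
        model="mistralai/Mistral-7B-Instruct-v0.2",
        max_new_tokens=256,
        temperature=0.7,
    )

    prompt = PromptTemplate.from_template(
        "Answer the question using only the context below.\n\n"
        "Context: {context}\n\nQuestion: {question}\n\nAnswer:"
    )

    def format_docs(docs):
        # Concatenate retrieved documents into a single context string.
        return "\n\n".join(doc.page_content for doc in docs)

    # Compose the RAG chain: retrieve -> format -> prompt -> generate -> parse.
    rag_chain = (
        {"context": retriever | format_docs, "question": RunnablePassthrough()}
        | prompt
        | llm
        | StrOutputParser()
    )

    print(rag_chain.invoke("What is vLLM?"))

Note that the VLLM wrapper loads the model weights in-process, so this sketch requires a machine (typically with a supported GPU) that can run the model locally rather than calling a remote server.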

Answer selected by sadrafh