
Help: Example on how to use the assistant with Vector DB ? #557

Closed
pedroresende opened this issue Apr 4, 2024 · 9 comments
Comments

@pedroresende

It would be great to have an example of how to integrate an assistant with a RAG setup.
Even though the documentation mentions that "Assistants can be configured with an LLM of your choice (currently only OpenAI), any vector search database and easily extended with additional tools." (https://github.com/andreibondarev/langchainrb/blob/main/lib/langchain/assistants/assistant.rb#L5), there is no simple example showing how to do it.

@andreibondarev
Collaborator

andreibondarev commented Apr 5, 2024

@pedroresende I created a tool that wraps any vector search database. Take a glance at the diff here: https://github.com/andreibondarev/langchainrb/compare/add-vectorsearch-wrapper-tool?expand=1.

This is how I tested it out:

# This could be any LLM. It'll be used to embed documents and query.
llm = Langchain::LLM::Ollama.new url: ENV['OLLAMA_URL']

# Initialize the vectorsearch db
chroma = Langchain::Vectorsearch::Chroma.new(url: ENV["CHROMA_URL"], index_name: "docs", llm: llm)

# Add documents to it
chroma.create_default_schema
chroma.add_data paths: [
  # I imported this file: https://www.coned.com/-/media/files/coned/documents/small-medium-large-businesses/gasyellowbook.pdf
  Langchain.root.join("./file1.pdf"),
  Langchain.root.join("./file2.pdf")
]

# Initialize the tool that will be passed to the Assistant
vectorsearch_tool = Langchain::Tool::Vectorsearch.new(vectorsearch: chroma)

# Initialize the Assistant
assistant = Langchain::Assistant.new(
  llm: Langchain::LLM::OpenAI.new(api_key: ENV['OPENAI_API_KEY']),
  thread: Langchain::Thread.new,
  # It's up to you to explain to the Assistant when it should access the vectorsearch DB. You could even tell it to access it every single time before answering a question.
  instructions: "You are a chat bot that helps users find information from the Con Edison Yellow Book that you have stored in your vector search database. Feel free to refer to it when answering questions.",
  tools: [
    vectorsearch_tool
  ]
)

# Ask away!
assistant.add_message_and_run content: "...", auto_tool_execution: true

I would really really love your feedback on the approach.
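For readers new to RAG, the retrieval step such a tool performs can be reduced to a few lines of plain Ruby. This is a toy sketch, not langchainrb's implementation: the hand-rolled word-count embedder and `ToyVectorStore` stand in for a real LLM embedder and Chroma.

```ruby
# Compute cosine similarity between two equal-length vectors.
def cosine_similarity(a, b)
  dot  = a.zip(b).sum { |x, y| x * y }
  norm = ->(v) { Math.sqrt(v.sum { |x| x * x }) }
  dot / (norm.call(a) * norm.call(b))
end

# Minimal in-memory vector store: embed on add, rank by cosine similarity on search.
class ToyVectorStore
  def initialize(embedder)
    @embedder = embedder
    @docs = [] # array of { text:, vector: }
  end

  def add(text)
    @docs << { text: text, vector: @embedder.call(text) }
  end

  def similarity_search(query, k: 2)
    qvec = @embedder.call(query)
    @docs.max_by(k) { |d| cosine_similarity(qvec, d[:vector]) }
         .map { |d| d[:text] }
  end
end

# Toy embedder: counts occurrences of a fixed vocabulary (a real one would call an LLM).
vocab = %w[gas meter safety billing]
embedder = ->(text) { vocab.map { |w| text.downcase.scan(w).size.to_f } }

store = ToyVectorStore.new(embedder)
store.add("Gas meter installation and safety requirements")
store.add("Billing cycles and payment options")
store.add("Emergency gas leak procedures and safety")

puts store.similarity_search("How do I read my gas meter?", k: 1)
# The first document ranks highest because it shares "gas" and "meter" with the query.
```

The tool the diff adds hands results like these back to the assistant, which then answers in natural language.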

@pedroresende
Author

@andreibondarev it worked perfectly fine, thanks for the help. The only strange thing is that the following error sometimes occurs:

[screenshot of the error]

@andreibondarev
Collaborator

> @andreibondarev it worked perfectly fine, thanks for the help. The only strange thing is that the following error sometimes occurs:
>
> [screenshot of the error]

I called the tool Langchain::Tool::Vectorsearch.

@pedroresende
Author

I know you did; I renamed it to check whether it was clashing for some reason, but I'm getting exactly the same error.

@andreibondarev
Collaborator

> I know you did; I renamed it to check whether it was clashing for some reason, but I'm getting exactly the same error.

Make sure to modify the NAME constant as well:

module Langchain::Tool
  class Vectorsearchtool < Base
    NAME = "vectorsearchtool"
    # ...
  end
end
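The reason both the class and the NAME constant must change: an assistant typically dispatches a tool call coming back from the LLM by the tool's declared name string, not by its Ruby class. The following is a minimal plain-Ruby sketch of that dispatch, illustrative only; langchainrb's internals differ in detail:

```ruby
# Base class exposing the tool's declared name (read from the subclass's NAME constant).
class Base
  def name = self.class::NAME
end

class Vectorsearchtool < Base
  NAME = "vectorsearchtool" # must match the function name the LLM is told about

  def call(query:) = "results for #{query}"
end

# Build a name -> tool registry, as an assistant would on initialization.
tools = [Vectorsearchtool.new]
registry = tools.to_h { |t| [t.name, t] }

# A function call returned by the LLM references the NAME string, not the class.
tool_call = { "name" => "vectorsearchtool", "arguments" => { query: "gas meter" } }
result = registry.fetch(tool_call["name"]).call(**tool_call["arguments"])
puts result
```

If the class is renamed but NAME is stale, the registry key and the function name advertised to the LLM fall out of sync, which can surface exactly as a clash or a "tool not found" error.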

@pedroresende
Author

> Make sure to modify the NAME constant as well:

Sure thing

@andreibondarev
Collaborator

Resolved.

@Jellyfishboy

Jellyfishboy commented Jun 19, 2024

This currently does not work.

If I use .ask directly, it returns a relevant response.

If I use the Vectorsearch tool inside an assistant, it just tells me it can't find the relevant information.

I ended up creating my own tool and directly using the .ask method, which consistently returned relevant information.

@andreibondarev
Collaborator

@Jellyfishboy Could you please show me how you were using the Langchain::Tool::Vectorsearch tool?
