
Conversation

@azaddhirajkumar
Contributor

No description provided.

@azaddhirajkumar azaddhirajkumar requested a review from a team as a code owner November 11, 2025 07:28
@gemini-code-assist
Contributor

Summary of Changes

Hello @azaddhirajkumar, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request delivers a new tutorial designed to walk developers through the creation of an AI-driven PDF chat application. The tutorial highlights how to effectively combine Couchbase Vector Search for document retrieval, LangChain for managing large language model interactions and data processing, and Streamlit for a user-friendly interface. It provides practical guidance on implementing Retrieval-Augmented Generation (RAG) to enhance LLM responses with document context and demonstrates performance improvements through LLM response caching.

Highlights

  • New Tutorial: Introduces a comprehensive guide for building a PDF Chat Application using Couchbase Vector Search.
  • Technology Stack: Demonstrates integration of Couchbase Vector Search (Query Service), LangChain, OpenAI LLMs, and Streamlit for an AI-powered chat experience.
  • Core Concepts: Explains and implements Retrieval-Augmented Generation (RAG) for contextual Q&A and LLM response caching with Couchbase.
  • Vector Indexing: Details the use of Hyperscale and Composite Vector Indexes in Couchbase and programmatic index creation.
  • LangChain Integration: Showcases LangChain's capabilities for PDF processing, embedding generation, vector store integration, and building complex chains with LCEL.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds a new tutorial for building a PDF chat application using Couchbase Vector Search with the Query Service, LangChain, and Python. The tutorial is well-structured and comprehensive. I've provided several review comments to enhance clarity, fix minor errors in code snippets and text, and improve formatting. Addressing these points will make the tutorial easier for users to follow and help them avoid potential issues.

@azaddhirajkumar changed the title from "Added tutorial for Couchbase Vector Search" to "Added tutorial for Langchain Couchbase Vector Search" on Nov 11, 2025
Contributor

@nithishr nithishr left a comment


Please double-check all the links, as the LangChain documentation has changed quite a bit.
Ideally, we should also update the FTS tutorial with the same changes.

@@ -0,0 +1,593 @@
---
# frontmatter
path: "/tutorial-python-langchain-pdf-chat-query"
Contributor


The URL & title need to be updated to the new terminology.

Contributor Author


Done


### LangChain Expression Language (LCEL)

We will now utilize the power of LangChain Chains using the [LangChain Expression Language](https://python.langchain.com/docs/expression_language/) (LCEL). LCEL makes it easy to build complex chains from basic components and supports out-of-the-box functionality such as streaming, parallelism, and logging.
Contributor


Dead link
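
For readers following along, here is a minimal LCEL sketch of the kind of chain the tutorial builds. The prompt wording and model name are illustrative assumptions, not taken from the tutorial, and the snippet assumes the `langchain-openai` package is installed and `OPENAI_API_KEY` is set.

```python
# Minimal LCEL sketch: prompt -> LLM -> string output.
# Prompt text and model name are illustrative, not from the tutorial.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n"
    "Context: {context}\n"
    "Question: {question}"
)
llm = ChatOpenAI(model="gpt-4o-mini")

# The | operator composes runnables into a single chain.
chain = prompt | llm | StrOutputParser()
```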


### Create Retriever Chain

We also create the [retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/vectorstore) from the Couchbase vector store. This retriever will be used to retrieve the previously added documents that are similar to the current query.
Contributor


Dead link

Contributor Author


Done
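
To illustrate this step, here is a sketch that builds a retriever from the vector store and wires it into a small RAG chain. It assumes `vector_store` is the Couchbase vector store created earlier in the tutorial and that `prompt` and `llm` are defined as in the LCEL sketch above; the `k` value and example query are illustrative.

```python
# Sketch: vector store -> retriever -> RAG chain (assumptions noted above).
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

# Wrap the Couchbase vector store as a retriever returning the top-k chunks.
retriever = vector_store.as_retriever(search_kwargs={"k": 4})

def format_docs(docs):
    # Join the retrieved chunks into one context string for the prompt.
    return "\n\n".join(doc.page_content for doc in docs)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

answer = rag_chain.invoke("What does the uploaded PDF say about pricing?")
```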

- Create a placeholder for streaming the assistant's response.
- Use the `chain.invoke(question)` method to generate the response from the RAG chain.
- The response is automatically cached by the `CouchbaseCache` layer.
- [Stream](https://python.langchain.com/docs/use_cases/question_answering/streaming/) the response in real-time using the custom `stream_string` function (see the sketch below).
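
A sketch of how these steps might fit together in the Streamlit app; `chain` (the RAG chain) and `stream_string` (the tutorial's custom streaming generator) are assumed from earlier steps, and the widget layout here is illustrative.

```python
# Sketch of the response step in the Streamlit UI (assumptions noted above).
import streamlit as st

question = st.chat_input("Ask a question about the PDF")
if question:
    with st.chat_message("assistant"):
        placeholder = st.empty()           # placeholder for the streamed answer
        response = chain.invoke(question)  # repeat questions are served from CouchbaseCache
        shown = ""
        for chunk in stream_string(response):
            shown += chunk
            placeholder.markdown(shown)    # update the placeholder as chunks arrive
```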
