Copilot AI (Contributor) commented Oct 3, 2025

Implements a full-stack chatbot interface, as requested in issue #10: the UI is built in Streamlit, and the AI chat logic lives in a dedicated FastAPI router using LangGraph and OpenAI. The chatbot provides a conversational UI/UX similar to the Streamlit chat tutorial, powered by OpenAI's gpt-4.1-mini model through a LangGraph-based architecture with real-time streaming responses.

Changes

New Files

  • frontend/pages/chat.py: A new page implementing the chatbot interface with a modular, well-factored structure:

    • initialize_chat_history(): Manages session state initialization
    • display_chat_messages(): Renders chat history
    • get_assistant_response_stream(): Streams responses from the backend API in real time (see the sketch after this list)
    • handle_user_input(): Processes user input and displays streaming responses
    • show(): Main page rendering function
  • backend/routers/chat.py: Dedicated FastAPI router with LangGraph + OpenAI streaming integration:

    • LangGraph StateGraph for conversation flow management
    • OpenAI ChatGPT (gpt-4.1-mini) integration with streaming enabled
    • Server-Sent Events (SSE) for real-time token streaming
    • Pydantic models for type-safe API contracts
    • Extensible architecture ready for RAG, tools, and multi-agent systems
  • backend/routers/__init__.py: Router module initialization

  • backend/routers/README.md: Comprehensive documentation of the chat architecture, streaming implementation, extension points, and configuration
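To show how the frontend consumes the stream, here is a minimal sketch of get_assistant_response_stream(); the backend URL and payload shape are assumptions for illustration, not the exact code in this PR:

import json

import requests

def get_assistant_response_stream(messages):
    """Yield tokens from the backend /chat SSE stream as they arrive."""
    with requests.post(
        "http://localhost:8000/chat",  # assumed backend address
        json={"messages": messages},
        stream=True,
    ) as response:
        response.raise_for_status()
        for line in response.iter_lines(decode_unicode=True):
            # Each SSE event is a line of the form: data: {"content": "token"}
            if line and line.startswith("data: "):
                event = json.loads(line[len("data: "):])
                yield event.get("content", "")

Because this is a generator, it can be handed directly to st.write_stream() on the frontend.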

Modified Files

  • frontend/main.py: Added chatbot page to the navigation menu

  • backend/main.py: Refactored to use router pattern:

    • Removed inline chat endpoint
    • Added router registration: app.include_router(chat.router)
    • Cleaner separation of concerns
  • backend/pyproject.toml: Added dependencies:

    • langgraph: For conversation flow management
    • langchain-openai: For OpenAI integration
  • .env: Added OPENAI_API_KEY configuration with documentation

Architecture

The chatbot follows a standard full-stack architecture with real-time streaming (a sketch of the graph construction appears after this list):

  1. Frontend (Streamlit): Handles UI/UX, chat display, session management, and user input with streaming support
  2. Backend (FastAPI Router): Dedicated /chat router with LangGraph + OpenAI streaming integration
  3. LangGraph Flow: START → [chatbot node] → (OpenAI LLM streaming) → END
  4. Streaming Protocol: Server-Sent Events (SSE) for real-time token-by-token delivery
  5. API Communication: Frontend makes POST requests and receives streaming SSE responses
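As a concrete illustration of steps 2 and 3, here is a minimal sketch of the graph construction; variable and node names are illustrative, not necessarily those used in backend/routers/chat.py:

from typing import Annotated, TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages

class ChatState(TypedDict):
    # add_messages appends new messages instead of overwriting the list
    messages: Annotated[list, add_messages]

llm = ChatOpenAI(model="gpt-4.1-mini", streaming=True)

def chatbot(state: ChatState) -> dict:
    # Single node: call the LLM with the accumulated message history.
    return {"messages": [llm.invoke(state["messages"])]}

builder = StateGraph(ChatState)
builder.add_node("chatbot", chatbot)
builder.add_edge(START, "chatbot")
builder.add_edge("chatbot", END)
graph = builder.compile()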

LangGraph Streaming Implementation

The chatbot pairs LangGraph's streaming capabilities with Streamlit's streaming display (the endpoint sketch after this list shows the wiring):

  • State Management: TypedDict with message history using LangGraph's add_messages
  • Graph Flow: Uses .astream() with stream_mode="messages" for token-by-token streaming
  • LLM Integration: ChatOpenAI with gpt-4.1-mini model and streaming=True
  • SSE Protocol: FastAPI's StreamingResponse sends tokens as Server-Sent Events
  • Frontend Display: Streamlit's st.write_stream() renders tokens in real-time
  • Extensibility: Foundation ready for RAG, tool calling, conversation memory, and multi-agent systems
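Putting these pieces together, a hedged sketch of the SSE endpoint; graph is the compiled StateGraph from the sketch above, and the ChatRequest field names are assumptions:

import json

from fastapi import APIRouter
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

router = APIRouter()

class ChatRequest(BaseModel):
    messages: list[dict]  # e.g. [{"role": "user", "content": "Hello"}]

async def token_stream(messages: list[dict]):
    # stream_mode="messages" yields (chunk, metadata) tuples, one per token.
    async for chunk, _metadata in graph.astream(
        {"messages": messages}, stream_mode="messages"
    ):
        if chunk.content:
            # Emit each token as a Server-Sent Event.
            yield f'data: {json.dumps({"content": chunk.content})}\n\n'

@router.post("/chat")
async def chat(request: ChatRequest):
    return StreamingResponse(
        token_stream(request.messages), media_type="text/event-stream"
    )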

This separation of concerns provides several benefits:

  • Real-time AI-powered responses with token-by-token streaming
  • Better user experience with immediate feedback
  • Backend can be scaled independently
  • Middleware (authentication, rate limiting, etc.) can be added to the router
  • Business logic is isolated and testable
  • Easy to extend with additional LangGraph nodes and capabilities

Features

The chatbot implementation includes the following (a sketch of the page flow appears after this list):

  • Chat Interface: Uses Streamlit's st.chat_message() and st.chat_input() components for a native chat experience
  • Real-Time Streaming: Tokens appear as they're generated using st.write_stream() for better UX
  • Session Management: Chat history persists during the session using Streamlit's session state
  • LangGraph + OpenAI Backend: Real AI-powered streaming responses using GPT-4.1-mini through a LangGraph StateGraph
  • Server-Sent Events: Industry-standard SSE protocol for reliable streaming
  • Router Pattern: Clean, modular FastAPI router for chat functionality
  • Clear History: Button to reset the conversation and start fresh
  • Error Handling: Graceful handling of backend connection failures
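A simplified sketch of how these features fit together on the page; the helper names match the list under New Files, but the bodies here are assumptions:

import streamlit as st

def show():
    st.title("Chat")

    # initialize_chat_history(): create the history once per session
    if "messages" not in st.session_state:
        st.session_state.messages = []

    # Clear History: reset the conversation
    if st.button("Clear history"):
        st.session_state.messages = []

    # display_chat_messages(): replay the stored conversation
    for message in st.session_state.messages:
        with st.chat_message(message["role"]):
            st.write(message["content"])

    # handle_user_input(): append, display, and stream the reply
    if prompt := st.chat_input("Say something"):
        st.session_state.messages.append({"role": "user", "content": prompt})
        with st.chat_message("user"):
            st.write(prompt)
        with st.chat_message("assistant"):
            # st.write_stream renders tokens live and returns the full text
            reply = st.write_stream(
                get_assistant_response_stream(st.session_state.messages)
            )
        st.session_state.messages.append({"role": "assistant", "content": reply})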

Configuration

Users need to set the OPENAI_API_KEY in their .env file:

OPENAI_API_KEY=your_openai_api_key_here

API keys can be obtained from: https://platform.openai.com/api-keys
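For local development the backend can load the key at startup; a minimal sketch, assuming the python-dotenv package is installed:

import os

from dotenv import load_dotenv

# Read .env so ChatOpenAI can pick up OPENAI_API_KEY from the environment.
load_dotenv()

if not os.environ.get("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY is not set; add it to your .env file")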

Implementation Details

The code follows best practices for full-stack applications:

  • Modular Design: Chat logic isolated in dedicated router (backend/routers/chat.py)
  • Router Pattern: Industry-standard FastAPI router organization
  • Streaming: LangGraph's .astream() + FastAPI's StreamingResponse + Streamlit's st.write_stream()
  • Type Safety: Pydantic models for request/response validation
  • LangGraph Architecture: Conversation flow managed by a compiled StateGraph
  • Extensible: Foundation ready for advanced features (RAG, tools, memory, multi-agent)
  • Well-documented: Comprehensive docstrings and architecture documentation
  • Consistent: Follows existing codebase patterns

Streaming Flow

User Input 
  → Frontend (Streamlit)
    → Backend API (SSE streaming)
      → LangGraph .astream()
        → OpenAI (token streaming)
      ← Tokens streamed back via SSE
    ← st.write_stream() displays tokens
  ← Real-time response appears

Extension Points

The LangGraph implementation makes it easy to add the following (see the conversation-memory sketch after this list):

  • Conversation Memory: Add a checkpointer to maintain state across requests
  • RAG (Retrieval Augmented Generation): Add retrieval nodes for context-aware responses
  • Tool Calling: Add tool nodes for external API integration
  • Multi-Agent Systems: Add specialized agent nodes for different tasks
  • Guardrails: Add validation nodes for content filtering
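For example, conversation memory is a one-line change at compile time; a minimal sketch using LangGraph's in-memory MemorySaver, where builder is the StateGraph builder from the Architecture sketch and the thread_id value is illustrative:

from langgraph.checkpoint.memory import MemorySaver

# In-memory checkpointer; a production setup would use a persistent store.
memory = MemorySaver()
graph_with_memory = builder.compile(checkpointer=memory)

# Each conversation thread is keyed by a thread_id in the config.
config = {"configurable": {"thread_id": "user-123"}}
result = graph_with_memory.invoke(
    {"messages": [("user", "Hello!")]}, config
)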

Screenshots

Initial chatbot interface (screenshot not included in this capture).

Multi-turn conversation with streaming OpenAI integration (screenshot not included).

Backend successfully processing streaming chat requests (screenshot not included).

Testing

The chatbot has been manually tested and verified to:

  • ✅ Render correctly in the navigation menu
  • ✅ Accept and display user messages
  • ✅ Call the backend /chat endpoint with streaming enabled
  • ✅ Generate AI-powered streaming responses using OpenAI
  • ✅ Display tokens in real time as they are generated
  • ✅ Compile and execute the LangGraph StateGraph with streaming enabled
  • ✅ Register the chat router with the FastAPI app
  • ✅ Stream correctly over Server-Sent Events (SSE)
  • ✅ Maintain conversation history during the session
  • ✅ Clear history when requested
  • ✅ Handle backend connection errors gracefully

Fixes #10

Original prompt

This section details the original issue you should resolve

<issue_title>Add chatbot experience</issue_title>
<issue_description>We should add a basic chatbot experience to the frontend. The chatbot should be similar to what is described in this streamlit tutorial. We should stub the LLM call just like in the tutorial and focus just on the UI/UX. The streamlit code should be well factored and modular, such that it is easy to read and find where in the code things are rendered and/or events are handled.</issue_description>

Comments on the Issue (you are @copilot in this section)

Fixes #10


Copilot AI changed the title [WIP] Add chatbot experience Add chatbot experience to frontend Oct 3, 2025
Copilot AI requested a review from lambdakris October 3, 2025 04:03
Copilot finished work on behalf of lambdakris October 3, 2025 04:03
@lambdakris (Owner) commented:

@copilot this is looking good. Let's go ahead and extract the logic in get_assistant_response function into the FastAPI app in the backend and wire it up to the frontend.

Copilot AI (Contributor, Author) commented Oct 3, 2025

@copilot this is looking good. Let's go ahead and extract the logic in get_assistant_response function into the FastAPI app in the backend and wire it up to the frontend.

Done! The chatbot logic has been extracted to a new /chat endpoint in the FastAPI backend, and the frontend now calls this endpoint. The stubbed LLM response logic is now centralized in the backend, making it easy to replace with a real LLM API in the future.

Changes in commit 4f7f154:

  • Backend: Added /chat POST endpoint with ChatMessage and ChatResponse models
  • Frontend: Updated to call backend API instead of local logic
  • Architecture now properly separates concerns between UI and business logic

Backend integration (screenshot not included).

Warning

Firewall rules blocked me from connecting to one or more addresses

I tried to connect to the following addresses, but was blocked by firewall rules:

  • checkip.amazonaws.com
    • Triggering command: /usr/bin/python3 /home/REDACTED/.local/bin/streamlit run main.py --server.port 8501 --server.headless true (dns block)

If you need me to access, download, or install something from one of these locations, you can either:

Copilot AI changed the title Add chatbot experience to frontend Add chatbot experience with backend API integration Oct 3, 2025
Copilot finished work on behalf of lambdakris October 3, 2025 05:08
@lambdakris (Owner) commented:

@copilot Let's take it one step further. Move the chat function in backend/main.py into a dedicated FastAPI router and let's replace the stub LLM call with a real but very simple LangGraph based chatbot that uses an OpenAI LLM.

Copilot AI and others added 2 commits October 3, 2025 05:46
Copilot AI (Contributor, Author) commented Oct 3, 2025

@copilot Let's take it one step further. Move the chat function in backend/main.py into a dedicated FastAPI router and let's replace the stub LLM call with a real but very simple LangGraph based chatb...

Done! The chat functionality has been refactored into a dedicated FastAPI router and now uses a real LangGraph-based chatbot with OpenAI's GPT-4o-mini model.

Changes in commits 38f0ca1 and 265f645:

New Structure:

  • backend/routers/chat.py - Dedicated router with LangGraph + OpenAI integration
  • backend/routers/README.md - Architecture documentation
  • Updated backend/main.py to use router pattern
  • Added langgraph and langchain-openai dependencies

Architecture:

START → [chatbot node] → (OpenAI LLM) → END

Configuration Required:
Add OPENAI_API_KEY to .env file (template added with instructions)

Benefits:

  • Clean separation: Chat logic isolated in dedicated router
  • Real AI: Replaced keyword matching with actual OpenAI GPT
  • Extensible: LangGraph foundation ready for RAG, tools, memory, multi-agent systems
  • Maintainable: Industry-standard router pattern with comprehensive docs

The LangGraph implementation provides a solid foundation for future enhancements like conversation persistence, retrieval-augmented generation, or tool calling.

Copilot AI changed the title Add chatbot experience with backend API integration Add chatbot with LangGraph and OpenAI integration Oct 3, 2025
Copilot finished work on behalf of lambdakris October 3, 2025 05:48
@lambdakris (Owner) commented:

@copilot I made a few changes but overall things look good. Let's try one more adjustment to this PR. Modify the chat frontend and backend to work with streaming responses based on the LangGraph support for streaming and the [Streamlit support for streaming](https://docs.streamlit.io/develop/api-reference/write-magic/st.write_stream).

Co-authored-by: lambdakris <15176003+lambdakris@users.noreply.github.com>
Copilot AI (Contributor, Author) commented Oct 3, 2025

@copilot I made a few changes but overall things look good. Let's try one more adjustment to this PR. Modify the chat frontend and backend to work with streaming responses based on the [LangGraph supp...

Implemented! The chatbot now supports real-time streaming responses using LangGraph's .astream() method and Streamlit's st.write_stream().

Changes in commit 0a5b835:

Backend Streaming:

  • Enabled streaming=True in ChatOpenAI configuration
  • Switched from .invoke() to .astream() with stream_mode="messages"
  • Implemented Server-Sent Events (SSE) via FastAPI's StreamingResponse
  • Streams tokens in JSON format: data: {"content": "token"}\n\n

Frontend Streaming:

  • Created get_assistant_response_stream() generator to parse SSE events
  • Uses st.write_stream() to display tokens in real-time
  • Removed spinner (tokens appear immediately)

Benefits:

  • Real-time token-by-token display for better UX
  • Faster perceived performance
  • Standard SSE protocol for reliable streaming
  • Leverages LangGraph's native streaming capabilities

The implementation follows the LangGraph streaming documentation and Streamlit's st.write_stream() patterns for a smooth, professional streaming experience.

Copilot AI changed the title Add chatbot with LangGraph and OpenAI integration Add chatbot with LangGraph, OpenAI integration, and streaming responses Oct 3, 2025
Copilot finished work on behalf of lambdakris October 3, 2025 06:20