4 changes: 4 additions & 0 deletions .env
@@ -1,5 +1,9 @@
# Backend Environment Variables
DATABASE_URL=postgresql://postgres:password@postgres:5432/todoapp

# OpenAI API Key for chatbot (required)
# Get your API key from https://platform.openai.com/api-keys
OPENAI_API_KEY=your_openai_api_key_here

# Frontend Environment Variables
API_BASE_URL=http://backend:8000
4 changes: 2 additions & 2 deletions .gitignore
@@ -19,8 +19,8 @@ venv.bak/
*.sqlite3

# Environment variables
.env.local
.env.production
.env
.env.*

# Docker
.dockerignore
4 changes: 4 additions & 0 deletions backend/main.py
@@ -7,6 +7,7 @@
from typing import List
import os
from dotenv import load_dotenv
from routers import chat

load_dotenv()

@@ -60,6 +61,9 @@ class Config:
    allow_headers=["*"],
)

# Include routers
app.include_router(chat.router)

# Dependency to get DB session
def get_db():
    db = SessionLocal()
2 changes: 2 additions & 0 deletions backend/pyproject.toml
@@ -14,4 +14,6 @@ dependencies = [
"debugpy",
"sqlalchemy>=2.0.43",
"pydantic>=2.11.9",
"langgraph",
"langchain-openai",
]
90 changes: 90 additions & 0 deletions backend/routers/README.md
@@ -0,0 +1,90 @@
# Backend Routers

This directory contains FastAPI routers that organize the API endpoints by feature.

## Chat Router (`chat.py`)

The chat router implements a LangGraph-based conversational AI using OpenAI's GPT models with **streaming support**.

### Architecture

```
User Message → FastAPI Router → LangGraph StateGraph → OpenAI LLM (streaming) → SSE Response
```

### LangGraph Flow

```
START
  ↓
[chatbot node]
  ↓ (streams from OpenAI)
END
```

### Streaming Implementation

The chat endpoint uses:
- **LangGraph's `.astream()` method** with `stream_mode="messages"` for token-by-token streaming
- **FastAPI's `StreamingResponse`** to send Server-Sent Events (SSE)
- **OpenAI's streaming mode** enabled in the ChatOpenAI client

This provides real-time token-by-token response streaming for a better user experience.
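
For illustration, a short reply might arrive on the wire as a sequence of SSE events like the following (the chunk boundaries shown here are hypothetical):

```
data: {"content": "Hello"}

data: {"content": " there"}

data: {"content": "!"}
```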

### Key Components

- **State**: Manages conversation messages using LangGraph's message history
- **LLM**: ChatOpenAI with gpt-4.1-mini model (streaming enabled)
- **Graph**: Simple linear flow that can be extended with additional nodes
- **Streaming**: Server-Sent Events (SSE) format for real-time responses

### Configuration

Set the `OPENAI_API_KEY` environment variable in the `.env` file:

```bash
OPENAI_API_KEY=your_openai_api_key_here
```

Get your API key from: https://platform.openai.com/api-keys

### Extending the Chatbot

The LangGraph architecture makes it easy to add:

1. **Conversation Memory**: Add a checkpointer to maintain state across requests (see the memory sketch below)
2. **RAG (Retrieval Augmented Generation)**: Add a retrieval node before the chatbot
3. **Tool Calling**: Add tool nodes for external API calls
4. **Multi-Agent**: Add specialized agent nodes for different tasks
5. **Guardrails**: Add validation nodes for content filtering

### Example Extension

```python
# Add a retrieval node for RAG
def retrieve_context(state: State):
    query = state["messages"][-1].content
    docs = vector_store.similarity_search(query)
    return {"context": docs}

graph_builder.add_node("retrieve", retrieve_context)
graph_builder.add_edge(START, "retrieve")
graph_builder.add_edge("retrieve", "chatbot")
```
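
A similar sketch for point 1 above (conversation memory), not part of this PR: a checkpointer lets LangGraph restore prior messages per conversation. The `thread_id` value here is a hypothetical session identifier supplied by the caller.

```python
# Sketch only: per-conversation memory via an in-memory checkpointer
from langgraph.checkpoint.memory import MemorySaver

graph = graph_builder.compile(checkpointer=MemorySaver())

# Each request passes a thread_id so LangGraph can restore that conversation's messages
config = {"configurable": {"thread_id": "session-123"}}
result = graph.invoke({"messages": [("user", "Hello!")]}, config)
```

A production setup would typically swap `MemorySaver` for a persistent checkpointer (for example, one backed by Postgres).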

### Streaming Details

The endpoint uses LangGraph's message streaming mode:
- Streams individual tokens as they're generated
- Uses Server-Sent Events (SSE) format
- Each event contains a JSON payload with the content chunk
- The frontend can display tokens in real time using `st.write_stream()` (see the client sketch below)
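
Below is a minimal sketch of how a Streamlit client might consume this stream. It assumes the backend is reachable at `http://backend:8000` (the `API_BASE_URL` used elsewhere in this PR); the `stream_chat` helper is illustrative and not part of this change.

```python
import json
import requests
import streamlit as st

API_BASE_URL = "http://backend:8000"  # assumed; matches the compose.yml setting

def stream_chat(message: str):
    """Yield content chunks from the backend's SSE /chat endpoint."""
    with requests.post(f"{API_BASE_URL}/chat", json={"message": message}, stream=True) as response:
        for line in response.iter_lines(decode_unicode=True):
            # Each SSE event arrives as a line of the form: data: {"content": "..."}
            if line and line.startswith("data: "):
                chunk = json.loads(line[len("data: "):])
                yield chunk["content"]

if prompt := st.chat_input("Ask something"):
    st.write_stream(stream_chat(prompt))
```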

## Future Routers

Additional routers can be added for:
- User management (`users.py`)
- Authentication (`auth.py`)
- Analytics (`analytics.py`)
- Admin operations (`admin.py`)
1 change: 1 addition & 0 deletions backend/routers/__init__.py
@@ -0,0 +1 @@
# Routers module
86 changes: 86 additions & 0 deletions backend/routers/chat.py
@@ -0,0 +1,86 @@
from fastapi import APIRouter
from fastapi.responses import StreamingResponse
from pydantic import BaseModel
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI
import os
import json

# Router setup
router = APIRouter(
    prefix="/chat",
    tags=["chat"]
)

# Pydantic Models
class ChatMessage(BaseModel):
    message: str

class ChatResponse(BaseModel):
    response: str

# LangGraph State
class State(TypedDict):
    messages: Annotated[list, add_messages]

# Initialize the LLM with streaming enabled.
# base_url is optional: fall back to OPENAI_API_BASE (the name used in compose.yml),
# then to the default OpenAI endpoint when neither variable is set.
llm = ChatOpenAI(
    model="gpt-4.1-mini",
    base_url=os.environ.get("OPENAI_BASE_URL") or os.environ.get("OPENAI_API_BASE"),
    api_key=os.environ["OPENAI_API_KEY"],
    temperature=0.7,
    streaming=True
)

# Define the chatbot node
def chatbot(state: State):
    """
    Simple chatbot node that calls the LLM with the conversation history.
    """
    return {"messages": [llm.invoke(state["messages"])]}

# Build the LangGraph
graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_edge(START, "chatbot")
graph_builder.add_edge("chatbot", END)
graph = graph_builder.compile()

@router.post("")
async def chat(message: ChatMessage):
    """
    Chat endpoint that provides streaming AI assistant responses using LangGraph and OpenAI.

    This implements a simple conversational AI that streams responses token-by-token.
    For multi-turn conversations with persistent state, additional session management would be needed.
    """
    async def generate_stream():
        """
        Generator function that streams the response from LangGraph.
        Uses the 'messages' stream mode to get message updates.
        """
        # Stream the graph with the user's message
        async for event in graph.astream({
            "messages": [("user", message.message)]
        }, stream_mode="messages"):
            # event is a tuple of (message, metadata)
            # We only want the message content chunks
            msg, metadata = event

            # Only stream content from AIMessage (not HumanMessage)
            if hasattr(msg, 'content') and msg.content:
                # Stream each chunk as a JSON line
                chunk_data = {"content": msg.content}
                yield f"data: {json.dumps(chunk_data)}\n\n"

    return StreamingResponse(
        generate_stream(),
        media_type="text/event-stream",
        headers={
            "Cache-Control": "no-cache",
            "Connection": "keep-alive",
        }
    )
83 changes: 83 additions & 0 deletions compose.yml
@@ -0,0 +1,83 @@
volumes:
  proto_db_volume:

services:
  postgres:
    image: postgres:17
    environment:
      POSTGRES_DB: proto_db
      POSTGRES_USER: proto_db_user
      POSTGRES_PASSWORD: proto_db_password
    volumes:
      - proto_db_volume:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD", "pg_isready"]
      interval: 30m
      timeout: 30s
      retries: 3
      start_period: 15s
      start_interval: 1s

  flyway:
    depends_on:
      postgres:
        condition: service_healthy
    build:
      context: .
      dockerfile: database/Dockerfile
    environment:
      FLYWAY_URL: jdbc:postgresql://postgres:5432/proto_db
      FLYWAY_USER: proto_db_user
      FLYWAY_PASSWORD: proto_db_password

  backend:
    depends_on:
      flyway:
        condition: service_completed_successfully
    build:
      context: .
      dockerfile: backend/Dockerfile
    environment:
      DATABASE_URL: postgresql://proto_db_user:proto_db_password@postgres:5432/proto_db
      OPENAI_API_BASE: ${OPENAI_API_BASE}
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      HF_TOKEN: ${HF_TOKEN}
      TAVILY_API_KEY: ${TAVILY_API_KEY}
    ports:
      - "8000:8000" # Serving port
      - "5678:5678" # Debugging port
    develop:
      watch:
        - path: backend
          target: /apps/backend
          action: sync
    command: uv run -m debugpy --listen 0.0.0.0:5678 -m fastapi dev main.py --host 0.0.0.0 --port 8000
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000"]
      interval: 30m
      timeout: 30s
      retries: 3
      start_period: 15s
      start_interval: 1s

  frontend:
    depends_on:
      backend:
        condition: service_healthy
    build:
      context: .
      dockerfile: frontend/Dockerfile
    environment:
      API_BASE_URL: http://backend:8000
    ports:
      - "8501:8501" # Serving port
      - "5679:5679" # Debugging port
    develop:
      watch:
        - path: frontend
          target: /apps/frontend
          action: sync
    command: uv run -m debugpy --listen 0.0.0.0:5679 -m streamlit run main.py --server.address 0.0.0.0 --server.port 8501

10 changes: 10 additions & 0 deletions database/Dockerfile
@@ -0,0 +1,10 @@
FROM flyway/flyway:latest

# Set working directory
WORKDIR /flyway

# Copy migration scripts
COPY ./database/flyway/sql ./sql

# Default command to run migrations
CMD [ "migrate" ]
13 changes: 0 additions & 13 deletions database/Dockerfile.liquibase

This file was deleted.

11 changes: 11 additions & 0 deletions database/flyway/sql/V001__create_todos_table.sql
@@ -0,0 +1,11 @@
-- Create todos table
CREATE TABLE IF NOT EXISTS todos (
    id SERIAL PRIMARY KEY,
    title VARCHAR(255) NOT NULL,
    description TEXT DEFAULT '',
    completed BOOLEAN NOT NULL DEFAULT FALSE
);

-- Indexes
CREATE INDEX IF NOT EXISTS idx_todos_id ON todos(id);
CREATE INDEX IF NOT EXISTS idx_todos_title ON todos(title);
6 changes: 0 additions & 6 deletions database/init.sql

This file was deleted.

3 changes: 0 additions & 3 deletions database/liquibase/changelogs/db.changelog-master.yaml

This file was deleted.
