A hands-on Next.js project that walks you through building AI agents step by step — from a simple streaming chatbot all the way to a LangGraph-powered agent with database tools served over MCP.
Each tab in the UI corresponds to a progressively more advanced API route. This guide focuses on the API routes and the concepts behind them.
```
npm install
npm run dev
```

Open http://localhost:3000 in your browser.

Note: You need an `OPENAI_API_KEY` environment variable set. Create a `.env.local` file:

```
OPENAI_API_KEY=sk-...
```
Before diving into agents, it helps to understand the data they work with.
File: lib/db.ts
The app uses a single in-memory SQLite database (via better-sqlite3) that is created once when the Node.js process starts. It contains three business tables:
| Table | Columns |
|---|---|
| `inventory` | `id`, `product_name`, `category`, `unit_price`, `stock_quantity`, `supplier`, `created_at` |
| `customers` | `id`, `first_name`, `last_name`, `email`, `city`, `joined_date` |
| `sales` | `id`, `inventory_id`, `customer_id`, `quantity_sold`, `sale_price`, `sale_date` |
It also contains three internal tables used by the agents:
| Table | Purpose |
|---|---|
| `chat_sessions` | Tracks active chat sessions by UUID |
| `chat_messages` | Stores the full conversation history per session |
| `tool_calls` | Logs every SQL tool call an agent makes |
The View Database tab (app/api/database/route.ts) simply reads all three business tables and returns them as JSON so you can see the live state of the database.
API Route: app/api/chat/route.ts
Key library: Vercel AI SDK (ai, @ai-sdk/openai)
This is the simplest possible AI agent: a streaming chatbot with persistent memory.
User message → load history from SQLite → streamText(GPT-4o-mini) → stream response → save to SQLite
1. The client sends `{ prompt, sessionId }` to `POST /api/chat`.
2. `initChatSession()` (`lib/chat-session.ts`) ensures the session exists in the DB, saves the user message, and returns the full conversation history as an array of `{ role, content }` messages.
3. `streamText()` from the AI SDK calls GPT-4o-mini with the history as context, producing a streaming response.
4. `onFinish` saves the assistant's reply back to the DB via `saveAssistantMessage()`.
5. The response is returned as a UI message stream (`result.toUIMessageStreamResponse()`).
Memory is implemented manually — every message is written to chat_messages and the entire history is re-sent to the model on each turn. This is the simplest form of stateful chat.
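This pattern can be sketched without a database: here an in-memory `Map` stands in for the `chat_messages` table, and the function names are illustrative rather than the real `lib/chat-session.ts` API.

```typescript
type ChatMessage = { role: "user" | "assistant"; content: string };

// In-memory stand-in for the chat_messages table, keyed by session id
const store = new Map<string, ChatMessage[]>();

// Save the user message and return the full history to send as context
function recordUserTurn(sessionId: string, prompt: string): ChatMessage[] {
  const history = store.get(sessionId) ?? [];
  history.push({ role: "user", content: prompt });
  store.set(sessionId, history);
  return history;
}

// Save the assistant reply after streaming finishes (the onFinish step)
function recordAssistantTurn(sessionId: string, reply: string): void {
  store.get(sessionId)?.push({ role: "assistant", content: reply });
}

recordUserTurn("abc", "Hello");
recordAssistantTurn("abc", "Hi there!");
const history = recordUserTurn("abc", "How are you?");
console.log(history.length); // 3: the entire history is re-sent every turn
```

The cost of this simplicity is that the context window grows with every turn; later tabs show LangGraph's checkpointing as an alternative.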
```ts
// lib/chat-session.ts
export function initChatSession(db, sessionId, prompt) {
  // 1. Create session if new
  // 2. Save user message
  // 3. Return full history for context window
}
```

API Route: app/api/chat-with-tools/route.ts
Key library: Vercel AI SDK — tool(), streamText() with stopWhen
This agent can call functions (tools) to query or modify the database. This is the foundation of agentic behavior: the model decides when and how to use tools.
User message → LLM decides to call a tool → tool executes SQL → result fed back to LLM → final answer
Three tools are registered — one per database table:
```ts
tools: {
  inventory: tool({
    description: TOOL_DESCRIPTIONS.inventory,
    inputSchema: sqlInputSchema, // { sql: string, params?: [] }
    execute: makeSqlExecute("inventory", sessionId),
  }),
  customers: tool({ ... }),
  sales: tool({ ... }),
}
```

The model receives a system prompt describing the database schema and can call any tool by generating a structured JSON payload matching `sqlInputSchema`:
```ts
// lib/sql-tools.ts
export const sqlInputSchema = z.object({
  sql: z.string(),      // e.g. "SELECT * FROM inventory WHERE category = ?"
  params: z.array(...), // e.g. ["Electronics"]
});
```

`makeSqlExecute()` validates the SQL (only SELECT/INSERT/UPDATE allowed), runs it against the SQLite DB, logs the call to the `tool_calls` table, and returns the result.
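The statement-type guard could look something like this; it is an assumed sketch of the check, not the actual `lib/sql-tools.ts` code.

```typescript
// Only the statement types the agent is permitted to run (assumed allowlist)
const ALLOWED_STATEMENTS = new Set(["SELECT", "INSERT", "UPDATE"]);

function assertSqlAllowed(sql: string): void {
  // Inspect the first keyword; this rejects DROP, DELETE, ALTER, PRAGMA, etc.
  const firstWord = sql.trim().split(/\s+/)[0]?.toUpperCase() ?? "";
  if (!ALLOWED_STATEMENTS.has(firstWord)) {
    throw new Error(`Statement type not allowed: ${firstWord || "(empty)"}`);
  }
}

assertSqlAllowed("SELECT * FROM inventory WHERE category = ?"); // passes
// assertSqlAllowed("DROP TABLE inventory"); // would throw
```

A first-keyword check like this is a coarse filter; the real route also benefits from parameterized queries (`params`), which keep user values out of the SQL string itself.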
stopWhen: stepCountIs(10) prevents infinite tool-call loops by capping the agent at 10 reasoning steps.
The AI SDK handles the agentic loop automatically:
- LLM generates a tool call
- SDK executes the tool
- Result is appended to the message history
- LLM is called again with the updated context
- Repeat until the LLM produces a plain text response (no tool calls)
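A stripped-down version of that loop, with a mocked model and mocked tools, might look like the following. This is purely illustrative: the real SDK manages message formats, streaming, and step limits for you.

```typescript
type ModelStep =
  | { type: "tool-call"; name: string; input: unknown }
  | { type: "text"; text: string };

type Tool = (input: unknown) => Promise<unknown>;

// Minimal agentic loop: call model, run requested tool, feed result back, repeat
async function runAgentLoop(
  model: (history: string[]) => Promise<ModelStep>,
  tools: Record<string, Tool>,
  maxSteps = 10 // mirrors stopWhen: stepCountIs(10)
): Promise<string> {
  const history: string[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const next = await model(history);
    if (next.type === "text") return next.text;            // plain text: loop ends
    const result = await tools[next.name](next.input);     // execute the tool
    history.push(`tool ${next.name} returned ${JSON.stringify(result)}`);
  }
  throw new Error("Step limit reached without a final answer");
}

// Mock model: asks for one tool call, then answers using the tool result
const mockModel = async (history: string[]): Promise<ModelStep> =>
  history.length === 0
    ? { type: "tool-call", name: "inventory", input: { sql: "SELECT COUNT(*) FROM inventory" } }
    : { type: "text", text: `Answer based on: ${history[0]}` };

const mockTools = { inventory: async () => [{ count: 42 }] };

runAgentLoop(mockModel, mockTools).then(console.log);
```

The `maxSteps` cap plays the same role as `stopWhen: stepCountIs(10)`: without it, a model that keeps emitting tool calls would loop forever.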
API Route: app/api/chat-with-mcp/route.ts
MCP Server: app/api/mcp/[transport]/route.ts
Key library: @ai-sdk/mcp, mcp-handler, @modelcontextprotocol/sdk
This agent uses the same tools as Tab 2, but they are now served over the Model Context Protocol (MCP) — a standard protocol for exposing tools to AI models over HTTP.
Client → POST /api/chat-with-mcp
↓
createMCPClient connects to /api/mcp/mcp (HTTP transport)
↓
tools = await mcpClient.tools() ← discovers tools dynamically
↓
streamText(GPT-4o-mini, { tools })
↓
Tool calls are routed back through the MCP client to /api/mcp/mcp
```ts
createMcpHandler((server) => {
  server.registerTool("inventory", { description, inputSchema }, makeMcpSqlExecute("inventory", sessionId));
  server.registerTool("customers", { ... }, makeMcpSqlExecute("customers", sessionId));
  server.registerTool("sales", { ... }, makeMcpSqlExecute("sales", sessionId));
});
```

The `[transport]` dynamic segment means the same handler supports both GET /api/mcp/mcp (SSE) and POST /api/mcp/mcp (HTTP streaming), as required by the MCP spec.
The sessionId is passed via the x-session-id request header so tool calls can be attributed to the correct session.
| Inline Tools (Tab 2) | MCP Tools (Tab 3) |
|---|---|
| Defined in the same file as the route | Defined in a separate server |
| Tightly coupled to the agent | Discoverable by any MCP-compatible client |
| No network overhead | Communicates over HTTP |
| Simpler to set up | Reusable across multiple agents/apps |
MCP is useful when you want to share tools across multiple agents, or when tools are maintained by a different team or service.
API Route: app/api/chat-with-langchain/route.ts
Key libraries: @langchain/openai, @langchain/langgraph, @langchain/langgraph-checkpoint-sqlite
This tab introduces LangChain and LangGraph as an alternative to the Vercel AI SDK. It's a simple chatbot (no tools) that demonstrates LangGraph's built-in checkpointing for conversation memory.
```ts
// Reuse the existing SQLite connection for checkpointing
const checkpointer = new SqliteSaver(getDb());

const model = new ChatOpenAI({ model: "gpt-4o-mini", streaming: true });

const agent = createAgent({
  model,
  tools: [],           // no tools — pure chat
  systemPrompt: "...",
  checkpointer,        // LangGraph persists state automatically
});
```

Each request streams events from the agent:
```ts
const eventStream = await agent.streamEvents(
  { messages: [new HumanMessage(prompt)] },
  { configurable: { thread_id: sessionId }, version: "v2" }
);

for await (const event of eventStream) {
  if (event.event === "on_chat_model_stream") {
    // stream token to client
  }
}
```

Instead of manually saving messages to the DB (as in Tab 1), LangGraph's `SqliteSaver` checkpointer automatically persists the full graph state (including message history) keyed by `thread_id`. On the next request with the same `thread_id`, LangGraph restores the state and continues the conversation.
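The token-filtering step can be mocked without LangChain to show the shape of the loop; the event payloads here are simplified stand-ins for LangChain's real event objects.

```typescript
type AgentEvent = { event: string; data?: { chunk?: { content?: string } } };

// Mock stream: only on_chat_model_stream events carry token chunks
async function* mockEvents(): AsyncGenerator<AgentEvent> {
  yield { event: "on_chain_start" };
  yield { event: "on_chat_model_stream", data: { chunk: { content: "Hel" } } };
  yield { event: "on_chat_model_stream", data: { chunk: { content: "lo!" } } };
  yield { event: "on_chain_end" };
}

// Same filter as the route: forward only model token events, ignore the rest
async function collectTokens(events: AsyncIterable<AgentEvent>): Promise<string> {
  let text = "";
  for await (const e of events) {
    if (e.event === "on_chat_model_stream") {
      text += e.data?.chunk?.content ?? "";
    }
  }
  return text;
}

collectTokens(mockEvents()).then(console.log); // "Hello!"
```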
API Route: app/api/chat-with-langgraph/route.ts
Key libraries: @langchain/langgraph, @langchain/core/tools, @langchain/langgraph-checkpoint-sqlite
This is the most advanced tab. It combines LangGraph's StateGraph with database tools, giving you full control over the agent's reasoning loop as an explicit graph.
START → llmCall → (conditional) → toolNode → llmCall → ... → END
The graph is built manually:
```ts
new StateGraph(MessagesState)
  .addNode("llmCall", llmCall)    // calls the LLM
  .addNode("toolNode", toolNode)  // executes tool calls
  .addEdge(START, "llmCall")
  .addConditionalEdges("llmCall", shouldContinue, {
    toolNode: "toolNode", // if LLM made tool calls → run tools
    [END]: END,           // if LLM gave a final answer → stop
  })
  .addEdge("toolNode", "llmCall") // after tools run → back to LLM
  .compile({ checkpointer });
```

`llmCall` node — invokes the model with the current message history:
```ts
const llmCall = async (state) => {
  const response = await modelWithTools.invoke([
    new SystemMessage("..."),
    ...state.messages,
  ]);
  return { messages: [response] };
};
```

`toolNode` node — executes any tool calls the LLM requested:
```ts
const toolNode = async (state) => {
  const lastMessage = state.messages.at(-1); // AIMessage with tool_calls
  const results = [];
  for (const toolCall of lastMessage.tool_calls) {
    const tool = toolsByName[toolCall.name];
    results.push(await tool.invoke(toolCall));
  }
  return { messages: results }; // ToolMessages appended to state
};
```

`shouldContinue` function — the conditional edge logic:
```ts
const shouldContinue = (state) => {
  const last = state.messages.at(-1);
  if (last.tool_calls?.length) return "toolNode"; // keep going
  return END; // done
};
```

| Vercel AI SDK (Tab 2) | LangGraph StateGraph (Tab 5) |
|---|---|
| Agentic loop is handled by the SDK | You define the loop as a graph |
| Less code, less control | More code, full control |
| Hard to add custom logic between steps | Easy to add nodes (e.g., validation, logging) |
| Good for standard tool-calling patterns | Good for complex multi-step workflows |
LangGraph shines when you need branching logic, parallel tool execution, human-in-the-loop steps, or other non-linear agent behaviors.
Shared across Tabs 2, 3, and 5. Provides:
- `sqlInputSchema` — Zod schema for tool inputs (`{ sql, params }`)
- `makeSqlExecute(tableName, sessionId)` — Returns an async function that validates and runs SQL, then logs the call to `tool_calls`
- `makeMcpSqlExecute(tableName, sessionId)` — Same as above but wraps the result in MCP's `{ content: [{ type: "text", text }] }` format
- `TOOL_DESCRIPTIONS` — Shared natural-language descriptions of each table's tool, used in all three agent variants
Used by Tabs 1 and 2 (Vercel AI SDK routes). Provides:
- `initChatSession(db, sessionId, prompt)` — Creates the session if new, saves the user message, returns full history
- `saveAssistantMessage(db, sessionId, text)` — Saves the assistant reply and updates `updated_at`
Tab 1: Basic AI Agent
POST /api/chat
└── streamText (AI SDK) + manual SQLite memory
Tab 2: AI Agent with Tools
POST /api/chat-with-tools
└── streamText (AI SDK) + inline tool() definitions + SQLite memory
Tab 3: AI Agent with MCP
POST /api/chat-with-mcp
└── streamText (AI SDK) + MCP client → GET/POST /api/mcp/mcp
└── createMcpHandler (mcp-handler)
Tab 4: Basic LangChain Agent
POST /api/chat-with-langchain
└── LangChain createAgent + SqliteSaver checkpointer (no tools)
Tab 5: LangGraph with Tools
POST /api/chat-with-langgraph
└── LangGraph StateGraph (llmCall ↔ toolNode) + SqliteSaver checkpointer
| Package | Purpose |
|---|---|
| `ai` | Vercel AI SDK core — `streamText`, `tool` |
| `@ai-sdk/openai` | OpenAI provider for the AI SDK |
| `@ai-sdk/react` | React hooks (`useCompletion`, `useChat`) |
| `@ai-sdk/mcp` | MCP client for the AI SDK |
| `mcp-handler` | MCP server handler for Next.js API routes |
| `@modelcontextprotocol/sdk` | Official MCP TypeScript SDK |
| `@langchain/openai` | LangChain OpenAI integration |
| `@langchain/langgraph` | LangGraph StateGraph and agent primitives |
| `@langchain/langgraph-checkpoint-sqlite` | SQLite checkpointer for LangGraph |
| `better-sqlite3` | Synchronous SQLite driver for Node.js |
| `zod` | Schema validation for tool inputs |