diff --git a/apps/website/content/docs-v2/concepts/langgraph-basics.mdx b/apps/website/content/docs-v2/concepts/langgraph-basics.mdx
index 4a8e27d94..7e9e10536 100644
--- a/apps/website/content/docs-v2/concepts/langgraph-basics.mdx
+++ b/apps/website/content/docs-v2/concepts/langgraph-basics.mdx
@@ -1,65 +1,383 @@
# LangGraph Basics
-LangGraph is a framework for building stateful AI agents as directed graphs. This page explains the core concepts for Angular developers who are new to agent development.
+LangGraph is a framework for building stateful AI agents as directed graphs. If you're an Angular developer building AI-powered applications, this page teaches you how LangGraph agents work and why streamResource() is the natural bridge between your frontend and your agent backend.
-## Graphs, nodes, and edges
+
+Graphs give you explicit control over agent behavior. Instead of a black-box prompt-and-pray approach, you define exactly how your agent reasons, when it calls tools, and where it pauses for human input. Every step is visible, testable, and debuggable.
+
+
+## The Core Concepts
+
+A LangGraph agent has three building blocks:
+
+### Nodes — Functions That Do Work
+
+A node is a Python function that receives the current state, does something, and returns a state update. Every node follows the same basic shape (the `config` parameter is optional):
+
+```python
+from langchain_core.runnables import RunnableConfig
+
+def my_node(state: State, config: RunnableConfig) -> dict:
+ # Read from state
+ messages = state["messages"]
+
+ # Do work (call LLM, query DB, invoke tool)
+ response = llm.invoke(messages)
+
+ # Return state updates (merged into existing state)
+ return {"messages": [response]}
+```
+
+
+Nodes don't replace state — they return updates that get **merged** into the existing state. For lists like messages, LangGraph uses reducers (like `operator.add`) to accumulate entries instead of overwriting.
+
+
+### Edges — Connections Between Nodes
+
+Edges define the execution flow. There are two types:
+
+**Normal edges** — always route to the next node:
+```python
+builder.add_edge(START, "call_model") # Start → call_model
+builder.add_edge("call_model", END) # call_model → End
+```
+
+**Conditional edges** — route based on state:
+```python
+def should_continue(state: State) -> str:
+ last_msg = state["messages"][-1]
+ if last_msg.tool_calls:
+ return "tools" # Agent wants to use a tool
+ return END # Agent is done, return response
+
+builder.add_conditional_edges("call_model", should_continue)
+```
+
+### State — The Shared Memory
+
+All nodes read from and write to a shared state object. You define its shape as a Python `TypedDict`:
+
+```python
+from typing_extensions import TypedDict, Annotated
+from operator import add
+
+class State(TypedDict):
+ messages: Annotated[list, add] # Accumulates messages
+ plan: list[str] # Agent's current plan
+ results: dict # Tool results
+```
+
+This state is exactly what streamResource() exposes to your Angular app through Signals.
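+
+On the Angular side, this shape becomes the generic type parameter of streamResource(). Here's a sketch mirroring the Python state above (the interface name and the `my_agent` id are illustrative; the fields must match your graph's state keys):
+
+```typescript
+import { computed } from '@angular/core';
+import { BaseMessage } from '@langchain/core/messages';
+
+// Mirrors the Python State TypedDict
+interface AgentState {
+  messages: BaseMessage[];          // Accumulates messages
+  plan: string[];                   // Agent's current plan
+  results: Record<string, unknown>; // Tool results
+}
+
+const agent = streamResource<AgentState>({ assistantId: 'my_agent' });
+
+// agent.value() is now typed, so reads are type-safe
+const plan = computed(() => agent.value()?.plan ?? []);
+```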
+
+## Building Your First Agent
+
+Here's the simplest possible agent — a chat model that takes messages and responds:
+
+First, define the graph in Python:
+
+```python
+from langgraph.graph import END, START, MessagesState, StateGraph
+from langchain_openai import ChatOpenAI
+
+llm = ChatOpenAI(model="gpt-5-mini")
-A LangGraph agent is a directed graph where:
+
+def call_model(state: MessagesState) -> dict:
+ response = llm.invoke(state["messages"])
+ return {"messages": [response]}
+
+# Build the graph: START → call_model → END
+builder = StateGraph(MessagesState)
+builder.add_node("call_model", call_model)
+builder.add_edge(START, "call_model")
+builder.add_edge("call_model", END)
+
+graph = builder.compile()
+```
+
+Then register the graph in `langgraph.json` so LangGraph Platform can serve it:
+
+```json
+{
+ "dependencies": ["."],
+ "graphs": {
+ "chat_agent": "./src/chat_agent/agent.py:graph"
+ },
+ "env": ".env",
+ "python_version": "3.12"
+}
+```
+
+Finally, connect from Angular:
+
+```typescript
+// This is all you need on the Angular side
+const chat = streamResource<{ messages: BaseMessage[] }>({
+ assistantId: 'chat_agent',
+});
+
+// chat.messages() updates as the agent streams its response
+// chat.status() tells you if it's idle, loading, or done
+```
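+
+In a component, the same resource drives the template directly. A minimal sketch, assuming a `{ type, content }` input shape for user messages (check your transport's expected input format):
+
+```typescript
+import { Component } from '@angular/core';
+import { BaseMessage } from '@langchain/core/messages';
+
+@Component({
+  selector: 'app-chat',
+  template: `
+    @for (msg of chat.messages(); track $index) {
+      <p>{{ msg.content }}</p>
+    }
+    @if (chat.isLoading()) {
+      <p>Thinking…</p>
+    }
+  `,
+})
+export class ChatComponent {
+  chat = streamResource<{ messages: BaseMessage[] }>({
+    assistantId: 'chat_agent',
+  });
+
+  send(text: string) {
+    // Input shape is an assumption; adjust to your agent's schema
+    this.chat.submit({ messages: [{ type: 'human', content: text }] });
+  }
+}
+```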
+
+## Agent Patterns
+
+The power of LangGraph is in the patterns you can build. Each pattern maps to specific streamResource() signals.
+
+### Pattern 1: ReAct Agent (Tool Calling)
+
+The agent reasons, decides to call a tool, observes the result, and loops until it has an answer.
+
+```python
+from langchain_core.tools import tool
+from langgraph.graph import END, START, StateGraph
+from langgraph.prebuilt import ToolNode
+
+@tool
+def search_docs(query: str) -> str:
+ """Search the knowledge base."""
+    docs = vector_store.similarity_search(query)
+    return "\n\n".join(doc.page_content for doc in docs)
+
+tools = [search_docs]
+
+def call_model(state: State) -> dict:
+ response = llm.bind_tools(tools).invoke(state["messages"])
+ return {"messages": [response]}
+
+def should_continue(state: State) -> str:
+ if state["messages"][-1].tool_calls:
+ return "tools"
+ return END
+
+builder = StateGraph(State)
+builder.add_node("model", call_model)
+builder.add_node("tools", ToolNode(tools))
+builder.add_edge(START, "model")
+builder.add_conditional_edges("model", should_continue)
+builder.add_edge("tools", "model") # Loop back after tool execution
+
+graph = builder.compile()
+```
+
+**Angular connection:** Track tool execution in real-time:
+```typescript
+const agent = streamResource({
+ assistantId: 'react_agent',
+});
+
+// Watch tools execute
+const activeTools = computed(() => agent.toolProgress());
+const completedTools = computed(() => agent.toolCalls());
+```
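+
+You can render this directly in a template. A sketch, assuming each `toolProgress()` entry carries a `name` field (the exact entry shape depends on your agent's tools):
+
+```typescript
+import { Component, computed } from '@angular/core';
+
+@Component({
+  selector: 'app-tool-activity',
+  template: `
+    @for (tool of activeTools(); track tool.name) {
+      <span class="badge">Running {{ tool.name }}…</span>
+    }
+  `,
+})
+export class ToolActivityComponent {
+  agent = streamResource({ assistantId: 'react_agent' });
+  activeTools = computed(() => this.agent.toolProgress());
+}
+```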
+
+### Pattern 2: Human-in-the-Loop (Approval)
+
+The agent proposes an action and pauses. Your Angular UI shows an approval dialog. The user decides, and the agent resumes.
+
+```python
+from langgraph.types import interrupt
+
+def propose_action(state: State) -> dict:
+    action = llm.invoke(state["messages"])
+    # Pause execution — Angular will show approval UI.
+    # interrupt() requires a checkpointer; it returns the value
+    # the client sends when it resumes the run.
+    decision = interrupt({
+        "action": "send_email",
+        "to": "client@example.com",
+        "body": action.content,
+    })
+    if not decision["approved"]:
+        return {"messages": [{"role": "assistant", "content": "Email cancelled."}]}
+    return {"pending_action": action.content}
+
+def execute_action(state: State) -> dict:
+    # Only runs after the human approves
+    send_email(state["pending_action"])
+    return {"messages": [{"role": "assistant", "content": "Email sent."}]}
+```
+
+**Angular connection:** The interrupt surfaces automatically:
+```typescript
+const agent = streamResource({
+ assistantId: 'approval_agent',
+});
+
+// Show approval UI when agent pauses
+const pendingAction = computed(() => agent.interrupt());
+
+// User clicks approve → resume the agent
+approve() {
+ agent.submit(null, { resume: { approved: true } });
+}
+```
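+
+Wired into a component, the approval flow could look like this. A sketch, assuming `interrupt()` exposes the payload raised by the Python node directly:
+
+```typescript
+import { Component, computed } from '@angular/core';
+
+@Component({
+  selector: 'app-approval',
+  template: `
+    @if (pendingAction(); as pending) {
+      <p>Agent wants to {{ pending.action }} to {{ pending.to }}</p>
+      <button (click)="approve()">Approve</button>
+      <button (click)="reject()">Reject</button>
+    }
+  `,
+})
+export class ApprovalComponent {
+  agent = streamResource({ assistantId: 'approval_agent' });
+  pendingAction = computed(() => this.agent.interrupt());
+
+  approve() {
+    this.agent.submit(null, { resume: { approved: true } });
+  }
+
+  reject() {
+    this.agent.submit(null, { resume: { approved: false } });
+  }
+}
+```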
+
+### Pattern 3: Multi-Agent Orchestration
+
+A supervisor agent delegates work to specialist sub-agents. Each sub-agent is its own graph.
+
+```python
+from langchain_core.tools import tool
+
+@tool
+def route_to_agent(agent: str) -> str:
+    """Pick the specialist: researcher, analyst, or writer."""
+    return agent
+
+def supervisor(state: State) -> dict:
+    # Binding a routing tool makes the model return a structured choice
+    routing = llm.bind_tools([route_to_agent]).invoke([
+        {"role": "system", "content": "Route to: researcher, analyst, or writer"},
+        *state["messages"]
+    ])
+    # tool_calls entries are dicts, so read arguments via ["args"]
+    return {"next_agent": routing.tool_calls[0]["args"]["agent"]}
+
+builder = StateGraph(State)
+builder.add_node("supervisor", supervisor)
+builder.add_node("researcher", researcher_subgraph)
+builder.add_node("analyst", analyst_subgraph)
+builder.add_node("writer", writer_subgraph)
+builder.add_edge(START, "supervisor")
+builder.add_conditional_edges("supervisor", lambda s: s["next_agent"])
+```
+
+**Angular connection:** Track each sub-agent independently:
+```typescript
+const orchestrator = streamResource({
+ assistantId: 'orchestrator',
+ subagentToolNames: ['researcher', 'analyst', 'writer'],
+});
+
+// See all active sub-agents
+const workers = computed(() => orchestrator.activeSubagents());
+const workerCount = computed(() => workers().length);
+```
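+
+A sketch of a status strip built from these signals, assuming each entry in `activeSubagents()` carries a `name`:
+
+```typescript
+import { Component, computed } from '@angular/core';
+
+@Component({
+  selector: 'app-workers',
+  template: `
+    <p>{{ workerCount() }} specialist(s) working</p>
+    @for (worker of workers(); track worker.name) {
+      <span class="badge">{{ worker.name }}</span>
+    }
+  `,
+})
+export class WorkersComponent {
+  orchestrator = streamResource({
+    assistantId: 'orchestrator',
+    subagentToolNames: ['researcher', 'analyst', 'writer'],
+  });
+  workers = computed(() => this.orchestrator.activeSubagents());
+  workerCount = computed(() => this.workers().length);
+}
+```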
+
+### Pattern 4: Persistent Conversations
+
+Thread-based persistence means conversations survive page refreshes, browser restarts, and even server redeploys.
+
+```python
+from langgraph.checkpoint.postgres import PostgresSaver
+
+# from_conn_string() returns a context manager
+with PostgresSaver.from_conn_string(DATABASE_URL) as checkpointer:
+    checkpointer.setup()  # Create checkpoint tables on first run
+    graph = builder.compile(checkpointer=checkpointer)
+
+    # Each thread_id is a persistent conversation
+    result = graph.invoke(
+        {"messages": [user_message]},
+        config={"configurable": {"thread_id": "user_123_session"}},
+    )
+```
+
+**Angular connection:** Thread persistence is built into streamResource:
+```typescript
+const chat = streamResource({
+ assistantId: 'chat_agent',
+ threadId: signal(localStorage.getItem('threadId')),
+ onThreadId: (id) => localStorage.setItem('threadId', id),
+});
+
+// User returns tomorrow — same thread, full history restored
+// No code needed — streamResource handles it
+```
+
+## How streamResource() Bridges the Gap
+
+Here's why streamResource() is the natural Angular companion for LangGraph:
+
+
+
-
-Each node performs one action — calling an LLM, querying a database, or making an API request. Nodes receive state and return updated state.
-
-Edges connect nodes. Conditional edges route execution based on state, enabling branching logic.
-
-All nodes read from and write to a shared state object. This state is what streamResource() exposes through its signals.
+1. **Your component** calls `submit({ messages: [userMsg] })` to send user input
+2. **streamResource()** passes the input to the transport layer
+3. **FetchStreamTransport** sends an HTTP POST to LangGraph Platform and opens an SSE connection
+4. **LangGraph Platform** executes graph nodes, calls tools, and streams SSE events back
+5. **The transport** parses SSE chunks into BehaviorSubjects
+6. **streamResource()** converts the BehaviorSubjects to Angular Signals via `toSignal()`
+7. **Your templates** re-render automatically via OnPush change detection
-## How streamResource connects
+
+
-Your Angular app doesn't run the graph — LangGraph Platform does. streamResource() is the bridge:
+```typescript
+// Every LangGraph concept maps to a Signal:
-1. Your component calls `submit()` with user input
-2. FetchStreamTransport sends an HTTP POST to LangGraph Platform
-3. The platform runs the graph and streams state updates via SSE
-4. streamResource() updates its Signals as events arrive
-5. Angular re-renders your templates automatically
+// Agent state values
+agent.value() // Signal — full state object
-## State design
+// Conversation
+agent.messages() // Signal — message history
-The generic type parameter in `streamResource()` defines your agent's state shape.
+// Lifecycle
+agent.status() // Signal — idle/loading/done
+agent.isLoading() // Signal — is the agent running?
-```typescript
-// Simple chat state
-streamResource<{ messages: BaseMessage[] }>({ ... })
-
-// Rich agent state with custom fields
-interface AgentState {
- messages: BaseMessage[];
- plan: string[];
- currentStep: number;
- results: Record;
-}
-streamResource({ ... })
+// Human-in-the-loop
+agent.interrupt() // Signal — agent is paused
+
+// Debugging
+agent.history() // Signal — checkpoint timeline
+agent.branch() // Signal — time-travel branch
+
+// Multi-agent
+agent.subagents()  // Signal — sub-agent activity
+```
+
+You don't configure SSE, parse events, manage WebSocket connections, or handle reconnection. streamResource() does all of that. You call `submit()` and read Signals — that's the entire API surface for your Angular code.
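+
+For contrast, here's a sketch of the manual plumbing streamResource() replaces. The endpoint path follows LangGraph Platform's runs/stream REST API; error handling and reconnection are omitted:
+
+```typescript
+async function manualStream(apiUrl: string, threadId: string, messages: unknown[]) {
+  const res = await fetch(`${apiUrl}/threads/${threadId}/runs/stream`, {
+    method: 'POST',
+    headers: { 'Content-Type': 'application/json' },
+    body: JSON.stringify({ assistant_id: 'chat_agent', input: { messages } }),
+  });
+  const reader = res.body!.getReader();
+  const decoder = new TextDecoder();
+  while (true) {
+    const { done, value } = await reader.read();
+    if (done) break;
+    // Parse SSE frames, diff agent state, and push updates into your
+    // own signals here. streamResource() does all of this for you.
+    console.log(decoder.decode(value));
+  }
+}
+```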
+
+## Graph API vs Functional API
+
+LangGraph offers two ways to define agents:
+
+**Graph API** (recommended for most cases):
+```python
+builder = StateGraph(State)
+builder.add_node("model", call_model)
+builder.add_edge(START, "model")
+graph = builder.compile()
+```
+
+**Functional API** (for simpler workflows):
+```python
+from langgraph.func import entrypoint, task
+
+@task
+def call_model(messages: list):
+    return llm.invoke(messages)
+
+# Reuse the checkpointer from the persistence pattern above
+@entrypoint(checkpointer=checkpointer)
+def agent(messages: list):
+    # Tasks return futures; .result() waits for completion
+    return call_model(messages).result()
+```
+
+Both APIs produce a deployable runnable and work identically with streamResource(). Choose the Graph API when you need conditional routing, subgraphs, or interrupts; choose the Functional API for simple, linear workflows.
+
## What's Next
- Understand the planning, tool-calling, and execution lifecycle.
- Stream token-by-token responses from your LangGraph agent.
- Learn how streamResource exposes agent state as Angular Signals.
+
+- Deep dive into the planning, tool-calling, and execution lifecycle
+- Stream token-by-token responses with multiple stream modes
+- Build human-in-the-loop approval flows
+- Compose multi-agent systems with orchestrators
+- Thread-based conversation persistence
+- How Signals power streamResource's reactive model