
AI SDK Tutorial

A hands-on tour of Vercel AI SDK v6 for the Intentface team. Seven small lessons, each on its own git branch. main contains them all.

Quickstart

pnpm install
cp .env.local.example .env.local        # add your OPENAI_API_KEY
pnpm dev

Open http://localhost:3000. The home page lists every lesson; /intro is the high-level "how LLMs work" primer.

Lesson index

Each branch is cumulative: lesson-02-tools contains lesson 01 too. Switch with git checkout <branch> to explore each stage.

| #  | Branch                     | Topic                                          | Path                       |
|----|----------------------------|------------------------------------------------|----------------------------|
| 00 | main                       | Repo intro + LLM primer                        | /intro                     |
| 01 | lesson-01-basics           | useChat + streamText route                     | /lesson-01-basics          |
| 02 | lesson-02-tools            | Server-side tool calling                       | /lesson-02-tools           |
| 03 | lesson-03-client-tools     | Client-side tools (onToolCall + addToolOutput) | /lesson-03-client-tools    |
| 04 | lesson-04-tool-approval    | Human-in-the-loop tool approval                | /lesson-04-tool-approval   |
| 05 | lesson-05-custom-data      | Custom data streaming (createUIMessageStream)  | /lesson-05-custom-data     |
| 06 | lesson-06-message-types    | UIMessage vs ModelMessage                      | /lesson-06-message-types   |
| 07 | lesson-07-provider-options | OpenAI reasoningSummary + reasoning models     | /lesson-07-provider-options |

Tip: git diff lesson-01-basics lesson-02-tools shows exactly what each lesson adds.

How LLMs work (60 seconds)

A large language model is a function: (text in) → (probability distribution over next token). Given a prompt, the runtime samples one token, appends it, and runs the model again — that's why responses arrive token-by-token.
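That sample-append-repeat loop can be sketched in a few lines of TypeScript. The lookup-table "model" below is a toy stand-in for a real LLM, which would return a probability distribution to sample from rather than a fixed next token:

```typescript
// Toy "model": maps a context string to its most likely next token.
// A real LLM returns a probability distribution; we hard-code the argmax.
const toyModel = (context: string): string | null => {
  const table: Record<string, string> = {
    "The sky": " is",
    "The sky is": " blue",
    "The sky is blue": ".",
  };
  return table[context] ?? null; // null = end of sequence
};

// Autoregressive generation: sample one token, append it, run again.
function* generate(prompt: string): Generator<string> {
  let context = prompt;
  while (true) {
    const token = toyModel(context);
    if (token === null) return; // model produced a stop token
    context += token;           // append the token and feed it back in
    yield token;                // this is why output streams token-by-token
  }
}

const tokens = [...generate("The sky")];
console.log(tokens.join("")); // " is blue."
```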

Key concepts you'll bump into:

  • Token. A chunk of text (~3–4 chars in English). Models bill and limit by tokens, not characters.
  • Context window. The max tokens (input + output) the model can see in one call. Everything beyond it is invisible.
  • Streaming. The server flushes tokens as they're generated so the UI can render incrementally.
  • Tool calling. The model emits a structured "I'd like to call weather({city: "Helsinki"})". Your code runs the tool and sends the result back; the model continues.
  • Reasoning models (o-series, gpt-5+). Produce hidden chain-of-thought tokens before the final answer. Slower, more expensive, much better at hard problems.
  • System / user / assistant / tool roles. The conversation format every chat model expects.
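Those roles compose into the flat, ordered list every chat model consumes. A sketch with illustrative message contents (the exact wire shapes vary by provider; this is only the common core):

```typescript
// The role-based conversation format, including one tool round-trip.
type Role = "system" | "user" | "assistant" | "tool";

interface ChatMessage {
  role: Role;
  content: string;
}

const conversation: ChatMessage[] = [
  { role: "system", content: "You are a terse weather assistant." },
  { role: "user", content: "What's the weather in Helsinki?" },
  // The model answers with a structured tool call instead of text…
  { role: "assistant", content: 'call weather({"city":"Helsinki"})' },
  // …your code runs the tool and feeds the result back…
  { role: "tool", content: '{"tempC":-3,"sky":"overcast"}' },
  // …and the model continues with a normal text answer.
  { role: "assistant", content: "Helsinki: -3 °C and overcast." },
];

// The model only ever sees this flat, ordered list of role-tagged messages.
console.log(conversation.map((m) => m.role).join(" → "));
```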

The interactive /intro page goes deeper with diagrams.

AI SDK in 60 seconds

The AI SDK is a TypeScript wrapper that gives you:

  • Provider-agnostic core: streamText({ model, messages, tools }) works the same against OpenAI, Anthropic, Gemini, etc. Swap the model argument.
  • useChat React hook — manages messages, status, streaming, and tool execution. You wire it to a Next.js route handler that returns streamText(...).toUIMessageStreamResponse().
  • Two message types: UIMessage (rich, with parts[] for tool calls, data, reasoning) and ModelMessage (plain content for the model). Convert with convertToModelMessages.
  • Tools — defined with tool({ inputSchema, execute }). Server tools run in your route; client tools run in the browser via onToolCall.

Common pitfalls

  • inputSchema, not parameters. Renamed in v6.
  • UIMessage ≠ ModelMessage. Always run convertToModelMessages(messages) before passing to streamText.
  • Render parts[], not content. A v6 message has multiple parts (text, tool call, reasoning, data-...). Iterate over them.
  • Don't trust your training data on the AI SDK. v6 reorganized everything. Check node_modules/ai/docs/ or ai-sdk.dev when in doubt.
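To make the parts[] pitfall concrete, here is a deliberately simplified message shape (real v6 UIMessages carry more fields and part types) and a renderer that iterates parts instead of reading a content string:

```typescript
// Simplified v6-style message: content lives in parts[], not a single string.
type Part =
  | { type: "text"; text: string }
  | { type: "reasoning"; text: string }
  | { type: "tool-weather"; output: unknown };

interface UIMessageLite {
  role: "assistant";
  parts: Part[];
}

// Render by iterating parts — reading a content string would miss everything.
function renderParts(message: UIMessageLite): string[] {
  return message.parts.map((part) => {
    switch (part.type) {
      case "text":
        return part.text;
      case "reasoning":
        return `(thinking) ${part.text}`;
      default:
        return `[tool result: ${JSON.stringify(part.output)}]`;
    }
  });
}

const msg: UIMessageLite = {
  role: "assistant",
  parts: [
    { type: "reasoning", text: "Need the forecast first." },
    { type: "tool-weather", output: { tempC: -3 } },
    { type: "text", text: "It is -3 °C in Helsinki." },
  ],
};
console.log(renderParts(msg).join("\n"));
```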

Out of scope (linked, not implemented)

These are worth knowing but not covered as lessons:

  • Structured output: generateObject / streamObject with a Zod schema, when you want JSON instead of free text.
  • Message persistence — the useChat hook is in-memory. To persist across reloads you save messages on submit/finish in a DB or localStorage. See the chatbot persistence guide.
  • Agent loops with stopWhen — letting the model auto-iterate over tool calls. See building agents.
  • AI Gateway / multi-provider — swap providers via string IDs (e.g. 'openai/gpt-5' → 'anthropic/claude-sonnet-4') without re-importing.
