This example demonstrates how to stream text and objects from a FastAPI backend to a Next.js frontend using three different approaches:

- Text stream with the Vercel AI SDK `useChat` hook
- Object stream as text with the `useObject` hook
- Partial JSON object stream with a custom hook implementation (see the sketch after this list)
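
As a rough illustration of the third approach, here is a minimal sketch of a custom hook that reads the raw response stream and re-parses the accumulated buffer on every chunk. The hook name, endpoint handling, and error strategy here are illustrative assumptions, not the repo's actual implementation:

```tsx
import { useCallback, useState } from "react";

// Minimal sketch of a partial-JSON streaming hook. Names are
// illustrative; the repo's actual hook may differ.
export function usePartialObject<T>(endpoint: string) {
  const [object, setObject] = useState<Partial<T> | null>(null);
  const [isLoading, setIsLoading] = useState(false);

  const submit = useCallback(
    async (body: unknown) => {
      setIsLoading(true);
      const res = await fetch(endpoint, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(body),
      });
      const reader = res.body!.getReader();
      const decoder = new TextDecoder();
      let buffer = "";
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        buffer += decoder.decode(value, { stream: true });
        try {
          // Re-parse the whole buffer on each chunk; until the JSON
          // is complete this throws and we simply wait for more data.
          setObject(JSON.parse(buffer) as Partial<T>);
        } catch {
          // Incomplete JSON so far; keep streaming.
        }
      }
      setIsLoading(false);
    },
    [endpoint]
  );

  return { object, isLoading, submit };
}
```

Re-parsing the whole buffer is the simplest strategy; a lenient partial-JSON parser would let the UI update even while a fragment is still incomplete.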
Original application taken from: https://github.com/vercel-labs/ai-sdk-preview-use-object
The FastAPI backend uses my Simple AI Agents library as its client, but the text streaming example can be replaced with any other API, and the object stream examples can be replaced with an Instructor-patched client.
The FastAPI server is served from the `/api` directory, and `next.config.mjs` is configured to rewrite requests to `/api/:path*`.
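
For reference, the rewrite might look something like this; the `localhost:8000` destination follows from the local setup described below, and the exact shape is a sketch rather than the repo's verbatim config:

```js
/** @type {import('next').NextConfig} */
const nextConfig = {
  async rewrites() {
    return [
      {
        // Proxy frontend requests to the FastAPI server during
        // local development (see the next paragraph).
        source: "/api/:path*",
        destination: "http://localhost:8000/api/:path*",
      },
    ];
  },
};

export default nextConfig;
```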
In local development, the FastAPI server runs on `localhost:8000`. I deployed the FastAPI backend to a standalone server using the `Dockerfile` and `docker-compose.yml` files. Alternatively, when deployed to Vercel, the FastAPI server runs as Python serverless functions, though there seems to be an issue with streaming responses.
Demo videos: `text-stream.mp4`, `expense-text-stream.mp4`, `expense-object-stream.mp4`
Run `create-next-app` with npm, Yarn, or pnpm to bootstrap the example:

```bash
npx create-next-app --example https://github.com/timlrx/next-fastapi-object-stream
```

```bash
yarn create next-app --example https://github.com/timlrx/next-fastapi-object-stream
```

```bash
pnpm create next-app --example https://github.com/timlrx/next-fastapi-object-stream
```
To run the example locally you need to:

- Sign up for an API key with OpenAI or GitHub. Thanks to LiteLLM, 100+ models are supported, but not all providers support object streaming.
- Obtain API keys for each provider you want to use.
- Set the required environment variables as shown in the `.env.example` file, but in a new file called `.env`.
- Run `npm install` and `pip install -r requirements.txt` to install the required dependencies.
- Run `npm run dev` to launch the development server.
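
With the dev server running, the pages talk to FastAPI through the `/api` rewrite. As a hedged sketch of the first approach (the `/api/chat` route path and hook options are assumptions, not the repo's exact code), a text-stream page built on `useChat` could look like:

```tsx
"use client";

import { useChat } from "ai/react";

// Sketch of the useChat text-streaming approach. The "/api/chat"
// path is an assumption; a plain-text stream from FastAPI may also
// require setting the hook's stream protocol option, depending on
// the AI SDK version.
export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: "/api/chat",
  });

  return (
    <form onSubmit={handleSubmit}>
      {messages.map((m) => (
        <div key={m.id}>
          <strong>{m.role}:</strong> {m.content}
        </div>
      ))}
      <input
        value={input}
        onChange={handleInputChange}
        placeholder="Say something..."
      />
    </form>
  );
}
```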