Describe any page you want in the URL bar, and an AI agent will generate it — fully interactive, entirely simulated. A translation app. A hangman game. A sales dashboard. Thirty seconds ago it didn't exist. There's no codebase, no deployment, no build step. The agent produces it at runtime.
Why did I make this? Full blog here: When the Model Is the Machine - AI agents, runtime software, and what comes after SaaS.
You interact with it — click buttons, submit forms, type into inputs — and every interaction routes back to the agent, which decides how to update the page. The conversation is the state. The model is the runtime.
Yes, this is a party trick. It's slow. It's a concept. But it points at something real.
Built with Strands Agents and Claude on Amazon Bedrock.
You'll need:
- Python 3.14+
- uv
- AWS credentials configured in your environment (via `aws configure` or environment variables) with access to Amazon Bedrock and the `anthropic.claude-opus-4-6-v1` model
Then:

`uv run python main.py`

Open http://localhost:3000. Type a description or pick an example from the landing page.
It's a single Python file running a standard library HTTP server. The agent has exactly two tools: one that generates an HTML page, and one that tells the browser to swap out pieces of the DOM. That's it. There's no framework, no component library, no state management system.
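In sketch form, the two tools might look like this. The function names, parameters, and JSON payload shapes below are assumptions for illustration, not the project's actual API, and the Strands tool decorator is omitted:

```python
import json

# Hypothetical sketch of the agent's two tools. Names and payload
# shapes are assumptions, not the project's real interface.

def render_page(title: str, html: str, css: str) -> str:
    """Generate a full page: the shell replaces the whole app container."""
    return json.dumps({"type": "render", "title": title, "html": html, "css": css})

def update_elements(updates: dict[str, str]) -> str:
    """Targeted DOM swaps: the shell sets innerHTML on each element by id."""
    return json.dumps({"type": "update", "updates": updates})

payload = json.loads(update_elements({"score": "<b>3 / 5</b>"}))
```

Everything else (routing, sessions, the shell page) is plumbing around these two calls.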
- You visit `/?prompt=language+translation+app`.
- The server creates a session with a Claude agent and serves a lightweight shell page — an empty `<div>`, a spinner, and vanilla JS that knows how to receive and render content.
- The shell page fires a POST back to the server with the prompt. The agent generates HTML, CSS, and a title. The server returns them as JSON. The shell injects it into the DOM. The spinner fades. The app appears.
- You interact. The shell captures events through event delegation — form submissions, clicks, keypresses, select changes — formats them as structured text messages, and sends them back to the agent.
- The agent receives each interaction as the next turn in its conversation. It has full context of what it generated and what IDs exist. It responds with either a full re-render or targeted element updates.
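The "structured text messages" step above can be sketched like this. The event shape and the exact wording are assumptions; the real project may format interactions differently:

```python
def format_event(event: dict) -> str:
    """Turn a captured browser event into a text message for the agent's next turn."""
    kind, el = event["kind"], event["id"]
    if kind == "submit":
        # Flatten form fields into a readable key=value listing.
        fields = ", ".join(f"{k}={v!r}" for k, v in event["fields"].items())
        return f"User submitted form #{el}: {fields}"
    if kind == "click":
        return f"User clicked #{el}"
    return f"User {kind} on #{el}"

msg = format_event({"kind": "submit", "id": "translate-form", "fields": {"text": "hola"}})
```

Because the message is plain text, the agent needs no special protocol: each interaction is just another conversational turn.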
No JavaScript is generated by the agent. There is no application state anywhere except inside the model's context window.
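The claim that the conversation is the only state can be pictured as a session object holding nothing but a transcript. This is an illustrative shape, not the project's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """No database, no store: the message list is the application state."""
    messages: list[dict] = field(default_factory=list)

    def user_turn(self, text: str) -> None:
        self.messages.append({"role": "user", "content": text})

    def agent_turn(self, html: str) -> None:
        # The agent's reply (generated HTML or element updates) also lives
        # only in the transcript, so it stays in context for later turns.
        self.messages.append({"role": "assistant", "content": html})

s = Session()
s.user_turn("User clicked #guess-e")
s.agent_turn("<span id='word'>_ e _ _ o</span>")
```

Lose the transcript and the "app" is gone, which is exactly the limitation the next section describes.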
- It's slow. Every interaction is a full round trip to the model — latency in seconds, not milliseconds.
- Sessions live in memory. They're never cleaned up, there's no persistence, and they're lost when the server restarts.
- No auth or rate limiting. Anyone who can reach the server can create sessions.
- Single hardcoded model. You need Bedrock access to `anthropic.claude-opus-4-6-v1` specifically.
- No streaming. Pages generate in full before being sent, so you wait.
- Prompt injection in URLs is not fully sanitized. The `prompt` query parameter is embedded in a script tag with minimal escaping.
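For reference, a common mitigation (not implemented here) is to JSON-encode the parameter before embedding it, which neutralizes quote and `</script>` breakouts. The function below is an illustrative sketch, not the project's code:

```python
import json

def embed_prompt(prompt: str) -> str:
    """Safely embed a user-supplied string inside an inline script tag."""
    # json.dumps escapes quotes and control characters; replacing "</"
    # prevents a literal </script> in the value from closing the tag early.
    safe = json.dumps(prompt).replace("</", "<\\/")
    return f"<script>const PROMPT = {safe};</script>"

out = embed_prompt('</script><script>alert(1)</script>')
```

After encoding, the malicious payload survives only as an inert string literal.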