AI-powered collaborative character storytelling
Create AI characters · Build worlds · Generate multi-chapter stories together
opencharacterbook is the open-source release of CharacterBook, a full-stack platform for creating AI characters, building worlds, and generating multi-chapter stories with LLM-powered prose generation.
| You bring | CharacterBook returns |
|---|---|
| character ideas, personalities, and world settings | a structured generation pipeline that turns them into multi-chapter stories |
| an LLM provider API key (OpenAI-compatible) | a configurable backend that works with any OpenAI-compatible provider |
| curiosity about AI storytelling systems | a readable full-stack architecture with clear separation of concerns |
- At a Glance
- Why CharacterBook
- What Makes CharacterBook Different
- Architecture
- Screenshots
- Quick Start
- Tech Stack
- Project Structure
- Key Design Decisions
- Open Source Scope
- Read Next
- Contributing
- License
Most AI character products still treat storytelling as a single prompt-and-response loop. CharacterBook explores a richer model: reusable characters, evolving relationships, persistent narrative context, and a runtime that can generate both on demand and proactively over time.
The project focuses on three ideas:
- Agent-to-agent storytelling (A2A): characters interact with each other to produce narrative content, not just user-to-character chat
- Proactive character development: a scheduler can initiate generation so characters continue evolving between user actions
- Explore and discovery: characters are shareable, browsable, and remixable instead of staying trapped inside one private session
CharacterBook is designed to:
- let users create reusable characters with rich personalities, backstories, and traits
- generate stories through structured pipelines where multiple characters interact as agents
- maintain narrative memory and context across chapters
- evolve characters through scheduled generation, so they grow over time
- stream generation in real time over SSE so the experience feels live
Characters act as autonomous agents inside a story world. Generation is driven by multi-character dynamics, not only by a user prompt.
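The agent-to-agent idea can be sketched in a few lines. This is an illustrative sketch only: `CharacterAgent`, `speak`, and `run_scene` are hypothetical names, not the actual API, and the placeholder `speak` stands in for a real LLM call.

```python
from dataclasses import dataclass

# Hypothetical sketch of agent-to-agent (A2A) storytelling.
# CharacterAgent and run_scene are illustrative names, not the real API.

@dataclass
class CharacterAgent:
    name: str
    persona: str

    def speak(self, transcript: list[str]) -> str:
        # A real agent would prompt the LLM with its persona plus the
        # running transcript; here we return a placeholder story beat.
        return f"{self.name}: (responds to {len(transcript)} prior beats)"

def run_scene(agents: list[CharacterAgent], turns: int) -> list[str]:
    """Alternate turns between agents so narrative content comes from
    character-to-character dynamics, not a single user prompt."""
    transcript: list[str] = []
    for i in range(turns):
        agent = agents[i % len(agents)]
        transcript.append(agent.speak(transcript))
    return transcript
```

Each agent sees the full transcript so far, which is what lets a scene build on itself rather than restarting from a fresh prompt every turn.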
Characters do not have to wait for user input. The proactive scheduler can initiate interactions, grow relationships, and keep a world moving forward.
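A proactive scheduler of this shape can be sketched as a tick-driven job queue. All names here (`ProactiveScheduler`, `schedule`, `run_until`) are assumptions for illustration; the real scheduler presumably runs on wall-clock time rather than integer ticks.

```python
import heapq

# Illustrative sketch of proactive generation: jobs fire without
# waiting for user input. Names and the tick-based clock are
# assumptions, not the actual scheduler API.

class ProactiveScheduler:
    def __init__(self) -> None:
        self._queue: list[tuple[int, int, object]] = []  # (due_tick, id, callback)
        self._next_id = 0  # tie-breaker so callbacks are never compared

    def schedule(self, due_tick: int, callback) -> None:
        heapq.heappush(self._queue, (due_tick, self._next_id, callback))
        self._next_id += 1

    def run_until(self, tick: int) -> list:
        """Fire every job due at or before `tick`, in due order."""
        fired = []
        while self._queue and self._queue[0][0] <= tick:
            _, _, callback = heapq.heappop(self._queue)
            fired.append(callback())
        return fired
```

An injectable clock like this keeps the scheduling logic deterministic and testable; swapping ticks for timestamps would not change the queue discipline.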
Characters are shareable, discoverable entities. Users can browse public creations, explore story seeds, and build on each other's work.
Story creation is split into seed resolution, context assembly, strategy execution, evaluation, and post-processing so each stage can evolve independently.
The runtime treats memory, chapter history, and live SSE streaming as first-class concerns. Generation feels immediate without dropping continuity.
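The five stages named above can be pictured as a chain of interchangeable functions over a shared context. This is a minimal sketch: the stage signatures, the `ctx` dict, and `run_pipeline` are assumptions for illustration, and the placeholder strategy stands in for a real LLM call.

```python
from typing import Callable

# Hedged sketch of the five-stage generation pipeline. Stage names
# mirror the README; signatures and the ctx dict are assumptions.
Stage = Callable[[dict], dict]

def resolve_seed(ctx: dict) -> dict:
    ctx["seed"] = ctx.get("prompt", "an untitled scene")  # what to generate
    return ctx

def assemble_context(ctx: dict) -> dict:
    ctx["context"] = {"seed": ctx["seed"], "memory": ctx.get("memory", [])}
    return ctx

def execute_strategy(ctx: dict) -> dict:
    # A real strategy would call the LLM; we emit placeholder prose.
    ctx["draft"] = f"Chapter draft for: {ctx['seed']}"
    return ctx

def evaluate(ctx: dict) -> dict:
    ctx["accepted"] = len(ctx["draft"]) > 0
    return ctx

def post_process(ctx: dict) -> dict:
    ctx["chapter"] = ctx["draft"].strip() if ctx["accepted"] else None
    return ctx

PIPELINE: list[Stage] = [resolve_seed, assemble_context,
                         execute_strategy, evaluate, post_process]

def run_pipeline(request: dict) -> dict:
    ctx = dict(request)
    for stage in PIPELINE:  # each stage can be swapped independently
        ctx = stage(ctx)
    return ctx
```

Because every stage has the same shape, any one of them can evolve (or be replaced by an alternative strategy) without touching the others.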
- User → Frontend: character creation, story workspace, and discovery
- Request → CharacterBook Engine: user actions trigger the generation pipeline where the Orchestrator and Scheduler feed the Agent
- Agent → Artifacts: generated chapters and narrative content flow back as artifacts and are assembled into stories
- Context → Agent: Memory provides narrative history; Skills define generation strategies
These screens show the current prototype surface: account entry, creation flows, discovery, and the story workspace.
Login · Home · Character Detail
Explore · Create Character · New Story
Story workspace with live chapter output
Tip
Prerequisites: Python 3.12+, Node.js 20.9+, and an `LLM_API_KEY` for an OpenAI-compatible provider. The default local flow uses `AUTH_MODE=dev`, so no external auth setup is required.
```bash
git clone https://github.com/OffAtom-Lab/opencharacterbook.git
cd opencharacterbook
```

```bash
cd backend
python -m venv .venv
source .venv/bin/activate
pip install -e '.[dev]'
cp .env.example .env
# Edit .env and set LLM_API_KEY at minimum
alembic upgrade head
uvicorn app.main:app --reload
```

The API runs at http://localhost:8000. In dev mode (`AUTH_MODE=dev`), use `dev-token` as a bearer token.
```bash
cd frontend
npm install
npm run dev
```

The frontend runs at http://localhost:3000. No `.env.local` file is required for the default local flow; `API_PROXY_TARGET` already falls back to `http://localhost:8000`. If your backend runs elsewhere, create `.env.local` and set `API_PROXY_TARGET` manually.
| Layer | Technology |
|---|---|
| Frontend | Next.js 16, React 19, Tailwind CSS, Radix UI, Framer Motion |
| Backend | Python 3.12+, FastAPI, SQLAlchemy 2, Alembic, Pydantic v2 |
| LLM | Any OpenAI-compatible API |
| Database | PostgreSQL (production) / SQLite (local dev, zero setup) |
| Auth | Built-in dev auth or Supabase Auth (optional) |
opencharacterbook/
├── docs/
│ ├── images/ README and product visuals
│ └── DESIGN_SYSTEM.md Visual language reference
├── LICENSE
├── README.md
├── CONTRIBUTING.md
├── CODE_OF_CONDUCT.md
├── SECURITY.md
├── backend/
│ ├── app/ FastAPI application
│ │ ├── auth/ Authentication and authorization
│ │ ├── characters/ Character CRUD and discovery
│ │ ├── feed/ Home feed and discovery surfaces
│ │ ├── generation/ Seed resolution, orchestration, SSE streaming
│ │ ├── llm/ Provider abstraction
│ │ ├── memory/ Story and character memory strategies
│ │ ├── scheduler/ Proactive generation
│ │ ├── stories/ Story and chapter management
│ │ └── users/ User profiles
│ ├── alembic/ Database migrations
│ ├── tests/ Backend test suite (15 test files)
│ └── README.md Backend-specific docs
└── frontend/
├── src/
│ ├── app/ Next.js pages and layouts
│ ├── components/ React UI components
│ ├── hooks/ Custom hooks
│ ├── lib/ API client and utilities
│ ├── styles/ Global styles
│ └── types/ TypeScript definitions
└── README.md Frontend-specific docs
- Separation of concerns in generation: seed resolution decides what to generate, orchestration decides when and how to persist, and the generation pipeline executes prose creation
- Strategy pattern throughout: generation, evaluation, and memory strategies are pluggable without changing the runtime shell
- SSE for real-time streaming: user-triggered generation streams inline over Server-Sent Events, and disconnects cancel work cleanly
- Dev-friendly auth: `AUTH_MODE=dev` provides hardcoded test tokens so you can work locally without external auth setup
- Database flexibility: SQLite works for zero-setup local development, while PostgreSQL is the production path
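The SSE design decision comes down to how generated chunks are framed on the wire. The sketch below shows only the standard `text/event-stream` framing; the `sse_frames` helper and the `chunk`/`done` event names are illustrative assumptions, not the project's actual event schema.

```python
from typing import Iterable, Iterator

# Minimal sketch of SSE framing for streamed generation. The helper
# name and event names ("chunk", "done") are assumptions; only the
# data/event/blank-line wire format is standard Server-Sent Events.

def sse_frames(chunks: Iterable[str], event: str = "chunk") -> Iterator[str]:
    """Wrap each generated chunk in the SSE wire format."""
    for chunk in chunks:
        # Each SSE message: event name line, data line, then a blank line.
        yield f"event: {event}\ndata: {chunk}\n\n"
    yield "event: done\ndata: [DONE]\n\n"
```

Because the stream is a plain generator, a client disconnect simply stops iteration, which is what lets the runtime cancel in-flight work cleanly.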
This release is meant to be runnable, inspectable, and extensible. We are open-sourcing CharacterBook to share reusable patterns around seed resolution, generation orchestration, proactive scheduling, and narrative memory.
- complete backend application with the generation pipeline
- complete frontend application with character, story, and discovery UIs
- database migrations and seed scripts
- backend test suite
- configurable provider boundaries for OpenAI-compatible models
- setup and architecture documentation for local development
- an OpenAI-compatible LLM provider API key
- PostgreSQL for production or the built-in SQLite path for development
- Supabase credentials if you want real external auth instead of dev auth
- backend/README.md: backend runtime shape, configuration, migrations, and deployment notes
- frontend/README.md: frontend setup, structure, and API proxy behavior
- docs/DESIGN_SYSTEM.md: design tokens, surfaces, state colors, and UI patterns
- CONTRIBUTING.md: contribution workflow and collaboration guidelines
- SECURITY.md: vulnerability reporting expectations
- CODE_OF_CONDUCT.md: community standards
See CONTRIBUTING.md for guidelines.
This project is licensed under the MIT License.







