The open-source, multi-agent AI workspace, built for builders.

Demo • Features • Architecture • Getting Started • Tech Stack • Contributing
Openzess is a full-stack, provider-agnostic AI workspace that gives you complete control over how AI agents operate, collaborate, and integrate into your development workflow. Unlike closed ecosystems, Openzess lets you bring any LLM provider (Gemini, OpenAI, Anthropic, DeepSeek, Groq, Qwen, Ollama, and more) and orchestrates them through a unified, production-grade interface.

It ships with a multi-agent debate engine, parallel swarm execution, MCP protocol support, Tavern-compatible persona imports, background task scheduling, and a full tool-calling runtime, all wrapped in a polished React + FastAPI application.
## Demo

Dark mode walkthrough: navigating Chat, Debate Arena, Tavern, and more.

Cinematic boot sequence with provider authentication flow.
## Features

### Core

| Feature | Description |
|---|---|
| Universal Provider Support | Gemini, OpenAI, Anthropic, Groq, DeepSeek, Qwen, GLM, Kimi, Ollama (local). Swap models at runtime. |
| Tool-Calling Runtime | Terminal execution, file I/O, code editing, web search, URL scraping, all with human-in-the-loop approval. |
| Streaming Chat | Real-time SSE-based streaming with full Markdown rendering and syntax highlighting. |
| Session Persistence | SQLite-backed conversation history with session management and cross-device hydration. |
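The Streaming Chat feature above delivers tokens over SSE. As a rough illustration of what a client does with such a stream, here is a minimal parser for `data:` lines; the event shape (`delta` field, `[DONE]` sentinel) is an assumption for the sketch, not Openzess's actual wire format:

```python
import json

def parse_sse(raw: str) -> list:
    """Decode a text/event-stream payload into a list of JSON events.

    Assumes one `data: {...}` line per event and a `[DONE]` end-of-stream
    sentinel (a common convention; the real event shape may differ).
    """
    events = []
    for line in raw.splitlines():
        line = line.strip()
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload == "[DONE]":  # stop at the end-of-stream marker
                break
            events.append(json.loads(payload))
    return events

# Example stream as a client might receive it, chunk by chunk:
stream = 'data: {"delta": "Hello"}\ndata: {"delta": " world"}\ndata: [DONE]\n'
tokens = [e["delta"] for e in parse_sse(stream)]
print("".join(tokens))  # -> Hello world
```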
### Multi-Agent Collaboration

| Feature | Description |
|---|---|
| Warroom Debate | Sequential multi-round debate engine: agents argue, critique, and reach consensus. A Judge synthesizes the verdict. |
| CollaborationRoom | Parallel swarm dispatch: up to 10 agents across different providers respond simultaneously. |
| Agent Personas | Pre-configured roles (Architect, Scraper, CodeGen) with full custom persona support. |
| Tavern Card Import | Import SillyTavern/TavernAI `.png` and `.json` character cards for multi-character roleplaying. |
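The sequential debate pattern described above can be sketched with stub agents: each agent responds in turn, seeing the full transcript so far, and a judge synthesizes a verdict at the end. This is an illustrative simplification, not the actual Warroom engine:

```python
def run_debate(topic: str, agents: dict, judge, rounds: int = 2) -> list:
    """Sequential multi-round debate: every agent speaks once per round,
    with access to the whole transcript; a judge closes the debate.
    `agents` maps names to callables standing in for real LLM calls."""
    transcript = [f"Topic: {topic}"]
    for rnd in range(1, rounds + 1):
        for name, agent in agents.items():
            reply = agent(rnd, transcript)
            transcript.append(f"[round {rnd}] {name}: {reply}")
    transcript.append(f"Judge: {judge(transcript)}")
    return transcript

# Stub agents; a real deployment would call different providers here.
agents = {
    "Optimist": lambda rnd, t: f"pro argument #{rnd}",
    "Skeptic":  lambda rnd, t: f"con argument #{rnd}",
}
judge = lambda t: f"verdict after {len(t) - 1} statements"
log = run_debate("adopt MCP?", agents, judge)
```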
### Integrations & Automation

| Feature | Description |
|---|---|
| MCP Protocol | Model Context Protocol support: connect external tools (GitHub, PostgreSQL, Filesystem, etc.) via stdio or SSE transports. |
| Background Workers | Cron job scheduler and filesystem watchdog for automated task execution. |
| Channels | Telegram and Discord bot bridges: extend Openzess conversations to messaging platforms. |
| Developer API | OpenAI-compatible and Anthropic-compatible REST endpoints (`/v1/chat/completions`, `/v1/messages`). |
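Because the developer API above is OpenAI-compatible, requests follow the standard chat-completions shape. This sketch only builds the request body; the base URL, port, and model name are assumptions taken from the dev setup, and the actual network call is left commented so the snippet stays self-contained:

```python
import json

BASE_URL = "http://localhost:8000"  # assumed backend port from the dev setup

def chat_payload(model: str, user_message: str) -> dict:
    """Build a standard OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,
    }

body = chat_payload("gemini/gemini-2.0-flash", "Hello, Openzess!")

# To actually send it (requires a running backend):
# import urllib.request
# req = urllib.request.Request(
#     f"{BASE_URL}/v1/chat/completions",
#     data=json.dumps(body).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```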
### Interface & Visualization

| Feature | Description |
|---|---|
| Light & Dark Themes | Full dual-theme support with smooth transitions. |
| Graphify | Visual knowledge graph renderer for relationship mapping. |
| Canvas / Knowledge Base | Structured document viewer and content workspace. |
| TTS Engine | Built-in text-to-speech synthesis via gTTS. |
| Matrix Viewer | Virtual display streaming interface for remote desktop interaction. |
## Architecture

```
┌──────────────────────────────────────────────────────────────────────┐
│                       FRONTEND (React + Vite)                        │
│                                                                      │
│  ┌───────────┐  ┌──────────┐  ┌────────────┐  ┌────────────────────┐ │
│  │   Chat    │  │  Debate  │  │   Collab   │  │   MCP / Skills /   │ │
│  │ Dashboard │  │  Arena   │  │    Room    │  │ Tavern / Settings  │ │
│  └─────┬─────┘  └────┬─────┘  └─────┬──────┘  └─────────┬──────────┘ │
│        │             │              │                   │            │
│        └─────────────┴──────────────┴───────────────────┘            │
│                           REST + SSE                                 │
└───────────────────────────────┬──────────────────────────────────────┘
                                │
┌───────────────────────────────┼──────────────────────────────────────┐
│                      BACKEND (FastAPI + Python)                      │
│                               │                                      │
│  ┌────────────────────────────┴───────────────────────────────────┐  │
│  │                     server.py (API Router)                     │  │
│  └──┬──────┬──────┬──────┬──────┬──────┬──────┬──────┬────────────┘  │
│     │      │      │      │      │      │      │      │               │
│  ┌──┴───┬──┴───┬──┴───┬──┴───┬──┴───┬──┴───┬──┴───┬──┴──────┐        │
│  │Agent │Swarm │MCP   │Cron  │Tele- │Disc- │TTS   │Database │        │
│  │Core  │Mgr   │Mgr   │Jobs  │gram  │ord   │gTTS  │SQLite   │        │
│  └──┬───┴──┬───┴──┬───┴──────┴──────┴──────┴──────┴─────────┘        │
│     │      │      │                                                  │
│  ┌──┴──────┴──────┴───────────────────────────────────────────┐      │
│  │            LiteLLM Universal Provider Gateway               │      │
│  │   Gemini · OpenAI · Anthropic · DeepSeek · Groq · Ollama    │      │
│  └─────────────────────────────────────────────────────────────┘     │
└──────────────────────────────────────────────────────────────────────┘
```
## Tech Stack

| Layer | Technology | Responsibility |
|---|---|---|
| Frontend | React 18 + TypeScript + Vite | UI, routing, state management, SSE streaming |
| Backend | FastAPI + Python 3.11+ | API routing, agent orchestration, tool execution |
| Agent Core | LiteLLM + Custom Tool Runtime | Multi-provider LLM calls, function calling, approval flow |
| Swarm Manager | Async Python | Parallel multi-agent dispatch and debate orchestration |
| MCP Manager | stdio/SSE subprocess | External tool protocol connections |
| Database | SQLAlchemy + SQLite | Session storage, message history, persona management |
| Channels | python-telegram-bot, discord.py | Cross-platform messaging bridges |
| Memory | ChromaDB (vector store) | Semantic memory vault for agent context |
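The Swarm Manager row above describes parallel multi-agent dispatch in async Python. A hedged sketch of that pattern, using stub coroutines in place of per-provider LLM calls (names and delays are illustrative only):

```python
import asyncio

async def dispatch_swarm(prompt: str, agents: dict) -> dict:
    """Fan one prompt out to every agent concurrently and gather replies,
    the CollaborationRoom pattern. `agents` maps names to async callables."""
    async def ask(name, agent):
        return name, await agent(prompt)

    pairs = await asyncio.gather(*(ask(n, a) for n, a in agents.items()))
    return dict(pairs)

def make_stub(style: str, delay: float):
    """Create a stub agent that simulates provider latency."""
    async def agent(prompt: str) -> str:
        await asyncio.sleep(delay)  # stand-in for a network round trip
        return f"[{style}] {prompt}"
    return agent

agents = {
    "gemini": make_stub("fast", 0.01),
    "claude": make_stub("careful", 0.02),
}
replies = asyncio.run(dispatch_swarm("summarize the design", agents))
```

With real providers, each stub would be replaced by a call through the LiteLLM gateway; `asyncio.gather` is what makes all agents respond simultaneously rather than in turn.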
## Getting Started

### Prerequisites

- Node.js ≥ 18
- Python ≥ 3.11
- Git
- At least one LLM API key (Gemini, OpenAI, etc.), or use Ollama for fully local operation
### Installation

Clone the repository:

```bash
git clone https://github.com/rosdebbu/openzess.git
cd openzess
```

Copy the environment template:

```bash
cp .env.example .env
```

Edit `.env` and add your credentials:

```bash
GEMINI_API_KEY=your_gemini_api_key_here
DATABASE_URL=postgresql://openzess:password@localhost:5432/openzess
```

> Note: SQLite is used by default. PostgreSQL is optional for production deployments.
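The SQLite-by-default behavior noted above amounts to falling back when `DATABASE_URL` is unset. A minimal sketch of that resolution logic; the fallback filename is an assumption, not necessarily what the backend uses:

```python
def resolve_database_url(env: dict) -> str:
    """Return the configured DATABASE_URL, or fall back to local SQLite.

    The sqlite filename here is hypothetical; only the fallback pattern
    itself reflects the documented behavior.
    """
    return env.get("DATABASE_URL", "sqlite:///./openzess.db")

print(resolve_database_url({}))  # SQLite fallback when nothing is configured
print(resolve_database_url({"DATABASE_URL": "postgresql://u:p@h/db"}))
```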
### Install Dependencies

Backend:

```bash
python -m venv venv
venv\Scripts\activate       # Windows
# source venv/bin/activate  # Linux/macOS
pip install -r requirements.txt
```

Frontend:

```bash
cd frontend
npm install
cd ..
```

### Run

Quick start on Windows:

```bat
start.bat
```

Quick start on Linux/WSL:

```bash
chmod +x start_wsl.sh
./start_wsl.sh
```

Or start the services manually.

Terminal 1 (Backend):

```bash
cd backend
uvicorn server:app --host 0.0.0.0 --reload --port 8000
```

Terminal 2 (Frontend):

```bash
cd frontend
npm run dev
```

Or run everything with Docker:

```bash
docker-compose up --build
```

Open your browser and navigate to http://localhost:5173. You will be greeted by the boot sequence. Select a provider and enter your API key to begin.
## Project Structure

```
openzess/
├── frontend/                    # React + TypeScript SPA
│   └── src/
│       ├── pages/               # 22 feature pages
│       │   ├── Chat.tsx         # Main AI chat interface
│       │   ├── DebateArena.tsx  # Sequential multi-agent debate
│       │   ├── WarRoom.tsx      # Parallel swarm collaboration
│       │   ├── Tavern.tsx       # Character persona imports
│       │   ├── Skills.tsx       # Agent persona management
│       │   ├── MCP.tsx          # Model Context Protocol grid
│       │   ├── Channels.tsx     # Telegram & Discord bridges
│       │   ├── CronJobs.tsx     # Background task scheduler
│       │   ├── Graphify.tsx     # Knowledge graph viewer
│       │   ├── KnowledgeBase.tsx # Document canvas
│       │   └── ...              # 12 more feature pages
│       ├── components/          # Sidebar, transitions, avatars
│       ├── context/             # Theme, toast providers
│       └── utils/               # Persona definitions
│
├── backend/                     # FastAPI + Python services
│   ├── server.py                # Main API router (1000+ lines)
│   ├── agent.py                 # LiteLLM agent with tool calling
│   ├── database.py              # SQLAlchemy models & queries
│   ├── swarm_manager.py         # Multi-agent orchestration
│   ├── mcp_manager.py           # MCP protocol handler
│   ├── background_workers.py    # Cron & watchdog services
│   ├── telegram_worker.py       # Telegram bot bridge
│   ├── discord_worker.py        # Discord bot bridge
│   ├── tavern_parser.py         # SillyTavern card importer
│   └── plugin_loader.py         # Dynamic plugin system
│
├── docs/assets/                 # Screenshots & demo videos
├── openzess-docs/               # Documentation site (Docusaurus)
├── docker-compose.yml           # PostgreSQL container
├── start.bat                    # Windows launch script
├── start_wsl.sh                 # Linux/WSL launch script
├── .env.example                 # Environment template
└── README.md
```
## Security

- API keys are stored client-side in `localStorage` and transmitted per-request; they are never persisted server-side
- Tool execution requires explicit human-in-the-loop approval before any terminal command runs
- MCP connections use subprocess isolation via stdio transport
- CORS is configured for local development; restrict `allow_origins` in production
- Environment variables isolate sensitive backend configuration
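The human-in-the-loop approval requirement above amounts to a gate in front of tool execution. A minimal sketch of that gate, with hypothetical names; a real deployment would show a UI prompt and shell out on approval:

```python
def execute_tool(command: str, approve) -> str:
    """Run a tool command only if the approval callback says yes.

    `approve` stands in for the human prompt Openzess shows before any
    terminal command runs. We echo instead of executing, to keep the
    sketch side-effect free (a real runtime would use subprocess.run).
    """
    if not approve(command):
        return "DENIED: " + command
    return "EXECUTED: " + command

# Example policies: deny everything, or allow only read-only listings.
auto_deny = lambda cmd: False
allow_ls = lambda cmd: cmd.startswith("ls")

print(execute_tool("rm -rf /tmp/x", auto_deny))  # DENIED: rm -rf /tmp/x
print(execute_tool("ls -la", allow_ls))          # EXECUTED: ls -la
```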
## Roadmap

- Plugin Marketplace: community-contributed agent skills and MCP servers
- Voice Interface: real-time voice input/output with Whisper + TTS
- Cloud Deployment: one-click Render/Vercel deployment pipeline
- Multi-user Auth: role-based access control for team environments
- Agent Memory: persistent long-term memory across sessions via ChromaDB
- Mobile Responsive: full mobile-first responsive layout
- Simulation Mode: dry-run tool execution for safe testing
## Contributing

Contributions are welcome from AI engineers, full-stack developers, and open-source enthusiasts.

1. Fork this repository
2. Create a feature branch: `git checkout -b feat/your-feature`
3. Commit changes with conventional commits: `git commit -m "feat: add your feature"`
4. Push to your branch: `git push origin feat/your-feature`
5. Open a Pull Request

Guidelines:
- Follow existing code style and component patterns
- Add TypeScript types for all new props/interfaces
- Test with at least two LLM providers before submitting
- Update documentation for user-facing changes
Built by @rosdebbu
Openzess is an open-source project created to give developers full ownership over their AI workspace: no vendor lock-in, no closed ecosystems, just raw control.

This project is licensed under the MIT License; see the LICENSE file for details.

Built with TypeScript, Python, and a lot of coffee.