Self-hosted AI platform with multi-model chat, persistent memory,
and an extensible skills engine. One Python file. Zero data leakage.
Whether you're shipping code, managing knowledge, or securing AI for your team: your AI, your data, your server.
Every feature, transparent. Nothing hidden behind a login wall.
Claude for deep coding. Grok for live web research. Local models for free, private inference. Each conversation targets a specific model — run them all simultaneously.
Use your existing subscriptions. Claude Pro/Max, ChatGPT Plus/Pro — Apex connects through your existing accounts. Only Grok requires a separate API key. Local models are free.
Supported Models
| Provider | Models | Connection |
|---|---|---|
| Claude (Anthropic) | opus-4-6, sonnet-4-6, haiku-4-5 | Agent SDK — uses existing subscription |
| Codex (OpenAI) | gpt-5.4, gpt-5.3, o3, o4-mini | CLI — uses existing subscription |
| Grok (xAI) | grok-4, grok-4-fast | API key — pay per use |
| Local (Ollama/MLX) | Qwen, Gemma, Llama, Mistral, etc. | Local — zero cost, no internet |
Note: Using existing subscriptions through Apex is for personal, non-commercial use only. For commercial use, use the providers' API plans.
The War Room — four agents (Operations, Architect, Codex, Designer) collaborating in a single conversation. Each agent has its own model, its own persona, its own specialty. Direct them with @mentions. See cost and token usage in real time.
The sidebar tells the story: dedicated channels for Claude, Grok, Codex, a trading room, a marketing agent, local models — an entire AI organization in one interface.
Same experience on mobile. Three agents spinning simultaneously on an iPhone — native SwiftUI, not a web view.
From zero to running in under 2 minutes.
```shell
# Install
pip install fastapi uvicorn python-multipart claude-agent-sdk

# Run
python3 apex.py
```

Open https://localhost:8300. Your Claude subscription is detected automatically.
Add local models (free)
```shell
# Install Ollama (https://ollama.ai)
ollama pull qwen3.5

# Start Apex — Ollama is detected automatically
python3 apex.py
```

Create a Claude channel for heavy tasks, an Ollama channel for quick questions.
Add Grok (web search)
```shell
export XAI_API_KEY=xai-...
python3 apex.py
```

Full stack with mTLS

```shell
export APEX_ENABLE_WHISPER=1
export XAI_API_KEY=xai-...
export GOOGLE_API_KEY=AIza...
APEX_SSL_CERT=cert.pem APEX_SSL_KEY=key.pem APEX_SSL_CA=ca.pem python3 apex.py
```

All models active. Memory with semantic search. Whisper injection. mTLS auth.
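The cert.pem, key.pem, and ca.pem files referenced above can come from a throwaway private CA. A minimal openssl sketch — the CN values and client file names are placeholders, not anything Apex prescribes:

```shell
# 1. Private CA (self-signed, 10 years)
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout ca-key.pem -out ca.pem -subj "/CN=apex-ca"

# 2. Server key + CSR, signed by the CA
openssl req -newkey rsa:2048 -nodes \
  -keyout key.pem -out server.csr -subj "/CN=localhost"
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -days 825 -out cert.pem

# 3. Client certificate for a phone or laptop
openssl req -newkey rsa:2048 -nodes \
  -keyout client-key.pem -out client.csr -subj "/CN=my-phone"
openssl x509 -req -in client.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -days 825 -out client.pem
```

Install client.pem and client-key.pem on the device that should connect; a quick sanity check from another machine is `curl --cacert ca.pem --cert client.pem --key client-key.pem https://your-server:8300/`.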
Fresh install. Three AI providers — Claude, Grok, and ChatGPT — collaborating in a group chat, @mentioning each other, completing a task loop. Guided onboarding channel visible in the sidebar. This is what you get in half an hour.
The Apex server and web app are free and open source. Run it on your machine, use it in your browser — no limits, no costs beyond your own AI subscriptions.
Native apps for iOS, Android, and Desktop are premium — they provide the secure remote access layer that turns your self-hosted server into a mobile-ready AI platform.
| | Free | Apex Pro |
|---|---|---|
| Apex Server | ✅ | |
| Web App (Desktop & Mobile) | ✅ | |
| All AI Models | ✅ | |
| Memory, Skills, Dashboard | ✅ | |
| mTLS / Certificate Auth | ✅ | |
| iOS App (native SwiftUI) | | ✅ $29.99/mo · $249/yr |
| Android App (native Kotlin) | | ✅ Coming soon |
| Desktop App (Electron) | | ✅ Planned |
| Lifetime License (first 500) | | ✅ $499 one-time |
The web app works great on localhost. But when you want to reach your server from your phone on the train, from your laptop at a coffee shop, or from your iPad on the couch — you need:
- Certificate-pinned mTLS — your phone authenticates to your server with a client certificate, not a password
- Push notifications — real-time alerts from your AI, your cron jobs, your monitoring
- Background survival — the connection stays alive when you switch apps
- Gesture navigation — swipe between channels, long-press for actions, native scroll physics
That's the difference between a browser tab and a real app. The native experience is what's premium.
| Platform | Status | Tier |
|---|---|---|
| 🌐 Web App (Desktop) | ✅ Available | Free |
| 🌐 Web App (Mobile) | ✅ Available | Free |
| 📱 iOS (iPhone) | ✅ Available | Premium |
| 🤖 Android | 🚧 In Development | Premium |
| 🖥️ Desktop (Electron) | 📋 Planned | Premium |
server/
├── apex.py ← entry point, startup, router registration
├── ws_handler.py ← WebSocket connections, streaming, session mgmt
├── agent_sdk.py ← Claude SDK integration, auth, turn execution
├── backends.py ← Codex, Grok, Ollama/MLX dispatch
├── model_dispatch.py ← model routing and provider selection
├── routes_chat.py ← chat REST endpoints
├── routes_alerts.py ← alert ingestion, APNs push
├── routes_profiles.py ← persona management
├── routes_models.py ← model listing and config
├── routes_setup.py ← guided onboarding wizard
├── routes_misc.py ← models, usage, license, misc endpoints
├── db.py ← SQLite schema, all database helpers
├── state.py ← shared in-memory state, accessor functions
├── streaming.py ← broadcast helpers, WS send utilities
├── config.py ← constants, version, build metadata
├── env.py ← all os.environ reads (single source of truth)
├── mtls.py ← TLS + mTLS certificate handling
├── context.py ← conversation context assembly
├── memory_extract.py ← memory tag extraction and persistence
├── memory_search.py ← semantic search and recall
├── skills.py ← skill discovery and dispatch
├── tasks.py ← background task management
├── license.py ← license validation and trial gating
├── chat_html.py ← embedded web UI (chat SPA)
├── dashboard.py ← admin dashboard backend
├── dashboard_html.py ← admin dashboard UI
├── setup_html.py ← onboarding wizard UI
├── alert_client.py ← Telegram + push notification delivery
└── log.py ← logging
35 modules. No frameworks. No npm. No build step. The frontend is embedded in the Python server — `python3 apex.py` and everything runs.
Configuration Reference
| Variable | Default | Description |
|---|---|---|
| `APEX_HOST` | `0.0.0.0` | Bind address |
| `APEX_PORT` | `8300` | Port |
| `APEX_MODEL` | `claude-sonnet-4-6` | Default model for new chats |
| `APEX_WORKSPACE` | current dir | Working directory for AI tools |
| `APEX_SSL_CERT` | — | TLS certificate path |
| `APEX_SSL_KEY` | — | TLS private key path |
| `APEX_SSL_CA` | — | CA cert for mTLS client verification |
| `APEX_ENABLE_WHISPER` | `false` | Enable memory whisper injection |
| `APEX_OLLAMA_URL` | `http://localhost:11434` | Ollama server address |
| `APEX_MLX_URL` | `http://localhost:8400` | MLX server address |
| `XAI_API_KEY` | — | xAI API key for Grok |
| `GOOGLE_API_KEY` | — | Google API key for embedding index |
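For unattended hosting, these variables can live in a service unit. A minimal systemd sketch — the install path `/opt/apex` and the `apex` user are assumptions, not defaults Apex ships with:

```ini
# /etc/systemd/system/apex.service — illustrative paths and user
[Unit]
Description=Apex AI server
After=network-online.target
Wants=network-online.target

[Service]
User=apex
WorkingDirectory=/opt/apex
Environment=APEX_PORT=8300
Environment=APEX_ENABLE_WHISPER=1
ExecStart=/usr/bin/python3 /opt/apex/apex.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now apex`.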
Build Your Own Skills
Skills are directories with a SKILL.md file. Drop one in skills/, restart, and it's live.
skills/my-skill/
├── SKILL.md # Metadata + instructions (required)
├── run.sh # Executable entry point (optional)
├── feedback.log # User corrections (auto-generated)
└── metrics.json # Usage tracking (auto-generated)
Two types:
- Executable skills — have a `run.sh`. The server executes it and passes results to the AI.
- Thinking skills — no script. The AI reads the instructions and follows them.
Risk tiers control execution:
| Tier | Behavior | Examples |
|---|---|---|
| 1 | Auto-approve | Read-only analysis, search, formatting |
| 2 | Notify | File modifications, new dependencies |
| 3 | Require approval | API calls, credential access, external writes |
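As an illustration, a tier-1 executable skill could be scaffolded like this. The skill name and the SKILL.md fields shown are guesses at the metadata format, not documented requirements:

```shell
mkdir -p skills/word-count

# Metadata + instructions (required). Field names are illustrative.
cat > skills/word-count/SKILL.md <<'EOF'
# word-count
Counts the words in a given file and reports the total.
Risk tier: 1 (read-only analysis)
EOF

# Executable entry point (optional): the server runs this
# and passes the output to the AI.
cat > skills/word-count/run.sh <<'EOF'
#!/bin/sh
wc -w "$1"
EOF
chmod +x skills/word-count/run.sh
```

Restart Apex and the skill is discovered automatically; feedback.log and metrics.json are generated as it gets used.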
Self-improving: /improve reads a skill's metrics and feedback, then proposes concrete changes.
Requirements
- Python 3.10+
- At least one model provider:
- Claude subscription (Pro/Max/Code), or
- ChatGPT subscription (Plus/Pro), or
- Ollama or MLX for free local inference, or
- xAI API key for Grok
- Mix and match per conversation.
Optional:
- Google API key for semantic search embeddings (free tier sufficient)
- Telegram bot token for mobile alert delivery
Is this a wrapper around the API?
No. Claude runs through the Agent SDK with full tool access (read, write, bash, search). Codex runs through the CLI with sandbox permissions. Local models get a custom tool-calling loop. It's closer to Claude Code than to a simple chat interface.
Can I use it on my phone?
The webapp works in mobile browsers for free. For a native experience with push notifications, background survival, and certificate-pinned security, Apex Pro starts at $29.99/mo ($249/yr or $499 lifetime). Android is in development.
What's free vs. paid?
The server, web app, all AI model integrations, memory system, skills engine, admin dashboard, and mTLS security are all free and open source. Native apps are Apex Pro — $29.99/mo, $249/yr, or $499 lifetime (first 500 units).
Can multiple people use one server?
The current architecture is single-user. Multi-user with RBAC is on the roadmap.
How much does it cost to run?
If you already pay for Claude and/or ChatGPT, Apex adds zero cost for those models. Local models are free. Only Grok requires a separate API key. Hosting is your own hardware.
What if I only want local models?
That works. Install Ollama, pull a model, start Apex. No API keys, no accounts, no internet needed. Full memory system, skills, and dashboard included.
Can I build my own skills?
Yes. Drop a directory with a `SKILL.md` into `skills/` and restart. Skills are auto-discovered, usage-tracked, and self-improving via the `/improve` meta-skill.
Getting Started · Personas · Groups · Contributing · Changelog · License
Elastic License 2.0 — free to use, modify, and self-host. Cannot be offered as a hosted service.


