Caesar is an AI agent platform built in Julia that uses a persistent Julia REPL as its primary tool. Instead of shelling out to Bash, agents evaluate Julia expressions step-by-step, keeping variables in scope across interactions.
The REPL is shared — you can work in it alongside your agents. Define a function, then ask an agent to write tests for it by name. No copy-pasting needed since agents can introspect the same module scope you're working in.
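The shared-scope idea can be sketched in plain Base Julia (the module name and function below are illustrative, not Caesar's actual API): code evaluated into a module stays there, so another party can later look it up by name.

```julia
# A shared module that both you and an agent could evaluate into (illustrative name).
shared = Module(:shared_scope)
Core.eval(shared, :(using Base))   # make Base available, as a REPL module would

# You define a function in the shared module...
Core.eval(shared, :(double(x) = 2x))

# ...and an agent can later discover it by introspection and call it,
# with no copy-pasting of source code.
@show :double in names(shared; all=true)   # true
f = getfield(shared, :double)
@show f(21)                                # 42
```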
Using Julia instead of Bash for tool calls means smaller contexts, fewer tokens, and code that's easier to validate statically. The interpreter runs every expression through a safety system that checks filesystem paths, blocks eval/ENV mutation, and prompts for approval on unknown operations.
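Caesar's actual validator is not shown here, but the core technique, statically walking an expression tree before evaluation, can be sketched in a few lines of Base Julia (the function name and the two rules below are illustrative assumptions, not Caesar's real rule set):

```julia
# Illustrative sketch: reject expressions that call eval or mutate ENV.
# A real validator would cover far more cases; this only shows the shape.
function is_blocked(ex)
    if ex isa Expr
        # Block direct eval calls, e.g. eval(:(...)) or Core.eval(...)
        if ex.head == :call && ex.args[1] in (:eval, :(Core.eval))
            return true
        end
        # Block assignments into ENV, e.g. ENV["KEY"] = "..."
        if ex.head == :(=) && ex.args[1] isa Expr &&
           ex.args[1].head == :ref && ex.args[1].args[1] == :ENV
            return true
        end
        # Recurse into sub-expressions
        return any(is_blocked, ex.args)
    end
    return false
end
```

Because Julia programs are ordinary `Expr` trees, this kind of static check is much easier than validating an opaque Bash string.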
| Feature | Description |
|---|---|
| Julia REPL | Sandboxed interpreter with safety validation and REPL-style soft scope |
| Multi-agent | Create agents with distinct personalities, instructions, and skills. Agents can hand off tasks to each other |
| Pluggable memory | Ori (graph-aware markdown vault) or Hindsight (REST API with auto-extraction) — per-agent |
| Tools | git_branch_and_pr, web_search, email — extensible via ~/Caesar/tools/ |
| Skills | Markdown-defined prompts activated by /name — extensible via ~/Caesar/skills/ |
| Commands | CLI commands via /name — model switching, plugin management |
| Interfaces | TUI (tui.jl) with chat + live REPL log pane, CLI (cli.jl), Telegram gateway |
| LLM support | Ollama, OpenAI, Anthropic, Google, Mistral, DeepSeek, xAI via PromptingTools.jl |
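The skill file format is not specified here, but as an illustration, a hypothetical `~/Caesar/skills/summarize.md` might pair a name with a prompt body (the frontmatter keys below are assumptions, not Caesar's documented schema):

```markdown
---
name: summarize
description: Condense a document into key points
---
Summarize the user's input in at most five bullet points.
Preserve citations and numbers exactly as written.
```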
- Install Kip
- Clone this repo
- Run `julia tui.jl` for the TUI or `julia cli.jl` for the CLI
- Configuration lives in `~/Caesar/config.yaml` (created on first run)
```julia
agent = Agent("Pliny", "You are a concise research assistant.", "Summarize papers. Cite sources.")
```

Only `id`, `personality`, and `instructions` are required — the rest have defaults:
| Keyword | Default |
|---|---|
| `skills` | `Dict{String, Skill}()` |
| `path` | `HOME * "agents" * id` |
| `repl_module` | `Module(Symbol("agent_$id"))` |
| `repl_log` | opens `agents/<id>/repl.log` |
| `config` | `Dict{String, Any}()` |
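The `repl_module` default gives each agent a private namespace. A Base-Julia sketch of what that isolation buys (illustrative, not Caesar internals):

```julia
id = "pliny"
m = Module(Symbol("agent_$id"))   # fresh, empty namespace for this agent

# Code the agent evaluates lands in its own module, not in Main:
Core.eval(m, :(notes = "transformer survey"))

@show m.notes                     # "transformer survey"
@show isdefined(Main, :notes)     # false: the agent's state is isolated
```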
Then talk to it:
```julia
promise = message(agent, "Summarize the latest paper on transformer architectures")
# do other work...
reply = need(promise)
```

Agents on disk (`agents/<id>/` with `soul.md` and `instructions.md`) are loaded automatically on startup by `load_agents!()`. `create_agent!("name", "description")` scaffolds the directory and uses the LLM to generate the personality and instructions.
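The `message`/`need` pair mirrors Julia's own task model: start work without blocking, then block only when you need the result. A Base-only sketch of that pattern, using `@async`/`fetch` as stand-ins (an assumption about `message`/`need` semantics, not Caesar's implementation):

```julia
# Stand-in for message(agent, prompt): kick off work without blocking.
promise = @async begin
    sleep(0.1)                 # pretend this is a slow LLM call
    "Summary: attention is all you need."
end

# ...do other work here while the task runs...

reply = fetch(promise)         # stand-in for need(promise): block until done
@show reply
```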
