# SadBot

A melancholy robot that discusses existentialism. Built with Go (backend) and plain HTML/CSS/JS (frontend). Optionally powered by a local LLM via any OpenAI-compatible inference server.
## Quick Start

```sh
# Build and run with canned responses
make run
# Open http://localhost:8080
```

## LLM Support

SadBot can use a local LLM for AI-powered existential conversations. When the LLM is unavailable, it falls back to built-in canned responses.
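The canned-response fallback can be pictured as follows. This is purely illustrative Go, not the project's actual code (which lives in `internal/robot/sadbot.go`); the function signature and the canned lines here are made up.

```go
package main

import "fmt"

// canned holds placeholder fallback lines; the real set is built into the robot.
var canned = []string{
	"The void stares back, as always.",
	"Another question the universe won't answer.",
}

// reply tries the LLM first and falls back to a canned line when the LLM
// is absent (nil) or errors. Illustrative only.
func reply(llmReply func(string) (string, error), msg string, turn int) string {
	if llmReply != nil {
		if out, err := llmReply(msg); err == nil {
			return out
		}
	}
	// LLM missing or failed: rotate through canned responses by turn number.
	return canned[turn%len(canned)]
}

func main() {
	fmt.Println(reply(nil, "Is life absurd?", 0)) // canned fallback
}
```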
Any server exposing `/v1/chat/completions` (OpenAI-compatible):
| Server | Default URL | Make Target |
|---|---|---|
| Ollama | http://localhost:11434 | `make run-ollama` |
| llama.cpp | http://localhost:8080 | `make run-llama` |
| LM Studio | http://localhost:1234 | `make run-lmstudio` |
| vLLM | http://localhost:8000 | `make run-vllm` |
```sh
# Ollama with llama3
ollama run llama3              # in another terminal
make run-ollama

# Ollama with a different model
make run-ollama MODEL=mistral

# llama.cpp (runs SadBot on port 3000 to avoid conflict)
./llama-server -m model.gguf   # in another terminal
make run-llama

# Custom server
make run-llm LLM_URL=http://localhost:5000 MODEL=my-model
```
```sh
# Full manual control
./sadbot \
  -llm-url=http://localhost:11434 \
  -llm-model=llama3 \
  -llm-temperature=0.9 \
  -llm-max-tokens=256 \
  -llm-timeout=30s \
  -port=3000
```

### Flags

| Flag | Default | Description |
|---|---|---|
| `-port` | 8080 | Server port |
| `-llm-url` | (none) | Base URL of the inference server |
| `-llm-model` | (none) | Model name (required for Ollama) |
| `-llm-api-key` | (none) | API key (most local servers don't need this) |
| `-llm-temperature` | 0.8 | Temperature (0.0-2.0) |
| `-llm-max-tokens` | 512 | Max tokens per response |
| `-llm-timeout` | 60s | Timeout for LLM requests |
## API

| Endpoint | Method | Description |
|---|---|---|
| `/api/chat` | POST | Send a message, get a response |
| `/api/status` | GET | Server status and LLM info |
| `/api/topics` | GET | Available existential topics |
```jsonc
// Request
{ "session_id": "abc123", "message": "Is life absurd?" }

// Response
{
  "text": "Camus would say yes...",
  "mood": "melancholy",
  "turn_count": 3,
  "topics_visited": ["absurdism"],
  "robot_face": ":/",
  "llm_powered": true
}
```

## Development

```sh
make build      # Build binary
make test       # Run tests
make test-race  # Run tests with race detector
make vet        # Run go vet
make clean      # Remove binary
```

## Project Layout

- `cmd/server/main.go`: Server entry point, CLI flags, graceful shutdown
- `internal/robot/sadbot.go`: Dialogue engine, mood system, session management
- `internal/handlers/handlers.go`: HTTP handlers, rate limiting, input validation
- `internal/llm/client.go`: OpenAI-compatible LLM client with retries
- `internal/llm/prompt.go`: SadBot personality system prompt
- `static/index.html`: Frontend (animated robot, chat UI, starfield)
## Testing

60 tests total, all passing with `-race`:

- `internal/robot`: 14 tests (dialogue, mood, sessions, concurrency)
- `internal/handlers`: 11 tests (API, rate limiting, validation)
- `internal/llm`: 35 tests (client, retries, health checks, concurrency)
The entire project uses only the Go standard library. No external modules.