# SadBot

A melancholy robot that discusses existentialism. Built with Go (backend) and plain HTML/CSS/JS (frontend). Optionally powered by a local LLM via any OpenAI-compatible inference server.

## Quick Start

```sh
# Build and run with canned responses
make run

# Open http://localhost:8080
```

## LLM Mode

SadBot can use a local LLM for AI-powered existential conversations. When the LLM is unavailable, it falls back to built-in canned responses.

### Supported Servers

Any server exposing /v1/chat/completions (OpenAI-compatible):

| Server    | Default URL            | Make Target        |
|-----------|------------------------|--------------------|
| Ollama    | http://localhost:11434 | `make run-ollama`  |
| llama.cpp | http://localhost:8080  | `make run-llama`   |
| LM Studio | http://localhost:1234  | `make run-lmstudio`|
| vLLM      | http://localhost:8000  | `make run-vllm`    |

### Usage

```sh
# Ollama with llama3
ollama run llama3  # in another terminal
make run-ollama

# Ollama with a different model
make run-ollama MODEL=mistral

# llama.cpp: its default port (8080) clashes with SadBot's,
# so run-llama starts SadBot on port 3000 instead
./llama-server -m model.gguf  # in another terminal
make run-llama

# Custom server
make run-llm LLM_URL=http://localhost:5000 MODEL=my-model

# Full manual control
./sadbot \
  -llm-url=http://localhost:11434 \
  -llm-model=llama3 \
  -llm-temperature=0.9 \
  -llm-max-tokens=256 \
  -llm-timeout=30s \
  -port=3000
```

## CLI Flags

| Flag               | Default | Description                                 |
|--------------------|---------|---------------------------------------------|
| `-port`            | 8080    | Server port                                 |
| `-llm-url`         | (none)  | Base URL of the inference server            |
| `-llm-model`       | (none)  | Model name (required for Ollama)            |
| `-llm-api-key`     | (none)  | API key (most local servers don't need this)|
| `-llm-temperature` | 0.8     | Temperature (0.0-2.0)                       |
| `-llm-max-tokens`  | 512     | Max tokens per response                     |
| `-llm-timeout`     | 60s     | Timeout for LLM requests                    |

## API

| Endpoint      | Method | Description                     |
|---------------|--------|---------------------------------|
| `/api/chat`   | POST   | Send a message, get a response  |
| `/api/status` | GET    | Server status and LLM info      |
| `/api/topics` | GET    | Available existential topics    |

### POST /api/chat

```jsonc
// Request
{ "session_id": "abc123", "message": "Is life absurd?" }

// Response
{
  "text": "Camus would say yes...",
  "mood": "melancholy",
  "turn_count": 3,
  "topics_visited": ["absurdism"],
  "robot_face": ":/",
  "llm_powered": true
}
```

## Development

```sh
make build        # Build binary
make test         # Run tests
make test-race    # Run tests with race detector
make vet          # Run go vet
make clean        # Remove binary
```

## Project Structure

```text
cmd/server/main.go            Server entry point, CLI flags, graceful shutdown
internal/robot/sadbot.go      Dialogue engine, mood system, session management
internal/handlers/handlers.go HTTP handlers, rate limiting, input validation
internal/llm/client.go        OpenAI-compatible LLM client with retries
internal/llm/prompt.go        SadBot personality system prompt
static/index.html             Frontend (animated robot, chat UI, starfield)
```

## Tests

60 tests total, all passing with `-race`:

- `internal/robot`: 14 tests (dialogue, mood, sessions, concurrency)
- `internal/handlers`: 11 tests (API, rate limiting, validation)
- `internal/llm`: 35 tests (client, retries, health checks, concurrency)

## Zero Dependencies

The entire project uses only the Go standard library. No external modules.
