Ramble an idea at your bot. Get a polished GitHub issue back.
Capture ideas before they disappear. blurt takes a rough message – a shower thought, a half-formed feature idea, a bug you noticed – and turns it into a structured GitHub issue, automatically.
No annoying forms. No more messy Notes. Just a message to your bot.
- Discord + Telegram bots – send ideas from wherever you already are
- Multiple LLM providers – Gemini, OpenAI, Anthropic, or local Ollama
- Auto-structured output – title, summary, and tags generated from your raw input
- GitHub issue creation – issues land directly in your repo
- Self-hostable – runs with a single `docker compose up -d`
- Minimal friction – no UI, no accounts, just a bot message
You send:
> what if the rocket could like, figure out on its own when to abort the landing burn, based on terrain or whatever, instead of us hardcoding the altitude threshold. some kind of adaptive thing. probably hard idk
blurt creates:
- Title: Adaptive landing burn abort trigger based on terrain sensing
- Summary: Instead of a hardcoded altitude threshold, the landing burn abort logic should dynamically adjust based on real-time terrain data. This would allow the system to respond to uneven or unexpected ground conditions rather than relying on a fixed value.
- Tags: `landing`, `autonomy`, `guidance`
- Issue: https://github.com/you/rocketship/issues/42
- Send a rough idea to your Discord or Telegram bot
- blurt passes it to your configured LLM, which generates a title, summary, and tags
- A GitHub issue is created in your repo
- The bot replies with the issue title and link
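The last two steps amount to one call against the GitHub REST API (`POST /repos/{owner}/{repo}/issues`). A minimal sketch, assuming the example idea above — the exact payload blurt's backend sends may differ:

```shell
# Hedged sketch of the issue-creation step. OWNER and REPO are placeholders.
OWNER="you"
REPO="rocketship"

# Build the request body from the LLM's structured output.
PAYLOAD=$(cat <<EOF
{
  "title": "Adaptive landing burn abort trigger based on terrain sensing",
  "body": "Instead of a hardcoded altitude threshold, adjust the abort logic using real-time terrain data.",
  "labels": ["landing", "autonomy", "guidance"]
}
EOF
)
echo "$PAYLOAD"

# With GITHUB_TOKEN exported, the issue is created like this; the response
# JSON includes html_url, which is what the bot replies with:
#   curl -s -X POST \
#     -H "Authorization: Bearer $GITHUB_TOKEN" \
#     -H "Accept: application/vnd.github+json" \
#     "https://api.github.com/repos/$OWNER/$REPO/issues" \
#     -d "$PAYLOAD"
```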
You'll need:
- Docker + Docker Compose
- A GitHub repo and a personal access token with Issues write permission
- An LLM API key – Gemini recommended (free tier available)
- A Discord bot token (Discord Developer Portal) or a Telegram bot token (@BotFather)
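Roughly speaking, these prerequisites are what the quickstart will ask you for. The variable names below are hypothetical, shown only to illustrate the shape of the configuration — the actual names are defined by the quickstart script:

```shell
# Hypothetical config sketch (not blurt's real variable names):
GITHUB_TOKEN=ghp_xxx           # PAT with Issues write permission
GITHUB_REPO=you/rocketship     # where issues land
LLM_PROVIDER=gemini            # or openai / anthropic / ollama
GEMINI_API_KEY=xxx             # key for the chosen provider
DISCORD_BOT_TOKEN=xxx          # or a Telegram bot token instead
```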
Run the quickstart:
```
curl -fsSL https://raw.githubusercontent.com/sholgaat/blurt/main/quickstart.sh -o /tmp/blurt-setup.sh && bash /tmp/blurt-setup.sh
```

The script collects your credentials and tells you when to run `docker compose up -d`.
Prefer to inspect first:
```
curl -fsSL https://raw.githubusercontent.com/sholgaat/blurt/main/quickstart.sh -o blurt-setup.sh
# read blurt-setup.sh
bash blurt-setup.sh
```

By default, Docker pulls pre-built images. To build locally instead, before running `docker compose up -d`:
```
mv docker-compose.override.yml.disabled docker-compose.override.yml
```

Don't trust Big LLM with your ideas? No problem – bring your own.
Run the quickstart and select Ollama as your LLM provider when prompted. You'll be asked for the Ollama base URL – you can either:
- Already have Ollama running? Provide the URL (e.g., `http://192.168.1.100:11434`)
- Want to set it up with blurt? Download the Ollama compose file and include it when running:

```
curl -fsSL https://raw.githubusercontent.com/sholgaat/blurt/main/docker-compose.ollama.yml -o docker-compose.ollama.yml
docker compose -f docker-compose.yml -f docker-compose.ollama.yml up -d
docker compose exec ollama ollama pull phi3:mini
```

The Docker network will make Ollama accessible at `http://ollama:11434` from the backend.
Model data is persisted in the `ollama-data` volume across restarts.
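To see what the backend's traffic to Ollama looks like, here is a rough sketch of a request against Ollama's `/api/generate` endpoint. The endpoint and fields (`model`, `prompt`, `stream`) are Ollama's documented API; the prompt text is illustrative and not blurt's actual prompt:

```shell
# In-network base URL from the compose stack; from the host you would use
# http://localhost:11434 instead, if the port is published.
OLLAMA_URL="http://ollama:11434"

# Build a non-streaming generation request for the pulled model.
REQ=$(cat <<EOF
{
  "model": "phi3:mini",
  "prompt": "Rewrite this rough note as an issue title, summary, and tags: ...",
  "stream": false
}
EOF
)
echo "$REQ"

# With the stack running, send it like this; the reply JSON carries the
# generated text in its "response" field:
#   curl -s "$OLLAMA_URL/api/generate" -d "$REQ"
```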