Because grep deserves a brain.
AI-powered log analysis using local Ollama LLMs. Built for sysadmins who want accurate insights without false positives or cloud dependencies.
# 1. Start Ollama
docker compose up -d
# 2. Install logllama
sudo install -m 755 <(curl -fsSL https://raw.githubusercontent.com/ryslab/logllama/main/logllama.sh) /usr/local/bin/logllama
# 3. Try it
journalctl --since "10 min ago" | tail -50 | logllama
That's it.
Because you shouldn't accidentally send production secrets to the cloud:
# 😱 This could leak to OpenAI
$ cat docker-compose.yml | chatgpt
"Here's your analysis... (and your AWS_ACCESS_KEY_ID is now in their logs)"
# ✅ This stays 100% local
$ cat docker-compose.yml | logllama
"Configuration analysis completed locally. No data sent externally."Because reading logs is tedious:
# 😴 Traditional approach
$ journalctl -u nginx --since "1 hour ago" | grep -E "(error|fail|critical)" | less
# 15 minutes later... "What does 'upstream timed out' actually mean?"
# ⚡ LogLlama approach
$ journalctl -u nginx --since "1 hour ago" | logllama
"Upstream timeouts to 10.0.1.23:3000 - check if backend service is healthy"Because AI hallucinations waste hours:
# 🤦 Other AI tools invent problems
$ systemctl status sshd | some-ai-tool
"🚨 CRITICAL: Missing healthcheck configuration"
# ...but systemd services don't HAVE healthchecks!
# 🎯 LogLlama knows context
$ systemctl status sshd | logllama
"✅ HEALTHY - Service 'active (running)' with PID 1234"Because correlation is everything:
# 🔗 Connect logs with configs in one command
$ journalctl -u nginx | logllama /etc/nginx/nginx.conf --query "502 errors"
"Port mismatch: nginx.conf proxies to :8080 but service listens on :3000"Because your time is valuable:
- 5 minutes reading cryptic error codes vs.
- 5 seconds getting the root cause and fix
LogLlama: Get answers, not more questions. 🎯
LogLlama aggressively prevents false positives:
- "Healthy by default" - assumes systems work unless proven otherwise
- Content-aware analysis - knows Docker configs vs system logs vs Python tracebacks (sketched below)
- Clear evidence required - won't invent problems from missing optional features
- CompTIA methodology - structured troubleshooting (Identify → Theory → Test)
Stops common AI hallucinations:
- ❌ "Missing healthcheck" in valid docker-compose files
- ❌ "Service problems" from "active (running)" status
- ❌ "Hardware issues" from normal dmesg driver loads
- ❌ "Configuration errors" from minimal but valid configs
- 🎯 Accurate - Anti-hallucination system prevents false positives
- 🔒 100% Local - Your logs never leave your machine
- 🎪 Smart - Content-aware analysis (Docker, systemd, Python, Nginx, etc.)
- 🔍 Multi-Source - Correlate logs + config files for root cause analysis
- ⚡ Actionable - Specific commands and file paths, not vague suggestions
- 🎛️ Flexible - Brief summaries to deep forensic analysis
- 🌐 Universal - Linux, macOS, WSL, BSD. If it has bash, it works
# Download and install in one command
sudo install -m 755 <(curl -fsSL https://raw.githubusercontent.com/ryslab/logllama/main/logllama.sh) /usr/local/bin/logllama
# Verify
logllama --help
# Download and review the script first
curl -fsSL -o logllama.sh https://raw.githubusercontent.com/ryslab/logllama/main/logllama.sh
# Inspect it (always good practice!)
less logllama.sh
# Install with proper permissions
sudo install -m 755 logllama.sh /usr/local/bin/logllama
# Clean up
rm logllama.sh
# Download to temp file, review, then install
curl -fsSL https://raw.githubusercontent.com/ryslab/logllama/main/logllama.sh -o /tmp/logllama.sh && \
less /tmp/logllama.sh && \
sudo install -m 755 /tmp/logllama.sh /usr/local/bin/logllama && \
rm /tmp/logllama.sh
git clone https://github.com/ryslab/logllama.git
cd logllama
# Review the script
less logllama.sh
# Install with proper permissions
sudo install -m 755 logllama.sh /usr/local/bin/logllama
Security Benefits:
- Transparency - You can review the script before installation
- Audit Trail - File persists for later inspection
- Proper Permissions - install sets correct ownership and modes
- No Pipe Risks - Avoids potential pipe interception issues
The easiest way to run Ollama with GPU support:
# Clone repo (includes docker-compose.yml)
git clone https://github.com/ryslab/logllama.git
cd logllama
# Start Ollama with GPU support
docker compose up -d
# Wait for models to download (first time only)
docker compose logs -f ollama-warmup
# Test it works
curl -s http://localhost:11434/api/tags | jq '.models[].name'
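Don't want to clone the repo? A compose file along these lines should be enough to get Ollama up with NVIDIA GPU access (a hypothetical minimal sketch; the repository's docker-compose.yml is the reference and also handles model warmup):
# Hypothetical minimal compose file - the repo's version also pre-pulls models
cat > docker-compose.yml <<'EOF'
services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
volumes:
  ollama:
EOF
docker compose up -d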
# Service logs
journalctl -xeu nginx | logllama
# Kernel errors
dmesg --level=err | logllama
# Live log monitoring
tail -f /var/log/nginx/error.log | logllama
# Correlate service logs with config files
journalctl -xeu nginx | logllama /etc/nginx/nginx.conf
# Docker logs + compose file analysis
docker logs myapp | logllama docker-compose.yml --query "startup issues"
# Multiple log files for correlation
logllama /var/log/nginx/access.log /var/log/nginx/error.log --query "404 errors"
# Brief summary
journalctl -u ssh | logllama --brief
# Detailed analysis
dmesg | logllama --verbose
# Deep forensic analysis with causal chains
journalctl --since "1 hour ago" | logllama --deep
# Custom context
docker logs myapp | logllama --query "database connection timeouts"
| Variable | Default | Description |
|---|---|---|
| MODEL | mistral | Ollama model to use |
| OLLAMA_URL | http://localhost:11434 | Ollama API endpoint |
| MAX_BYTES | 131072 (128 KiB) | Max input size before truncation |
| TIMEOUT | 60 seconds | Request timeout |
| WORD_CAP | 120 words | Response length limit |
MODEL=llama3 OLLAMA_URL=http://localhost:11434 logllama < /var/log/syslog
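The same pattern works for the other variables; for example, raising the input cap and timeout when feeding a large dump (values here are illustrative):
# Illustrative: allow more input and a longer request for a big log dump
MAX_BYTES=262144 TIMEOUT=120 journalctl --since yesterday | logllama --brief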
logllama --help
# --brief, --verbose, --deep, --debug # Output detail levels
# --cap=200 # Custom word limit
# --query "context" # Focus analysis
# --name=filename # Hint for content detection
# --type=journald|docker-compose|python # Force content type
# --dry-run # Show prompt without API call
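For example, to check what would be sent before anything leaves the machine (a hypothetical combination of the flags listed above):
# Preview the generated prompt without calling the Ollama API
dmesg --level=err | logllama --query "disk errors" --dry-run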
$ uname -r | logllama
✅ HEALTHY - No issues detected
Kernel version 6.17.6-2-cachyos represents normal system information, not an error.
$ dmesg | grep -i usb | head -5 | logllama
🔍 INSUFFICIENT EVIDENCE
USB device resets may be normal operation. For diagnostics: dmesg --level=err
$ python3 -c "import json; json.loads('invalid')" 2>&1 | logllama
🚨 GENUINE ISSUE:
1) Problem: JSONDecodeError - Expecting value line 1 column 1
2) Theory: Invalid JSON syntax in input string
3) Test: Validate JSON with: python3 -m json.tool < file.json
- Ollama running locally or remotely
- Models - at least one model pulled (default: mistral)
- Dependencies:
# Debian/Ubuntu
sudo apt install curl jq
# macOS
brew install curl jq
# Arch Linux
sudo pacman -S curl jq
Q: How is this different from other AI log tools?
A: Most AI tools are trained to "find problems" and invent issues. LogLlama assumes systems are healthy unless it sees clear, objective evidence of failures.
Q: What content types does it recognize?
A: Systemd journals, Docker Compose, Python tracebacks, Nginx/Apache errors, dmesg, JSON, YAML, shell scripts, and more.
Q: Can I use my own Ollama models?
A: Yes! Set MODEL=your-model-name to use any Ollama-compatible model.
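For example (assuming the dockerized Ollama setup from above):
# Pull any Ollama-compatible model, then point logllama at it
docker exec ollama ollama pull llama3
MODEL=llama3 journalctl -u nginx --since "1 hour ago" | logllama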
Q: Is there a rate limit?
A: Only what your local Ollama instance can handle. All processing is local.
Ollama not reachable:
# Check if Ollama is running
docker compose ps
curl http://localhost:11434/api/tags
Model not found:
# List available models
docker exec ollama ollama list
# Pull a model
docker exec ollama ollama pull mistral
Slow responses:
- Try smaller models: MODEL=phi logllama
- Increase timeout: TIMEOUT=120 logllama
- Use brief mode: logllama --brief
# Aliases for daily use
alias jl="journalctl -xeu"
alias ai="logllama"
alias aid="logllama --deep"
# Quick service checks
jl nginx.service | ai
jl docker.service --since "1 hour ago" | ai --query "container crashes"
# Multi-service correlation
{ jl nginx.service; jl php-fpm.service; } | ai --query "web stack issues"
| Code | Meaning |
|---|---|
| 0 | Success |
| 64 | Usage error (bad input/flags) |
| 65 | Data error (malformed API response) |
| 69 | Service unavailable (Ollama unreachable) |
| 127 | Missing dependency (curl/jq not found) |
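The distinct codes make it easy to script around; for instance, a cron-driven check could treat "Ollama down" differently from a real failure (a minimal sketch - the service name and alerting are placeholders):
#!/usr/bin/env bash
# Hypothetical periodic check built on logllama's exit codes
report=$(journalctl -u nginx --since "15 min ago" | logllama --brief)
status=$?
case $status in
  0)  echo "$report" ;;                                     # analysis succeeded
  69) echo "Ollama unreachable - skipping this run" >&2 ;;  # service unavailable
  *)  echo "logllama failed with exit code $status" >&2 ;;  # usage, data, or dependency error
esac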
Found a bug? Have an idea? Open an issue!
We welcome:
- New content type detectors
- Documentation improvements
- Bug fixes and performance optimizations
- Additional examples and use cases
MIT License © 2025 Ryan Nisly
Free to use, modify, and distribute. See LICENSE for details.
If LogLlama saves you time:
- ⭐ Star the repo
- 🐛 Report bugs
- 💡 Share your use cases
- 🔄 Tell fellow sysadmins
Because your time is better spent fixing real problems, not chasing AI hallucinations. 🎯