Transform cryptic log files into actionable insights with the power of AI
Quick Start • Documentation • Models • Contributing
- **AI-powered error analysis** - Get root cause analysis and actionable insights on the fly
- **Multiple AI providers** - OpenAI, Anthropic, Google, and local models via Ollama
- **Smart caching** - Avoid redundant API calls for similar logs
- **Works with any log source** - Pipes, files, Docker, Kubernetes, journalctl
```bash
pip install loglense
```

```bash
# Analyze logs from any source
tail -f /var/log/app.log | loglense analyze

# Use with Docker logs
docker logs container_name | loglense analyze

# Analyze with a specific model
kubectl logs pod-name | loglense analyze --model claude-sonnet

# Use local models
journalctl -u nginx | loglense analyze --model ollama-llama3
```

```bash
# Interactive configuration
loglense configure

# View available models
loglense list-available-models

# Check your configuration
loglense show-config
```

The main command for log analysis. It reads from stdin and provides AI-powered insights.
```bash
# Basic analysis
cat error.log | loglense analyze

# Specify a model
tail -100 app.log | loglense analyze --model gpt-4o

# Use a custom API endpoint
cat debug.log | loglense analyze --api-base http://localhost:8000/v1 --model custom-model

# Skip the cache
docker logs app | loglense analyze --no-cache
```

Options:

- `--model`, `-m`: Choose a specific model (see supported models)
- `--api-base`: Custom API endpoint for OpenAI-compatible services
- `--no-cache`: Bypass the cache for a fresh analysis
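Because `analyze` reads from stdin, it composes with anything that can write to a pipe. As an illustration of that classic Unix filter pattern (not LogLense's actual code — `summarize_errors` here is a hypothetical stand-in for the AI analysis step):

```python
import sys

def summarize_errors(text: str) -> str:
    """Count lines that look like errors -- a stand-in for the real AI step."""
    error_lines = [ln for ln in text.splitlines() if "ERROR" in ln.upper()]
    return f"{len(error_lines)} error line(s) found"

def main() -> None:
    # A pipe such as `cat error.log | myfilter` delivers the log on stdin;
    # read it all, analyze, and print the result to stdout.
    print(summarize_errors(sys.stdin.read()))
```

Any tool structured this way slots into the same pipelines shown above.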
Interactive setup for default model and API keys.
```bash
loglense configure
```

Configuration preview:

```
LogLense Configuration
Select Default Model

  #  Provider   Model        Description
  1  OpenAI     gpt-4o       GPT-4 Optimized - Latest multimodal model
  2  OpenAI     gpt-4o-mini  Smaller, faster GPT-4 for simple tasks
  3  Anthropic  opus-4       Anthropic's state-of-the-art flagship model
  4  Anthropic  sonnet-4     Balanced model with superior performance
  ...

Select default model [2]: 1
✓ Default model set to: gpt-4o

API Key Configuration
This model requires: OPENAI_API_KEY
Enter your OPENAI_API_KEY: ••••••••••••••••
✓ API key saved for OPENAI_API_KEY
```
```bash
# View cache location and size
loglense cache path
loglense cache size

# Clear cache
loglense cache clear
```

| Provider | Model Alias | Full ID | Best For |
|---|---|---|---|
| OpenAI | `gpt-4o` | `openai/gpt-4o` | Complex log analysis, multi-step reasoning |
| OpenAI | `gpt-4o-mini` | `openai/gpt-4o-mini` | Fast analysis, simple error detection |
| Anthropic | `opus-4` | `anthropic/claude-opus-4-0` | Flagship model, comprehensive analysis |
| Anthropic | `sonnet-4` | `anthropic/claude-sonnet-4-0` | Balanced performance and speed |
| Anthropic | `sonnet-3.7` | `anthropic/claude-3-7-sonnet` | Advanced log pattern recognition |
| Google | `gemini-flash` | `gemini/gemini-2.0-flash` | Ultra-fast processing |
| Google | `gemini-pro` | `gemini/gemini-2.5-pro` | Advanced reasoning capabilities |
| Local | `ollama-llama3` | `ollama/llama3` | Privacy-focused, offline analysis |
| Local | `ollama-gemma` | `ollama/gemma` | Lightweight local processing |
| Other | `mistral-large` | `mistral/mistral-large-latest` | European AI, GDPR compliant |
| Other | `deepseek-chat` | `deepseek/deepseek-chat` | Cost-effective analysis |
| Provider | Environment Variable | How to Get |
|---|---|---|
| OpenAI | `OPENAI_API_KEY` | platform.openai.com |
| Anthropic | `ANTHROPIC_API_KEY` | console.anthropic.com |
| Google | `GEMINI_API_KEY` | aistudio.google.com |
| Mistral | `MISTRAL_API_KEY` | console.mistral.ai |
| DeepSeek | `DEEPSEEK_API_KEY` | platform.deepseek.com |
| Ollama | None | Install locally |
```bash
tail -1000 /var/log/nginx/error.log | loglense analyze
```

Sample output:

```
LogLense AI Analysis

## Likely Root Cause:
Database connection pool exhaustion causing cascading failures in the web application.

## Sequence of Events:
1. **14:23:15** - Initial database timeout errors begin
2. **14:23:45** - Connection pool reaches maximum capacity (500 connections)
3. **14:24:00** - Application starts rejecting new requests
4. **14:24:30** - Load balancer begins failing health checks

## Recommended Next Steps:
1. **Immediate**: Restart the database connection pool service
2. **Short-term**: Increase connection pool size from 500 to 750
3. **Long-term**: Implement connection pool monitoring and alerting
```

```bash
# Kubernetes
kubectl logs deployment/api-server --tail=500 | loglense analyze --model claude-sonnet

# Docker
docker logs --since=1h webapp-container | loglense analyze --model gemini-pro

# systemd
journalctl -u myapp --since="1 hour ago" | loglense analyze --model gpt-4o
```

```mermaid
graph TD
    A[Log Stream] --> B[Stream Parser]
    B --> C[Error Detection]
    B --> D[Content Hashing]
    C --> E[Cache Check]
    D --> E
    E --> F{Cache Hit?}
    F -->|Yes| G[Return Cached Result]
    F -->|No| H[AI Provider Selection]
    H --> I[OpenAI]
    H --> J[Anthropic]
    H --> K[Google Gemini]
    H --> L[Local/Ollama]
    I --> M[AI Analysis]
    J --> M
    K --> M
    L --> M
    M --> N[Cache Result]
    N --> O[Rich Output]
```
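The "Content Hashing → Cache Check" step above can be sketched as follows. A common approach is to hash normalized log content and key the cache on the digest plus the model name, so similar logs skip the API call; LogLense's exact scheme may differ, and `analyze_with_cache` below is an illustrative stand-in:

```python
import hashlib

def cache_key(log_text: str, model: str) -> str:
    """Derive a stable cache key from log content and the chosen model."""
    # Normalize whitespace so trivially different captures of the same
    # log still map to the same cache entry.
    normalized = "\n".join(
        line.strip() for line in log_text.splitlines() if line.strip()
    )
    digest = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
    return f"{model}:{digest}"

cache: dict[str, str] = {}

def analyze_with_cache(log_text: str, model: str) -> str:
    key = cache_key(log_text, model)
    if key in cache:
        return cache[key]  # cache hit: no API call
    result = f"analysis of {len(log_text)} bytes"  # stand-in for the AI call
    cache[key] = result
    return result
```

Including the model in the key matters: the same log analyzed by `gpt-4o` and `gemini-flash` should produce two distinct cache entries.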
We love contributions! Here's how you can help make LogLense even better:
Found a bug or have an idea? Open an issue with:
- Clear description of the problem/feature
- Steps to reproduce (for bugs)
- Your environment details
- Sample log files (anonymized)
```bash
# Clone the repository
git clone https://github.com/yourusername/loglense.git
cd loglense

# Create a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install in development mode
pip install -e .

# Install development dependencies
pip install pytest black flake8 mypy

# Run tests
pytest

# Format code
black loglense_package/
```

- Testing Framework: Add a comprehensive test suite for reliability
- New AI Providers: Add support for additional AI services
- Documentation: More real-world examples and use cases
- Error Handling: Improve network timeout and rate-limit handling
- Configuration: Enhanced config validation and migration
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Write tests for your changes
- Ensure your code follows Python best practices
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- API Keys: Stored locally in `~/.config/loglense/config.json`
- Log Data: Never stored permanently; logs are only processed in memory
- Cache: Contains only AI analysis results, not raw logs
- Local Models: Full privacy with Ollama integration
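For orientation, a config file of this kind typically looks something like the sketch below. The field names are illustrative only, not LogLense's actual schema:

```json
{
  "default_model": "gpt-4o",
  "api_keys": {
    "OPENAI_API_KEY": "sk-..."
  }
}
```

Since the file holds API keys in plain text, keep its permissions restricted to your user.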
This project is licensed under the MIT License - see the LICENSE file for details.
- Typer - For the amazing CLI framework
- Rich - For beautiful terminal output
- LiteLLM - For unified AI provider interfaces
