greenblade29/LogLense

πŸ” LogLense

AI-Powered Log Analysis for the Command Line

Python 3.8+ License: MIT PyPI version Downloads

Transform cryptic log files into actionable insights with the power of AI

πŸš€ Quick Start β€’ πŸ“– Documentation β€’ πŸ€– Models β€’ πŸ› οΈ Contributing



✨ Features

  • AI-powered error analysis - Get root-cause analysis and actionable insights on the fly
  • Multiple AI providers - OpenAI, Anthropic, Google, local models via Ollama
  • Smart caching - Avoid redundant API calls for similar logs
  • Works with any log source - Pipes, files, Docker, Kubernetes, journalctl
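The smart-caching idea can be sketched roughly like this (a minimal illustration, not LogLense's actual implementation; `analyze_fn` stands in for the real AI call and the cache path is made up):

```python
import hashlib
import json
from pathlib import Path

# Illustrative cache location; LogLense's real path may differ.
CACHE_DIR = Path.home() / ".cache" / "loglense-demo"

def cache_key(log_text: str, model: str) -> str:
    # Hash the log content together with the model name, so the same
    # log analyzed with a different model gets its own cache entry.
    return hashlib.sha256(f"{model}\n{log_text}".encode()).hexdigest()

def analyze_cached(log_text: str, model: str, analyze_fn) -> str:
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    entry = CACHE_DIR / f"{cache_key(log_text, model)}.json"
    if entry.exists():                                    # cache hit
        return json.loads(entry.read_text())["result"]
    result = analyze_fn(log_text)                         # cache miss: call the AI
    entry.write_text(json.dumps({"result": result}))
    return result
```

Passing `--no-cache` to `loglense analyze` corresponds to skipping the `entry.exists()` check in this sketch.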

πŸš€ Quick Start

Installation

pip install loglense

Basic Usage

# Analyze logs from any source
tail -f /var/log/app.log | loglense analyze

# Use with Docker logs
docker logs container_name | loglense analyze

# Analyze with specific model
kubectl logs pod-name | loglense analyze --model sonnet-4

# Use local models
journalctl -u nginx | loglense analyze --model ollama-llama3

First Time Setup

# Interactive configuration
loglense configure

# View available models
loglense list-available-models

# Check your configuration
loglense show-config

πŸ“– Usage

Core Commands

loglense analyze

The main command for log analysis. Reads from stdin and provides AI-powered insights.

# Basic analysis
cat error.log | loglense analyze

# Specify model
tail -100 app.log | loglense analyze --model gpt-4o

# Use custom API endpoint
cat debug.log | loglense analyze --api-base http://localhost:8000/v1 --model custom-model

# Skip cache
docker logs app | loglense analyze --no-cache

Options:

  • --model, -m: Choose specific model (see supported models)
  • --api-base: Custom API endpoint for OpenAI-compatible services
  • --no-cache: Bypass cache for fresh analysis

loglense configure

Interactive setup for default model and API keys.

loglense configure
πŸ“Έ Configuration Preview
πŸ”§ LogLense Configuration

Select Default Model

#    Provider     Model           Description
1    OpenAI       gpt-4o          GPT-4 Optimized - Latest multimodal model
2    OpenAI       gpt-4o-mini     Smaller, faster GPT-4 for simple tasks
3    Anthropic    opus-4          Anthropic's state-of-the-art flagship model
4    Anthropic    sonnet-4        Balanced model with superior performance
...

Select default model [2]: 1
βœ… Default model set to: gpt-4o

API Key Configuration
This model requires: OPENAI_API_KEY

Enter your OPENAI_API_KEY: β€’β€’β€’β€’β€’β€’β€’β€’β€’β€’β€’β€’β€’β€’β€’β€’
βœ… API key saved for OPENAI_API_KEY

Cache Management

# View cache location and size
loglense cache path
loglense cache size

# Clear cache
loglense cache clear

πŸ€– Supported Models

| Provider  | Model Alias   | Full ID                       | Best For                                   |
|-----------|---------------|-------------------------------|--------------------------------------------|
| OpenAI    | gpt-4o        | openai/gpt-4o                 | Complex log analysis, multi-step reasoning |
| OpenAI    | gpt-4o-mini   | openai/gpt-4o-mini            | Fast analysis, simple error detection      |
| Anthropic | opus-4        | anthropic/claude-opus-4-0     | Flagship model, comprehensive analysis     |
| Anthropic | sonnet-4      | anthropic/claude-sonnet-4-0   | Balanced performance and speed             |
| Anthropic | sonnet-3.7    | anthropic/claude-3-7-sonnet   | Advanced log pattern recognition           |
| Google    | gemini-flash  | gemini/gemini-2.0-flash       | Ultra-fast processing                      |
| Google    | gemini-pro    | gemini/gemini-2.5-pro         | Advanced reasoning capabilities            |
| Local     | ollama-llama3 | ollama/llama3                 | Privacy-focused, offline analysis          |
| Local     | ollama-gemma  | ollama/gemma                  | Lightweight local processing               |
| Other     | mistral-large | mistral/mistral-large-latest  | European AI, GDPR compliant                |
| Other     | deepseek-chat | deepseek/deepseek-chat        | Cost-effective analysis                    |
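Each alias in the table maps to a provider-qualified full ID. A rough sketch of that lookup (hypothetical names and a subset of the aliases, not LogLense's actual code):

```python
# Hypothetical alias table (a subset of the aliases listed above).
MODEL_ALIASES = {
    "gpt-4o": "openai/gpt-4o",
    "sonnet-4": "anthropic/claude-sonnet-4-0",
    "gemini-pro": "gemini/gemini-2.5-pro",
    "ollama-llama3": "ollama/llama3",
}

def resolve_model(alias: str) -> tuple[str, str]:
    """Map an alias to (provider, model), passing full IDs through unchanged."""
    full_id = MODEL_ALIASES.get(alias, alias)
    provider, _, model = full_id.partition("/")
    return provider, model
```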

API Key Requirements

| Provider  | Environment Variable | How to Get             |
|-----------|----------------------|------------------------|
| OpenAI    | OPENAI_API_KEY       | platform.openai.com    |
| Anthropic | ANTHROPIC_API_KEY    | console.anthropic.com  |
| Google    | GEMINI_API_KEY       | aistudio.google.com    |
| Mistral   | MISTRAL_API_KEY      | console.mistral.ai     |
| DeepSeek  | DEEPSEEK_API_KEY     | platform.deepseek.com  |
| Ollama    | None                 | Install locally        |
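Before running an analysis you can check which keys are exported. This small helper mirrors the table above (the environment variable names come from the table; the helper itself is illustrative, not part of LogLense):

```python
import os

# Environment variables each provider expects (Ollama needs none).
PROVIDER_KEYS = {
    "OpenAI": "OPENAI_API_KEY",
    "Anthropic": "ANTHROPIC_API_KEY",
    "Google": "GEMINI_API_KEY",
    "Mistral": "MISTRAL_API_KEY",
    "DeepSeek": "DEEPSEEK_API_KEY",
}

def configured_providers() -> list[str]:
    """Return the providers whose API key is set in the environment."""
    return [name for name, var in PROVIDER_KEYS.items() if os.environ.get(var)]
```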

🎯 Real-World Examples

Web Server Error Analysis

tail -1000 /var/log/nginx/error.log | loglense analyze
πŸ“Š Sample Output
πŸ€– LogLense AI Analysis

## Likely Root Cause:
Database connection pool exhaustion causing cascading failures in the web application.

## Sequence of Events:
1. **14:23:15** - Initial database timeout errors begin
2. **14:23:45** - Connection pool reaches maximum capacity (500 connections)
3. **14:24:00** - Application starts rejecting new requests
4. **14:24:30** - Load balancer begins failing health checks

## Recommended Next Steps:
1. **Immediate**: Restart the database connection pool service
2. **Short-term**: Increase connection pool size from 500 to 750
3. **Long-term**: Implement connection pooling monitoring and alerts

Kubernetes Pod Debugging

kubectl logs deployment/api-server --tail=500 | loglense analyze --model sonnet-4

Docker Container Analysis

docker logs --since=1h webapp-container | loglense analyze --model gemini-pro

Application Performance Issues

journalctl -u myapp --since="1 hour ago" | loglense analyze --model gpt-4o

πŸ—οΈ Architecture

graph TD
    A[Log Stream] --> B[Stream Parser]
    B --> C[Error Detection]
    B --> D[Content Hashing]
    C --> E[Cache Check]
    D --> E
    E --> F{Cache Hit?}
    F -->|Yes| G[Return Cached Result]
    F -->|No| H[AI Provider Selection]
    H --> I[OpenAI]
    H --> J[Anthropic]
    H --> K[Google Gemini]
    H --> L[Local/Ollama]
    I --> M[AI Analysis]
    J --> M
    K --> M
    L --> M
    M --> N[Cache Result]
    N --> O[Rich Output]
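The diagram reads as a straightforward pipeline. A toy end-to-end walk-through (all names hypothetical, with the hash function and provider call injected; not LogLense's internal API):

```python
def run_pipeline(lines, model, cache, content_hash, providers):
    """Toy version of the flow above; not LogLense's internal API."""
    text = "".join(lines)                  # Stream Parser
    key = content_hash(text, model)        # Content Hashing
    if key in cache:                       # Cache Check -> hit
        return cache[key]
    provider = model.split("/", 1)[0]      # AI Provider Selection
    result = providers[provider](text)     # AI Analysis
    cache[key] = result                    # Cache Result
    return result                          # Rich Output
```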

πŸ› οΈ Contributing

We love contributions! Here's how you can help make LogLense even better:

πŸ› Bug Reports & Feature Requests

Found a bug or have an idea? Open an issue with:

  • Clear description of the problem/feature
  • Steps to reproduce (for bugs)
  • Your environment details
  • Sample log files (anonymized)

πŸ”§ Development Setup

# Clone the repository
git clone https://github.com/yourusername/loglense.git
cd loglense

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install in development mode
pip install -e .

# Install development dependencies
pip install pytest black flake8 mypy

# Run tests
pytest

# Format code
black loglense_package/

🎯 Areas We Need Help With

  • Testing Framework: Add comprehensive test suite for reliability
  • New AI Providers: Add support for additional AI services
  • Documentation: More real-world examples and use cases
  • Error Handling: Improve network timeout and rate limit handling
  • Configuration: Enhanced config validation and migration

πŸ“ Pull Request Guidelines

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Write tests for your changes
  4. Ensure your code follows Python best practices
  5. Commit your changes (git commit -m 'Add amazing feature')
  6. Push to the branch (git push origin feature/amazing-feature)
  7. Open a Pull Request

πŸ”’ Security & Privacy

  • API Keys: Stored locally in ~/.config/loglense/config.json
  • Log Data: Never stored permanently, only processed in memory
  • Cache: Contains only AI analysis results, not raw logs
  • Local Models: Full privacy with Ollama integration

πŸ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.


πŸ™ Acknowledgments

  • Typer - For the amazing CLI framework
  • Rich - For beautiful terminal output
  • LiteLLM - For unified AI provider interfaces

🌟 Star History

Star History Chart

Purr-fect for debugging - because every log has nine lives 🐱

⬆ Back to Top