This is an n8n workflow that automates daily server infrastructure reporting. It aggregates logs from Docker containers and system files, uses AI (Ollama) to analyze them for errors, and emails a formatted HTML report.
- LogAI API (Python): Custom fetcher for Docker and system logs. A custom FastAPI wrapper that reads Docker logs via the Docker socket and system logs via a file mount. It also acts as an SMTP proxy to simplify TLS handling for n8n (a sketch of such a wrapper follows this list).
- n8n: Workflow automation engine. Orchestrates the flow: Fetch Logs -> Embed -> Check History -> Analyze -> Email Report.
- Ollama: Local LLM inference, running the `qwen2.5:14b` model to analyze logs "intelligently" rather than just keyword searching.
- Qdrant: Vector database for storing historical log error patterns. Stores embeddings of past errors, which the AI checks to tell you whether an error is "Recurring" or "New".
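The repository's actual API code is not reproduced here, but a minimal sketch of what such a wrapper might look like is shown below. The endpoint paths, the `/send` proxy route, and the placeholder sender address are illustrative assumptions based on the description above, not the real LogAI code; only the `MAILCOW_IP`/`MAILCOW_PORT` variables come from the setup steps later in this README.

```python
# Hypothetical sketch of a log-fetching + SMTP-proxy API (not the actual LogAI code).
# Assumes the container mounts /var/run/docker.sock and the host's /var/log.
import os
import smtplib
from email.message import EmailMessage

import docker                      # Docker SDK for Python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
client = docker.from_env()         # talks to the mounted Docker socket

@app.get("/logs/docker/{name}")
def docker_logs(name: str, tail: int = 500):
    """Return the last `tail` lines of a container's logs."""
    container = client.containers.get(name)
    return {"name": name, "logs": container.logs(tail=tail).decode(errors="replace")}

@app.get("/logs/system")
def system_logs(tail: int = 500):
    """Return the last `tail` lines of the mounted syslog file."""
    with open("/var/log/syslog", errors="replace") as f:
        return {"logs": f.readlines()[-tail:]}

class Mail(BaseModel):
    to: str
    subject: str
    html: str

@app.post("/send")
def send_mail(mail: Mail):
    """SMTP proxy: n8n posts the report here; TLS is handled by smtplib."""
    msg = EmailMessage()
    msg["To"] = mail.to
    msg["Subject"] = mail.subject
    msg["From"] = "logai@example.com"   # placeholder sender, adjust as needed
    msg.set_content(mail.html, subtype="html")
    with smtplib.SMTP_SSL(os.environ["MAILCOW_IP"], int(os.environ["MAILCOW_PORT"])) as s:
        s.send_message(msg)
    return {"status": "sent"}
```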
- Hybrid Collection: Fetches logs from Docker containers and system files (`/var/log/syslog`).
- AI Analysis: Uses a local LLM (Ollama) together with Qdrant to identify root causes and filter noise.
- Vector Memory: Checks Qdrant to see if an error has happened before (see the sketch after this list).
- Smart Reporting: Generates a color-coded HTML dashboard sent via email.
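To make the vector-memory step concrete, here is a hedged sketch of how an error line could be embedded with `nomic-embed-text` and checked against Qdrant. The collection name `log_errors` and the similarity threshold are assumptions for illustration; in the actual project, these calls happen inside n8n nodes rather than standalone Python.

```python
# Illustrative only: the real workflow performs these calls via n8n nodes.
# Assumed: a Qdrant collection named "log_errors" holding past error embeddings.
import requests

OLLAMA = "http://ollama:11434"
QDRANT = "http://qdrant:6333"

def is_recurring(error_line: str, threshold: float = 0.85) -> bool:
    # 1. Embed the error text with nomic-embed-text.
    emb = requests.post(
        f"{OLLAMA}/api/embeddings",
        json={"model": "nomic-embed-text", "prompt": error_line},
    ).json()["embedding"]

    # 2. Ask Qdrant for the nearest stored error pattern.
    hits = requests.post(
        f"{QDRANT}/collections/log_errors/points/search",
        json={"vector": emb, "limit": 1},
    ).json()["result"]

    # 3. Above the (assumed) similarity threshold => "Recurring", else "New".
    return bool(hits) and hits[0]["score"] >= threshold

print("Recurring" if is_recurring("oom-killer: killed process 1234") else "New")
```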
- Docker & Docker Compose installed.
- n8n (self-hosted).
- Ollama running `qwen2.5:14b` and `nomic-embed-text`.
- Qdrant (vector database).
- SSH access to the target server (for system logs).
- A server with enough resources to run an LLM (16 GB+ RAM recommended for `qwen2.5:14b`).
- SMTP credentials (or a local Mailcow instance).
- Clone the repository:

  ```bash
  git clone https://github.com/Gabsthejerk/LogAI.git
  cd LogAI
  ```

- Configure the Environment: Create a `.env` file in the root directory to define your mail server settings:

  ```bash
  # Create .env from example or scratch
  echo "MAILCOW_IP=192.168.1.100" >> .env
  echo "MAILCOW_PORT=465" >> .env
  ```

  Replace the IP and port with your actual SMTP server details.
- Start the Stack:

  ```bash
  docker-compose up -d --build
  ```

- Download LLM Models: Once the containers are running, pull the necessary models into Ollama (a quick sanity check follows these steps):

  ```bash
  docker exec -it ollama ollama pull qwen2.5:14b
  docker exec -it ollama ollama pull nomic-embed-text
  ```
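Optionally, you can verify from the host that both services are up and the models are present before wiring up n8n. This sketch assumes the compose file publishes Ollama's and Qdrant's default ports (11434 and 6333) to localhost:

```python
# Quick sanity check (assumes the default ports are published to the host).
import requests

models = [m["name"] for m in requests.get("http://localhost:11434/api/tags").json()["models"]]
print("Ollama models:", models)
assert any(n.startswith("qwen2.5:14b") for n in models)
assert any(n.startswith("nomic-embed-text") for n in models)

# Qdrant should answer on its REST port as well.
print("Qdrant collections:", requests.get("http://localhost:6333/collections").json())
```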
- Open n8n at `http://localhost:5678`.
- Create a new workflow.
- Click the three dots (top right) -> Import from File.
- Select `workflow/LogAI.json` from this repository.
- Configure Credentials: you will need to create credentials in n8n for:
  - Ollama: Base URL `http://ollama:11434`
  - Qdrant: Base URL `http://qdrant:6333`
  - SSH: access to your host (user/pass or key) to read system logs.
  - SMTP: your email sending credentials.
To prevent LogAI from analyzing certain containers (like itself), edit `api/log_analyzer.py`:

```python
# api/log_analyzer.py
IGNORE_CONTAINERS = ["LogAI", "ollama", "qdrant", "your-other-container"]
```
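For illustration, such an ignore list would typically be applied while enumerating containers, along these lines (a sketch, not the repository's exact logic):

```python
# Sketch of how the ignore list might be applied (not the exact LogAI code).
import docker

IGNORE_CONTAINERS = ["LogAI", "ollama", "qdrant", "your-other-container"]

client = docker.from_env()
for container in client.containers.list():
    if container.name in IGNORE_CONTAINERS:
        continue                      # skip the stack's own services
    print(f"Analyzing {container.name}...")
```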