LogChecker is a multi-agent AI system designed to automate Tier-1 SOC (Security Operations Center) analysis. It autonomously parses raw server logs, identifies attack patterns, and applies threat-intelligence reasoning to generate actionable incident reports.
- Multi-Agent Architecture: Uses specialized AI agents ("Analyst" and "Researcher") that collaborate to solve complex security tasks.
- Automated Forensics: Detects brute-force attacks, unauthorized root access, and suspicious command execution.
- Context-Aware Reasoning: Distinguishes between internal (lateral movement) and external threats based on IP ranges.
- Local LLM Support: Optimized to run on private infrastructure (school/enterprise servers) via LiteLLM and Mistral.
The system utilizes the CrewAI framework to orchestrate the following workflow:
- Ingestion: Raw logs (SSH, Auth) are fed into the system.
- Agent 1: The Log Analyst:
- Role: Pattern recognition and anomaly detection.
- Output: A technical summary of the breach (timestamps, IPs, methods).
- Agent 2: The Threat Researcher:
- Role: Contextualization and severity assessment.
- Action: Takes the Analyst's findings, evaluates the IP reputation (Internal vs. External), and maps behaviors to potential intent.
- Reporting: Generates a comprehensive "Defense Brief" with remediation steps.
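The handoff above can be illustrated with a framework-free sketch. In LogChecker these stages are LLM-backed CrewAI agents; the function names, fields, and severity rule below are illustrative assumptions, not the project's actual implementation.

```python
from ipaddress import ip_address

def analyst_summary(events):
    """Agent 1 (Log Analyst): reduce raw events to a technical summary."""
    return {
        "source_ips": sorted({e["ip"] for e in events}),
        "methods": sorted({e["method"] for e in events}),
        "first_seen": min(e["timestamp"] for e in events),
    }

def researcher_brief(summary):
    """Agent 2 (Threat Researcher): contextualize the summary into a Defense Brief."""
    external = [ip for ip in summary["source_ips"] if not ip_address(ip).is_private]
    return {
        # External origin escalates severity; internal-only implies lateral movement.
        "severity": "high" if external else "medium",
        "assessment": "external intrusion attempt" if external else "possible lateral movement",
        "remediation": ["block offending IPs", "rotate affected credentials"],
        **summary,
    }
```

A run over two internal-only SSH events would yield a medium-severity brief flagging possible lateral movement; the real Researcher agent performs this assessment via LLM reasoning rather than a fixed rule.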
- Language: Python 3.13
- Orchestration: CrewAI (Agents, Tasks, Process)
- LLM Interface: LiteLLM (OpenAI-compatible protocol)
- Model: Mistral-Small-3.2-24B-Instruct (Hosted Locally)
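A minimal `.env` sketch for pointing an OpenAI-compatible client such as LiteLLM at a locally hosted Mistral endpoint. The variable names, URL, and key are illustrative assumptions; adapt them to your deployment and the project's actual configuration.

```ini
# Illustrative values only -- adjust to your local endpoint.
OPENAI_API_BASE=http://localhost:8000/v1    # OpenAI-compatible server URL (assumed)
OPENAI_API_KEY=sk-local-placeholder         # many local servers accept any token
```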
LogChecker/
├── src/
│   ├── agents.py        # Agent Definitions (Analyst & Researcher)
│   ├── tasks.py         # Task Instructions & Expected Outputs
│   └── main.py          # Orchestration Entry Point
├── data/                # Log samples and datasets
├── .env                 # API Keys and Endpoint Configuration
└── requirements.txt     # Python dependencies