> **Caution:** The project is currently under development.
Raven is a lightweight, self-hosted server monitoring and centralized logging tool. Drop a tiny agent on your Linux machines and get live system metrics, searchable logs, and automated alerts — all from one clean dashboard, without the expensive SaaS fees.
When you run multiple servers, SSHing into each one to check logs or run htop during an outage takes too much time. You need to know immediately if a server runs out of memory or an application crashes. There's no single place to see what's happening across all your machines.
Raven gives you a simple, self-hostable alternative to heavy enterprise monitoring tools. You install a tiny Rust agent on each machine you want to monitor. The agent quietly collects CPU, memory, disk, and network stats, and tails your application log files. It streams everything back to a central server in batches, where you get:
- Live dashboard: Real-time charts for CPU, memory, disk I/O, network, and load average across all your servers
- Centralized logs: Search across all your application logs in one place with full-text search, filter by host, app, time range, or stream (stdout/stderr)
- Live log tailing: Watch logs in real time from your browser, with color-coded stderr and pause/resume
- Automated alerts: Set threshold rules (e.g. CPU > 90% for 5 minutes) and get notified via Discord, Slack, or email
- Multi-host support: One central server, unlimited agents. Each agent connects outbound - no ports to open on monitored servers
- Docker & PM2 logs: Tail Docker container logs and PM2-managed Node.js app logs with zero application code changes
- Offline resilience: Agents buffer data locally when the server is unreachable and replay it on reconnect
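The offline-resilience behavior described above can be sketched roughly as follows. This is a minimal illustration only; the `Agent` and `MetricBatch` types, field names, and in-memory buffer are assumptions for the sketch, not Raven's actual API:

```rust
// Hypothetical sketch of agent-side offline buffering: metric batches are
// queued locally while the server is unreachable and replayed in arrival
// order on reconnect. Names here are illustrative, not Raven's real types.
use std::collections::VecDeque;

#[derive(Debug, Clone)]
struct MetricBatch {
    timestamp: u64,
    cpu_pct: f32,
}

struct Agent {
    buffer: VecDeque<MetricBatch>, // local buffer used while offline
    sent: Vec<MetricBatch>,        // stands in for the gRPC stream
    connected: bool,
}

impl Agent {
    fn new() -> Self {
        Agent { buffer: VecDeque::new(), sent: Vec::new(), connected: false }
    }

    // Called every collection interval: send immediately if connected,
    // otherwise buffer the batch for later replay.
    fn push(&mut self, batch: MetricBatch) {
        if self.connected {
            self.sent.push(batch);
        } else {
            self.buffer.push_back(batch);
        }
    }

    // On reconnect, drain the buffer oldest-first before resuming live sends.
    fn reconnect(&mut self) {
        self.connected = true;
        while let Some(b) = self.buffer.pop_front() {
            self.sent.push(b);
        }
    }
}

fn main() {
    let mut agent = Agent::new();
    agent.push(MetricBatch { timestamp: 10, cpu_pct: 42.0 }); // offline: buffered
    agent.push(MetricBatch { timestamp: 20, cpu_pct: 55.0 }); // offline: buffered
    agent.reconnect();                                        // replays both
    agent.push(MetricBatch { timestamp: 30, cpu_pct: 48.0 }); // live send
    assert_eq!(agent.sent.len(), 3);
    assert_eq!(agent.sent[0].timestamp, 10); // oldest batch replayed first
    println!("sent {} batches in order", agent.sent.len());
}
```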
1. Deploy the central server:

   ```
   todo!()
   ```

2. Install the agent on any server you want to monitor:

   ```
   todo!()
   ```

Metrics start flowing within 10 seconds. Add log file paths to `/etc/raven/agent.toml` and restart the agent to start collecting logs.
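As an illustration, a log-collection entry in `/etc/raven/agent.toml` might look like the following. The key names here are hypothetical and may not match Raven's actual config schema:

```toml
# Hypothetical example -- key names are illustrative, check the real schema.
[[logs]]
app = "my-api"                    # label shown in the dashboard and log search
path = "/var/log/my-api/app.log"  # file for the agent to tail
```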
- The agent reads system metrics from `/proc` every 10 seconds and watches log files for new lines using kernel-level notifications
- Data is batched and streamed to the central server over gRPC with TLS encryption
- The server stores metrics in VictoriaMetrics and logs in ClickHouse
- The dashboard queries the server's API for charts, log search, and live tailing via WebSocket
- An alert engine evaluates threshold rules every 30 seconds and sends notifications on state changes
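The threshold-and-duration evaluation described above can be sketched as a small state machine that notifies only on state changes. This is a minimal sketch under stated assumptions (the `Rule` type and tick-based timing are illustrative, not Raven's implementation):

```rust
// Hypothetical sketch of threshold alerting: a rule fires only after the
// metric stays above the threshold for the configured number of evaluation
// ticks, and a notification is emitted only when the state changes
// (Ok -> Firing, Firing -> Ok), never on every evaluation.

#[derive(Debug, Clone, Copy, PartialEq)]
enum AlertState {
    Ok,
    Firing,
}

struct Rule {
    threshold: f32,      // e.g. CPU > 90.0
    for_ticks: u32,      // ticks the condition must hold (e.g. 10 x 30s = 5 min)
    breached_ticks: u32, // consecutive ticks above threshold so far
    state: AlertState,
}

impl Rule {
    fn new(threshold: f32, for_ticks: u32) -> Self {
        Rule { threshold, for_ticks, breached_ticks: 0, state: AlertState::Ok }
    }

    // Evaluate one tick; returns Some(new_state) only on a state change,
    // which is exactly when a notification would be sent.
    fn evaluate(&mut self, value: f32) -> Option<AlertState> {
        if value > self.threshold {
            self.breached_ticks += 1;
        } else {
            self.breached_ticks = 0;
        }
        let next = if self.breached_ticks >= self.for_ticks {
            AlertState::Firing
        } else {
            AlertState::Ok
        };
        if next != self.state {
            self.state = next;
            Some(next)
        } else {
            None
        }
    }
}

fn main() {
    // CPU > 90% must be sustained for 3 ticks before firing.
    let mut rule = Rule::new(90.0, 3);
    assert_eq!(rule.evaluate(95.0), None);                     // 1 tick: not yet
    assert_eq!(rule.evaluate(96.0), None);                     // 2 ticks: not yet
    assert_eq!(rule.evaluate(97.0), Some(AlertState::Firing)); // 3 ticks: notify once
    assert_eq!(rule.evaluate(98.0), None);                     // still firing: no repeat
    assert_eq!(rule.evaluate(50.0), Some(AlertState::Ok));     // recovered: notify
    println!("alert state machine ok");
}
```

Emitting notifications only on transitions is what prevents a sustained breach from spamming Discord, Slack, or email on every 30-second evaluation.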
See the technical documentation for the architecture overview, tech stack, component design, setup flow, security model, and deployment strategy.
See the implementation specification for detailed design, implementation phases, user stories, testing strategy, and key decisions.
The project is published under the MIT license.
