A lightweight system monitoring tool written in Rust.
Ferrview is a modular system monitoring utility with a client-server architecture:
- ferrview-node: Agent that runs on each monitored machine, collecting system metrics via configurable probes
- ferrview-collector: Central server that receives metrics from nodes, stores time-series data, and provides a web dashboard
The node agent uses the sysinfo crate to gather comprehensive system data and sends it to the collector via HTTP.
Ferrview includes a web dashboard for visualizing metrics from all monitored nodes.
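The node-to-collector exchange can be sketched with nothing but the standard library. This is illustrative only: the `/api/metrics` path and the JSON payload shape are assumptions for the example, not the agent's actual wire format (which lives in ferrview-common).

```rust
use std::io::Write;
use std::net::TcpStream;

/// Build a raw HTTP POST request carrying a JSON metrics payload.
/// The /api/metrics path is a hypothetical endpoint for illustration.
fn build_metrics_request(host: &str, body: &str) -> String {
    format!(
        "POST /api/metrics HTTP/1.1\r\nHost: {host}\r\nContent-Type: application/json\r\nContent-Length: {}\r\n\r\n{body}",
        body.len()
    )
}

fn main() -> std::io::Result<()> {
    // Hypothetical payload; the real schema is defined by the agent's probes.
    let body = r#"{"node_id":"node-1","cpu_cores":8}"#;
    let request = build_metrics_request("collector.local:8080", body);
    // The real agent would use an HTTP client crate; a raw socket shows the idea.
    if let Ok(mut stream) = TcpStream::connect("collector.local:8080") {
        stream.write_all(request.as_bytes())?;
    }
    Ok(())
}
```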
- CPU Monitoring: Core count, frequency, and individual core information
- Memory Analysis: RAM usage, swap usage, and percentage calculations
- Temperature Sensing: Hardware temperature readings with critical thresholds
- Disk Monitoring: Disk usage and capacity per mount point
- Network Monitoring: Interface traffic statistics (bytes sent/received)
- Process Monitoring: Fork rate tracking via /proc/stat (Linux)
- Configurable Probes: Enable/disable specific monitoring modules via TOML configuration
- Structured Logging: JSON-formatted output with timestamps for easy parsing
- Lightweight: Minimal dependencies and optimized binary size
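The fork-rate probe listed above can be sketched with the standard library alone: `/proc/stat` exposes a cumulative `processes` counter (forks since boot, per proc(5)), and sampling it twice yields a rate. This is a sketch of the technique, not the agent's actual implementation.

```rust
use std::fs;

/// Extract the cumulative fork count from /proc/stat contents.
/// The `processes` line counts processes/threads created since boot.
fn parse_forks(stat: &str) -> Option<u64> {
    stat.lines()
        .find(|line| line.starts_with("processes "))
        .and_then(|line| line.split_whitespace().nth(1))
        .and_then(|value| value.parse().ok())
}

fn main() {
    // Linux only: read the live counter; the delta between two reads
    // divided by the sampling interval gives the fork rate.
    if let Ok(stat) = fs::read_to_string("/proc/stat") {
        if let Some(forks) = parse_forks(&stat) {
            println!("forks since boot: {forks}");
        }
    }
}
```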
Rust toolchain (stable, with edition 2024 support)

- Cargo package manager
# Clone the repository
git clone <repository-url>
cd ferrview
# Build the project
cargo build --release
# Binaries will be available at:
# - target/release/ferrview-node
# - target/release/ferrview-collector

The node agent uses ferrview-node.toml to configure probes and the collector address:
node_id = "uuid-or-similar"           # unique node identifier (string)
metrics_collector_addr = "host:port"  # hostname or IP address, plus port
[probes.sysinfo]
static_info = true # System static information
cpu = true # CPU core information
memory = true # RAM and swap usage
disk = true # Disk information
network = true # Network interface data
temperature = true # Hardware temperature sensors
[probes.procfs]
forks = true # Process creation monitoring (Linux only)

The collector is configured via command-line arguments:
ferrview-collector -l 0.0.0.0 -p 8080 -d /path/to/data/

- -l: Listen address (default: 0.0.0.0)
- -p: Port (default: 8080)
- -d: Data directory for SQLite databases
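The flag handling can be sketched with std::env alone; this is a simplified stand-in (the real collector may use a CLI-parsing crate, and the defaults below are the documented ones, with the data directory left optional):

```rust
use std::env;

/// Parse -l/-p/-d flags with the documented defaults.
/// Illustrative only; the actual collector's parser may differ.
fn parse_args(args: &[String]) -> (String, u16, Option<String>) {
    let mut listen = "0.0.0.0".to_string();
    let mut port = 8080u16;
    let mut data_dir = None;
    let mut it = args.iter();
    while let Some(arg) = it.next() {
        match arg.as_str() {
            "-l" => listen = it.next().cloned().unwrap_or(listen),
            "-p" => port = it.next().and_then(|p| p.parse().ok()).unwrap_or(port),
            "-d" => data_dir = it.next().cloned(),
            _ => {} // ignore unknown flags in this sketch
        }
    }
    (listen, port, data_dir)
}

fn main() {
    let args: Vec<String> = env::args().skip(1).collect();
    let (listen, port, data_dir) = parse_args(&args);
    println!("listen={listen} port={port} data_dir={data_dir:?}");
}
```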
Start the collector server first (typically on a central machine):
ferrview-collector -l 0.0.0.0 -p 8080 -d ./data/
# Or using cargo
cargo run -p ferrview-collector -- -l 0.0.0.0 -p 8080 -d ./data/

The web dashboard will be available at http://localhost:8080.
Run the node agent on each machine you want to monitor:
ferrview-node --config-file /path/to/ferrview-node.toml
# Or using cargo
cargo run -p ferrview-node -- --config-file ferrview-node/ferrview-node.toml

The node agent outputs structured logs with UTC timestamps:
2024-01-01T12:00:00Z INFO Starting ferrview-node
2024-01-01T12:00:00Z INFO Starting CPU probe
2024-01-01T12:00:00Z INFO Detected 8 CPU cores
2024-01-01T12:00:00Z INFO Memory information total_memory_bytes=17179869184 used_memory_bytes=8589934592 memory_usage_percent="50.0"
ferrview/
├── ferrview-common/ # Shared library and data structures
│ └── src/lib.rs
├── ferrview-node/ # Node agent
│ ├── src/
│ │ ├── main.rs # Main entry point
│ │ ├── client/ # HTTP client for sending metrics
│ │ ├── probes/ # Monitoring probes
│ │ │ ├── sysinfo/ # System information probes
│ │ │ │ ├── cpu.rs # CPU monitoring
│ │ │ │ ├── mem.rs # Memory monitoring
│ │ │ │ ├── disk.rs # Disk monitoring
│ │ │ │ ├── network.rs # Network monitoring
│ │ │ │ ├── temp.rs # Temperature monitoring
│ │ │ │ └── statik.rs # Static system info
│ │ │ └── procfs/ # Linux /proc filesystem probes
│ │ │ └── forks.rs # Process fork monitoring
│ │ ├── config.rs # Configuration loading
│ │ └── utils/ # Utility functions
│ └── ferrview-node.toml # Example configuration
├── ferrview-collector/ # Collector server
│ ├── src/
│ │ ├── main.rs # Main entry point
│ │ ├── http/ # HTTP server and API
│ │ ├── store/ # Data storage
│ │ └── charts/ # Chart rendering
│ └── ferrview-collector.toml # Example configuration
├── Cargo.toml # Workspace configuration
└── README.md # Project documentation
- sysinfo: System information gathering
- serde: Configuration serialization/deserialization
- toml: Configuration file parsing
- tracing: Structured logging framework
- time: Timestamp formatting
This project uses mise for task management.
mise tasks --all
build-rust-dev [DEV] cargo build
build-rust-release [RELEASE] cargo build --release
check Cargo check
lint Lint with Clippy, failing on warnings
mise run build-rust-dev
mise run build-rust-release
mise run check
mise run lint
cargo test
cargo fmt

The release build is optimized for minimal binary size with:
- Thin LTO (Link Time Optimization)
- Panic abort for smaller binaries
- Symbol stripping with separate debug info
- Single codegen unit for whole-crate optimization
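These settings correspond to a release profile along the following lines in the workspace Cargo.toml (a sketch matching the bullets above; the exact values live in the repository):

```toml
[profile.release]
lto = "thin"                # thin Link Time Optimization
panic = "abort"             # no unwinding machinery in the binary
strip = "symbols"           # strip symbols from the binary
split-debuginfo = "packed"  # keep debug info separate from the binary
codegen-units = 1           # single codegen unit
```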
- Create a new module in ferrview-node/src/probes/
- Implement probe functions following the existing pattern
- Add configuration options in ferrview-node/src/config.rs
- Register the probe in the main execution flow
use sysinfo::System;
use tracing::info;

pub fn probe_example(sys: &System) {
    info!("Starting example probe");
    // Probe implementation
}

MIT
Contributions are welcome! Please feel free to submit pull requests or open issues for:
- New probe implementations
- Configuration enhancements
- Performance improvements
- Documentation updates
- Disk usage monitoring ✓
- Network interface statistics ✓
- Process monitoring (forks) ✓
- GPU information (where available)
- Battery status (for laptops)
- Collector HTTP API ✓
- Storing metrics as time series ✓
- Basic web dashboard with charting ✓
- Advanced query interface
- Data aggregation and rollups
- Alerting and notifications
- Cloud backup functionality
For issues, questions, or feature requests, please open an issue on the project repository.

