GnuJason/F1
Pit Wall Telemetry Dashboard

A Formula 1 pit-wall-inspired real-time system monitoring dashboard. FastAPI streams live host metrics over WebSockets every second, while a React + D3/Chart.js frontend renders tachometers, fuel gauges, and warning lights reminiscent of a race engineer's console.


📸 Screenshots

Live captures from the telemetry dashboard showing real-time system monitoring panels, gauges, and diagnostics.

Tachometer & Fuel Gauge

  • CPU Tachometer — overall CPU load rendered as an F1-style D3 arc gauge, with colour-coded zones from green through yellow to red and a needle indicating the current load percentage.
  • Per-core breakdown — individual core usage bars with a selected-core legend; hover any core to inspect its utilisation.
  • Fuel Gauge — doughnut chart showing memory and swap usage as percentages with numeric readouts.

Disk & Network I/O

  • Tyre Wear — Chart.js line graph plotting disk read vs write throughput over time.

Port Radar

  • Port Radar — open ports mapped onto an SVG track map with state filters, process names, and a colour-coded legend.


Quick Start

# Clone and launch with Docker
git clone <repo-url> && cd F1
docker compose up --build

# Open http://localhost:3000

The dashboard will connect to the backend at localhost:8000 and begin streaming real-time telemetry.


Installation Options

Windows

Download and run the installer from the Releases page:

F1PitWall-Setup.exe

The installer creates:

  • Desktop shortcut
  • Start menu entry
  • Automatic browser launch

Linux (Snap)

sudo snap install f1pitwall

Or download the .snap file and install manually:

sudo snap install f1pitwall_1.0_amd64.snap --dangerous

Linux (Flatpak)

flatpak install io.github.f1pitwall

Or from a downloaded .flatpak bundle:

flatpak install --user f1pitwall.flatpak

Docker (Cross-platform)

docker compose up -d
# Open http://localhost:3000

Building Packages

Build Snap

cd packaging/snap
snapcraft
# Output: f1pitwall_1.0_amd64.snap

Build Flatpak

cd packaging/flatpak
flatpak-builder build-dir io.github.f1pitwall.yaml --force-clean
flatpak-builder --repo=repo build-dir io.github.f1pitwall.yaml
flatpak build-bundle repo f1pitwall.flatpak io.github.f1pitwall

Build Windows Executable

cd packaging\windows
build.bat
:: Output: dist\F1PitWall\F1PitWall.exe
:: If NSIS installed: F1PitWall-Setup.exe

Requirements:

  • Python 3.11+ with pip
  • Node.js 20+ with npm
  • PyInstaller: pip install pyinstaller
  • (Optional) NSIS for installer creation

Features

Component Metrics Collected
CPU Tachometer Total + per-core usage, frequency, load average
Fuel Gauge Memory + swap usage percentage
Tyre Wear Chart Disk read/write throughput, per-mount usage
Aero Streams Chart Network TX/RX throughput, interface states
Track Map Open ports with process names (via ss -tulpn)
Cooling Blades Fan RPM + temperatures (via lm-sensors)
Race Control Logs Journal/syslog entries with severity filtering
System Info Panel Hostname, OS, kernel, uptime, platform details
Link Speed Panel Download/upload dials, sparklines, peak tracking
Live Network Intel Connected devices, traffic breakdown, alerts, timeline
ARP Cache Neighbor table grouped by interface
WiFi Diagnostics Radio check, signal strength, adapter details
Warning Strip Green/yellow/red indicators based on thresholds
Toast Alerts Port open/close events, sensor warnings

Technical Highlights

  • WebSocket Streaming: 1-second cadence with envelope protocol ({type, ts, payload})
  • Collector Isolation: Each metric collector runs with timeout protection; failures degrade gracefully
  • Event System: Port changes and threshold breaches emit discrete events for toast notifications
  • Responsive Layout: Resizable panels with localStorage persistence
  • Security Defaults: Backend binds to 127.0.0.1 only; optional token authentication
  • Accessibility: ARIA landmarks, role attributes, skip-navigation, :focus-visible outlines, prefers-reduced-motion support
  • Performance: All leaf components wrapped in React.memo to minimise re-renders
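The collector-isolation idea can be sketched in a few lines (the names here are hypothetical; the project's TelemetryManager has its own implementation): each collector coroutine runs under asyncio.wait_for, so a hung or broken collector is marked degraded while the snapshot still ships on time.

```python
import asyncio

# Sketch of collector isolation with hypothetical names: each collector
# runs under a timeout so one slow collector cannot stall the snapshot.
async def run_collector(name, coro, timeout=1.0):
    try:
        return name, "ok", await asyncio.wait_for(coro, timeout)
    except asyncio.TimeoutError:
        return name, "timeout", None
    except Exception as exc:
        return name, f"error: {exc}", None

async def build_snapshot():
    async def fast():
        return {"cpu_total_percent": 45.2}

    async def slow():
        await asyncio.sleep(5)  # simulates a hung collector
        return {}

    results = await asyncio.gather(
        run_collector("cpu_memory", fast()),
        run_collector("sensors", slow(), timeout=0.1),
    )
    # Failed collectors are reported as degraded; the snapshot still ships.
    return {name: status for name, status, _ in results}

print(asyncio.run(build_snapshot()))  # → {'cpu_memory': 'ok', 'sensors': 'timeout'}
```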

Prerequisites

Requirement Version Notes
Docker + Compose 20.10+ Recommended for quick start
Python 3.11+ For local backend development
Node.js 20+ For local frontend development
lm-sensors Required for temperature/fan data
iproute2 Required for ss port scanning

Network Monitoring Requirements

Live packet insights rely on Wireshark's tshark CLI and a capture driver (Npcap on Windows, libpcap on Linux/macOS). Installers do not bundle these tools by default.

  • Windows: Install Wireshark and keep the Npcap option checked. When redistributing, you may bundle the Npcap redistributable alongside the installer for a smoother out-of-box experience.
  • macOS / Linux: Install wireshark or tshark via your package manager (e.g., brew install wireshark, sudo apt install tshark). Ensure the binary is on the PATH used by the backend service.
  • Fallback behavior: If tshark/Npcap are missing, the dashboard still runs. The "Live Network Intel" cards remain inactive and explain how to enable captures instead of blocking the rest of the UI.

If you download the F1 Dashboard standalone build, install Wireshark (with tshark) to unlock the network monitoring panels. Without it, the rest of the telemetry continues to operate normally.

Configuring TShark Path

If tshark is not on your system PATH, set the PITWALL_TSHARK_PATH environment variable:

# Windows PowerShell
$env:PITWALL_TSHARK_PATH = "C:\Program Files\Wireshark\tshark.exe"

# Or in Command Prompt
set PITWALL_TSHARK_PATH=C:\Program Files\Wireshark\tshark.exe

# Linux / macOS
export PITWALL_TSHARK_PATH=/usr/bin/tshark
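The lookup the backend presumably performs can be sketched as follows (an assumption, not the project's actual code): an explicit PITWALL_TSHARK_PATH wins, otherwise whatever `tshark` is found on PATH is used.

```python
import os
import shutil

# Assumed resolution order: env var override first, then PATH lookup.
def resolve_tshark():
    override = os.environ.get("PITWALL_TSHARK_PATH")
    if override and os.path.isfile(override):
        return override
    return shutil.which("tshark")  # None when Wireshark/tshark is absent

path = resolve_tshark()
print(path or "tshark not found - network panels stay inactive")
```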

Selecting a Capture Interface

The collector auto-detects active network interfaces, preferring Wi-Fi and Ethernet adapters over virtual ones (e.g., WSL, VMware, VPN tunnels). To override:

$env:PITWALL_CAPTURE_INTERFACE = "Wi-Fi"   # Windows
export PITWALL_CAPTURE_INTERFACE=eth0       # Linux

Note: After setting environment variables, restart the backend for changes to take effect.
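The preference heuristic might look like this sketch (the VIRTUAL_HINTS list and the ordering are assumptions; the collector's real logic may differ): physical adapters are chosen before virtual ones, and the env override wins outright.

```python
import os

# Name prefixes that typically indicate virtual interfaces (assumed list).
VIRTUAL_HINTS = ("vEthernet", "vmnet", "docker", "veth", "tun", "tap", "lo")

def pick_interface(candidates):
    override = os.environ.get("PITWALL_CAPTURE_INTERFACE")
    if override:
        return override
    # Prefer interfaces that do not look virtual.
    physical = [n for n in candidates if not n.startswith(VIRTUAL_HINTS)]
    if physical:
        return physical[0]
    return candidates[0] if candidates else None

print(pick_interface(["lo", "docker0", "eth0"]))  # → eth0
```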

Whitelisting Known Devices

Place friendly device names in backend/config/known_devices.json to prevent repeated "Unknown device" alerts and to surface human-readable labels in the dashboard:

{
  "d2:24:08:dd:ca:47": "Jason-Z-Flip7",
  "e8:bf:b8:8d:86:91": "GNU-Galaxybook",
  "8e:f3:1c:8f:13:88": "Roxana-Tab-S7-FE",
  "68:72:c3:79:2c:20": "Samsung-TV",
  "fa:d7:c8:be:69:2c": "Samsung-Stove",
  "02:33:56:75:73:11": "Unknown-Device-1",
  "6e:d3:88:b3:d9:67": "Unknown-Device-2"
}
  • Keys are case-insensitive and can be MAC addresses, hostnames, or static IPs gathered from your router export.
  • Values are the display names that appear in the "Live Network Intel" cards.
  • The backend loads this file on startup; restart the backend after editing the JSON to apply new entries.
  • When a MAC (or hostname) is listed here, the NetworkInsights collector suppresses the "Unknown device detected" alert for that device while still tracking its activity, and the frontend highlights the friendly name while keeping the MAC in the tooltip for quick verification.
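The load-and-lookup behaviour described in the bullets above can be sketched like this (assumed logic inferred from the description; the MAC shown is illustrative):

```python
import json

# Example known_devices.json content with a deliberately upper-cased key.
RAW = '{"D2:24:08:DD:CA:47": "Jason-Z-Flip7"}'

def load_known_devices(text):
    # Keys are treated case-insensitively, so normalise them on load.
    return {key.lower(): name for key, name in json.loads(text).items()}

devices = load_known_devices(RAW)

def label_for(mac):
    # Fall back to the raw MAC when no friendly name is configured.
    return devices.get(mac.lower(), mac)

print(label_for("d2:24:08:dd:ca:47"))  # → Jason-Z-Flip7
```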

Running Locally (Development)

Backend

cd backend
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000

Frontend

cd frontend
npm install
npm run dev -- --host 0.0.0.0 --port 3000

Open http://localhost:3000 — the dashboard auto-connects to ws://localhost:8000/stream.


Configuration

Backend Environment Variables

Variable Default Description
PITWALL_HOST 127.0.0.1 Bind address
PITWALL_PORT 8000 API port
PITWALL_REFRESH_INTERVAL 1.0 WebSocket broadcast interval (seconds)
PITWALL_HISTORY_POINTS 1800 Snapshot history buffer (30 min at 1s)
PITWALL_MAX_LOG_LINES 40 Log entries per snapshot
PITWALL_SENSOR_WARNING_THRESHOLD 85.0 Temperature alert threshold (°C)
PITWALL_ALLOW_ORIGINS http://localhost:3000 CORS allowed origins
PITWALL_AUTH_TOKEN (unset) When set, requires bearer token for API access
PITWALL_WS_SEND_TIMEOUT 2.0 WebSocket send timeout (seconds)
PITWALL_CAPTURE_ENABLED 1 Toggle the live packet capture panels
PITWALL_CAPTURE_INTERFACE auto-detect Override the interface name passed to tshark
PITWALL_TSHARK_PATH tshark Absolute path to the tshark executable
PITWALL_CAPTURE_DURATION 1.5 Seconds to sample per capture burst
PITWALL_CAPTURE_MAX_PACKETS 200 Packet limit per tshark invocation
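A minimal sketch of typed env-var loading with the defaults from the table above (illustrative only; the backend's config.py defines the real settings object):

```python
import os

# Read an env var, falling back to a typed default when it is unset.
def env(name, default, cast=str):
    raw = os.environ.get(name)
    return default if raw is None else cast(raw)

settings = {
    "host": env("PITWALL_HOST", "127.0.0.1"),
    "port": env("PITWALL_PORT", 8000, int),
    "refresh_interval": env("PITWALL_REFRESH_INTERVAL", 1.0, float),
    "history_points": env("PITWALL_HISTORY_POINTS", 1800, int),
}
print(settings)
```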

Frontend Environment Variables

Variable Default Description
VITE_BACKEND_PORT 8000 Backend port for WebSocket connection
VITE_TELEMETRY_WS (unset) Full WebSocket URL (overrides auto-detection)

Docker Deployment

Build and Run

docker compose up --build -d

With Authentication

# Set auth token via environment
PITWALL_AUTH_TOKEN=supersecret docker compose up --build -d

Then connect with token in URL: ws://localhost:8000/stream?token=supersecret

Expose to Network

Edit docker-compose.yml to change backend binding:

ports:
  - "0.0.0.0:8000:8000"  # Instead of 127.0.0.1:8000:8000

⚠️ Security Warning: Always set PITWALL_AUTH_TOKEN when exposing beyond localhost.


API Reference

REST Endpoints

Endpoint Method Description
/healthz GET Health check (returns collector status)
/config GET Current configuration and thresholds
/metrics GET Single snapshot (requires token if set)
/stream WebSocket Live telemetry stream

WebSocket Protocol

The /stream endpoint sends JSON envelopes:

// Snapshot (every refresh interval)
{
  "type": "snapshot",
  "ts": "2025-12-02T18:30:00.123Z",
  "payload": {
    "cpu": { "cpu_total_percent": 45.2, "cpu_per_core_percent": [...], ... },
    "memory": { "mem_total_percent": 62.1, ... },
    "disk": { ... },
    "network": { ... },
    "sensors": { ... },
    "logs": [...],
    "ports": [...],
    "status": "ok",
    "collectors": { "cpu_memory": "ok", "disk": "ok", ... }
  }
}

// Event (on port/sensor changes)
{
  "type": "event",
  "ts": "2025-12-02T18:30:05.456Z",
  "payload": {
    "event": "port_opened",
    "port": "8080",
    "protocol": "tcp",
    "process_name": "node"
  }
}

// Status (on degradation changes)
{
  "type": "status",
  "ts": "2025-12-02T18:30:10.789Z",
  "payload": { "status": "degraded", "detail": "sensors collector failed" }
}
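A client can dispatch on the envelope's type field; a minimal sketch (the formatting helpers are invented for illustration, only the {type, ts, payload} shape comes from the protocol above):

```python
import json

# Dispatch one JSON envelope from /stream by its "type" field.
def handle(frame):
    envelope = json.loads(frame)
    kind, payload = envelope["type"], envelope["payload"]
    if kind == "snapshot":
        return f"cpu={payload['cpu']['cpu_total_percent']}%"
    if kind == "event":
        return f"{payload['event']}: port {payload['port']}"
    if kind == "status":
        return f"status={payload['status']}"
    return "ignored"

frame = ('{"type": "event", "ts": "2025-12-02T18:30:05.456Z", '
         '"payload": {"event": "port_opened", "port": "8080", '
         '"protocol": "tcp", "process_name": "node"}}')
print(handle(frame))  # → port_opened: port 8080
```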

Acceptance Tests

Run these tests to verify full functionality:

✅ 1. Real-Time Updates (30 minutes)

# Open dashboard, leave running for 30 minutes
# Verify: No dropped frames, consistent 1s updates

✅ 2. CPU/Memory Accuracy (±2%)

# Compare dashboard readings to CLI tools
htop  # or top
free -m
# Verify: Values within 2% of CLI output

✅ 3. Port Detection (<2s)

# Open a new port and verify toast notification
python -m http.server 9999
# Verify: "Port Open" toast appears within 2 seconds
# Close the server and verify "Port Closed" toast

✅ 4. Sensor Graceful Degradation

# On system without lm-sensors
# Verify: Cooling panel shows "lm-sensors missing" in degraded state
# Verify: Other panels continue functioning

✅ 5. WebSocket Reconnect

# Restart backend while dashboard is open
docker compose restart backend
# Verify: Dashboard reconnects automatically with "RECONNECTING" status

✅ 6. Localhost Binding

# From another machine on the network
curl http://<host-ip>:8000/healthz
# Verify: Connection refused (backend bound to 127.0.0.1)

✅ 7. Fresh Clone Test

git clone <repo-url> F1-test && cd F1-test
docker compose up --build
# Verify: Dashboard accessible at http://localhost:3000

Troubleshooting

No Sensor Data

# Check if lm-sensors is installed and configured
sensors-detect  # Run once to configure
sensors -j      # Should output JSON

Docker: The container includes lm-sensors but may need host hardware access:

# docker-compose.yml - uncomment for full sensor access
privileged: true

Empty Port List

# Check if ss is available and has permission
ss -tulpnH
# If empty, try with sudo (may need privileged container)

WebSocket Disconnects

  • Check for network interruptions or proxy timeouts
  • Increase PITWALL_WS_SEND_TIMEOUT if clients are slow
  • Frontend auto-reconnects with exponential backoff (max 10s)
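Capped exponential backoff of the kind the frontend uses can be sketched as follows (the base delay is an assumption; only the 10 s cap comes from the text above):

```python
# Yield reconnect delays that double per attempt, capped at `cap` seconds.
def backoff_delays(base=0.5, cap=10.0):
    delay = base
    while True:
        yield min(delay, cap)
        delay *= 2

gen = backoff_delays()
print([next(gen) for _ in range(7)])  # → [0.5, 1.0, 2.0, 4.0, 8.0, 10.0, 10.0]
```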

CORS Errors

Ensure PITWALL_ALLOW_ORIGINS includes your frontend URL:

PITWALL_ALLOW_ORIGINS=http://localhost:3000,http://192.168.1.100:3000

High Memory Usage

Reduce history buffer size:

PITWALL_HISTORY_POINTS=600  # 10 minutes instead of 30
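The history buffer behaves like a bounded deque: with PITWALL_HISTORY_POINTS=600 and one snapshot per second, only the most recent 10 minutes are retained and older entries are evicted automatically. A sketch (the snapshot fields are illustrative):

```python
from collections import deque

# Bounded history buffer: maxlen mirrors PITWALL_HISTORY_POINTS.
history = deque(maxlen=600)
for second in range(1000):
    history.append({"ts": second, "cpu": 42.0})

# 1000 appends, but only the last 600 samples survive.
print(len(history), history[0]["ts"])  # → 600 400
```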

Architecture

┌─────────────────────────────────────────────────────────────┐
│                         Frontend                            │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────────────┐ │
│  │WarningStrip │  │ Tachometer  │  │ TelemetryLineChart  │ │
│  ├─────────────┤  ├─────────────┤  ├─────────────────────┤ │
│  │  FuelGauge  │  │NetworkPanel │  │    CoolingPanel     │ │
│  ├─────────────┤  ├─────────────┤  ├─────────────────────┤ │
│  │   PortMap   │  │ PortsTable  │  │     LogTicker       │ │
│  ├─────────────┤  ├─────────────┤  ├─────────────────────┤ │
│  │ LinkSpeed   │  │  ArpTable   │  │  NetworkInsights    │ │
│  ├─────────────┤  ├─────────────┤  ├─────────────────────┤ │
│  │SystemInfo   │  │  WifiModal  │  │    ToastStack       │ │
│  └─────────────┘  └─────────────┘  └─────────────────────┘ │
│              │                                              │
│              └──────────┬───────────────────────────────────┤
│                         │ useTelemetry() hook               │
│                         │ - WebSocket connection            │
│                         │ - History buffer (180 points)     │
│                         │ - Toast event dispatch            │
│                         │ - Auto-reconnect                  │
└─────────────────────────┼───────────────────────────────────┘
                          │ WebSocket /stream
                          ▼
┌─────────────────────────────────────────────────────────────┐
│                         Backend                             │
│  ┌─────────────────────────────────────────────────────┐   │
│  │                  TelemetryManager                    │   │
│  │  - Coordinates collectors                            │   │
│  │  - Builds snapshot envelopes                         │   │
│  │  - Generates port/sensor events                      │   │
│  │  - Tracks collector health                           │   │
│  └─────────────────────────────────────────────────────┘   │
│              │                                              │
│  ┌───────────┴───────────┐                                 │
│  │      Collectors       │                                 │
│  ├───────────────────────┤                                 │
│  │ CpuMemoryCollector    │ ← psutil                       │
│  │ DiskCollector         │ ← psutil                       │
│  │ NetworkCollector      │ ← psutil                       │
│  │ PortsCollector        │ ← ss -tulpn                    │
│  │ SensorsCollector      │ ← sensors -j                   │
│  │ LogsCollector         │ ← journalctl / /var/log/syslog │
│  │ SystemInfoCollector   │ ← platform/psutil              │
│  └───────────────────────┘                                 │
│                                                             │
│  REST: /healthz, /config, /metrics                         │
│  WebSocket: /stream                                        │
└─────────────────────────────────────────────────────────────┘

File Structure

F1/
├── backend/
│   ├── app/
│   │   ├── collectors/        # Metric collection modules
│   │   │   ├── base.py        # BaseCollector with timeout wrapper
│   │   │   ├── cpu_memory.py  # CPU + memory metrics
│   │   │   ├── disk.py        # Disk I/O + filesystem usage
│   │   │   ├── network.py     # Network throughput + interface stats
│   │   │   ├── ports.py       # Open ports via ss
│   │   │   ├── sensors.py     # Temperature + fan RPM via lm-sensors
│   │   │   ├── logs.py        # Journal/syslog entries
│   │   │   └── system_info.py # Hostname, OS, kernel, uptime
│   │   ├── config.py          # Settings with env var loading
│   │   ├── schemas.py         # Pydantic models for API
│   │   ├── telemetry.py       # TelemetryManager orchestrator
│   │   └── main.py            # FastAPI application
│   ├── Dockerfile
│   └── requirements.txt
├── frontend/
│   ├── src/
│   │   ├── components/        # React UI components
│   │   ├── hooks/             # useTelemetry WebSocket hook
│   │   ├── utils.ts           # Shared helpers (formatBytes, endpoint builders)
│   │   ├── types.ts           # TypeScript interfaces
│   │   ├── styles.css         # Pit wall theme CSS
│   │   ├── App.tsx            # Main dashboard layout
│   │   └── main.tsx           # Entry point
│   ├── Dockerfile
│   └── package.json
├── docker-compose.yml
├── CONTRIBUTING.md
├── LICENSE
├── packaging/
│   ├── snap/                  # Snapcraft packaging
│   │   ├── snapcraft.yaml
│   │   ├── launcher.sh
│   │   └── f1pitwall.desktop
│   ├── flatpak/               # Flatpak packaging
│   │   ├── io.github.f1pitwall.yaml
│   │   ├── launcher.sh
│   │   └── io.github.f1pitwall.desktop
│   └── windows/               # PyInstaller + NSIS
│       ├── f1pitwall.spec
│       ├── launcher.py
│       ├── build.bat
│       └── installer.nsi
├── website/                   # Landing page
│   └── index.html
└── README.md

License

GNU General Public License v3.0 (GPLv3)


Known Limitations

Sensor Access

  • WSL/VMs: lm-sensors cannot access host hardware sensors from within WSL or most virtual machines. The dashboard will show degraded status for the Cooling panel.
  • Docker: Sensor access may require running the container with --privileged flag or mounting /dev and /sys directories.
  • Windows: Temperature and fan data is not available through the standard Python libraries. The Cooling panel will show degraded status.

Port Scanning

  • Non-root users: Port scanning via ss may not show process names without elevated privileges.
  • Docker: Port information reflects the container's network namespace, not the host.

Log Collection

  • journalctl access: Requires systemd and appropriate permissions.
  • macOS: Falls back to reading /var/log/system.log which may have limited entries.

Network Monitoring

  • Virtual interfaces: Docker/Kubernetes create many virtual interfaces that may clutter the interface list.
  • Interface filtering: Currently shows all interfaces; future versions may add filtering options.

Contributing

We welcome contributions! Please see CONTRIBUTING.md for:

  • Development environment setup
  • Code style guidelines
  • Pull request process
  • Testing requirements

Downloads

Pre-built installers are available on the Releases page:

Platform File Notes
Windows F1PitWall-Setup.exe Installer with Start Menu entry
Linux f1pitwall_1.0_amd64.snap Snap package
Linux f1pitwall.flatpak Flatpak bundle
All Docker image Use docker compose up
