LocalPlane is a containerized Linux edge-device control panel. It started as
r1000-network-panel for one reComputer box, but it is now evolving into a
more general local control plane for Linux edge gateways. The current
reference target is a Seeed reComputer R1035-10 / R1000-class device running
Ubuntu Server, but the architecture is moving toward a generic, capability-driven
control surface for small Linux gateways, field-service boxes and homelab nodes.
The product direction is simple:
```
Observe -> Analyze -> Plan -> Apply -> Verify -> Explain
```

Instead of being a pile of one-off buttons, LocalPlane is becoming a small local control plane: discover host capabilities, show current state, preview risky changes, apply only owned configuration, verify the result and keep enough history to recover.
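As a minimal sketch of that loop (names and command strings are illustrative, not LocalPlane's actual API): plan only what differs, show the plan before anything runs, and record applied commands as recoverable history.

```python
from dataclasses import dataclass, field

@dataclass
class Plan:
    """A reviewable change plan: commands are shown before anything runs."""
    commands: list[str]
    applied: bool = False
    history: list[str] = field(default_factory=list)

def plan_change(current: dict, desired: dict) -> Plan:
    # Plan only the keys that actually differ (idempotent by construction:
    # planning again after a successful apply yields an empty plan).
    cmds = [f"set {k}={v}" for k, v in desired.items() if current.get(k) != v]
    return Plan(commands=cmds)

def apply_plan(state: dict, plan: Plan, confirmed: bool) -> dict:
    if not confirmed:
        return state  # preview-first: nothing touches the host
    for cmd in plan.commands:
        _, kv = cmd.split(" ", 1)
        k, v = kv.split("=", 1)
        state[k] = v
        plan.history.append(cmd)  # keep enough history to verify/recover
    plan.applied = True
    return state
```

The point of the shape is that planning and applying are separate steps, so the UI can always show the exact plan and refuse to run it without confirmation.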
LocalPlane is built to solve a specific real-world use case:
- operate a compact Linux edge device in the field or at home
- keep remote access, uplinks, LANs, Wi-Fi, cellular and local services visible
- configure discovered interfaces instead of hardcoding one device layout
- preview and audit host changes before touching routing, firewall or Wi-Fi state
- show capabilities only when the host can actually support them
- keep runtime data, sessions, backups and secrets out of git
It is not intended to be a full enterprise router replacement. It is a focused management portal for a self-hosted edge gateway.
- Dashboard with uplink/LAN/Wi-Fi/service overview, live sessions and system state
- View-aware frontend polling with cached endpoints, hidden-tab pause and non-overlapping requests
- Interface inventory and flexible hotplug direction for USB Ethernet/Wi-Fi/LTE devices
- Preview-only desired interface configs for LAN, Wi-Fi, cellular and tunnel behavior
- Preview-first route/firewall/reconcile foundation
- Wi-Fi client and hotspot mode controls
- Cellular/LTE/5G-oriented APN and modem tooling
- Guarded AT command interface
- Pi-hole and NetAlertX visibility
- Local service discovery and quick access
- Docker container overview
- File system and storage visibility
- Samba file sharing overview and share management
- Device I/O visibility for LEDs, serial ports, GPIO and RS-485
- Admin menu with sync, restart, power off and logout controls
- Session-cookie based login
- Dockerized backend with Vite frontend build output
- Screenshot-safe `?demo=1` mode for public docs and release images
Short version of the current plan:
- Keep the core stable: authentication, cached polling, events, action history, providers and documentation.
- Make Network Control reliable: interface inventory, neutral interface configs, diagnostics, route/firewall preview and reconcile.
- Add safe apply flows only after preview/backup/verify/rollback are solid.
- Improve flexible hardware support for USB Ethernet, Wi-Fi adapters and LTE/5G modems.
- Expand runtime/app-provider ideas later, after networking is dependable.
The project is currently tailored around the Seeed reComputer R1035-10 / R1000 family.
Typical target setup:
- Raspberry Pi CM4-based reComputer device
- Ubuntu Server
- NVMe SSD as primary storage
- eMMC available as fallback or secondary storage
- Ethernet LAN ports
- Wi-Fi interface
- LTE modem support
- RS-485 capable device ports
- USER RGB LEDs and ACT/PWR LED exposure through Linux sysfs
On the reComputer R1035-10, the Seeed overlay should be loaded as `dtoverlay=reComputer-R100x` in `/boot/firmware/config.txt`.

Do not add the `uart2` overlay parameter on this hardware revision unless you are intentionally testing the older alternate pin mapping. It can move the USER RGB LED mapping away from the PCA9535 expander used by this device.
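The kernel exposes these LEDs as plain sysfs files, so they can be checked and driven without any special library. A hedged sketch (the helpers take the leds directory as a parameter so they can be exercised off-device; on the device it would be `/sys/class/leds`, and writing requires permission on those files):

```python
from pathlib import Path
from typing import Optional

def led_state(leds_dir: str, name: str) -> Optional[dict]:
    """Read brightness/max_brightness for one LED, or None if not exposed."""
    led = Path(leds_dir) / name
    if not led.is_dir():
        return None
    return {
        "brightness": int((led / "brightness").read_text()),
        "max_brightness": int((led / "max_brightness").read_text()),
    }

def set_led(leds_dir: str, name: str, value: int) -> None:
    """Write a brightness value to the LED's sysfs brightness file."""
    (Path(leds_dir) / name / "brightness").write_text(str(value))
```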
The panel expects USER RGB LEDs at:

```
/sys/class/leds/led-red
/sys/class/leds/led-green
/sys/class/leds/led-blue
```

and ACT exposed as:

```
/sys/class/leds/ACT
```

Expected RS-485 ports are shown in the UI as:

```
/dev/ttyAMA2
/dev/ttyAMA3
/dev/ttyAMA5
```

LocalPlane is split into a FastAPI backend, a Vite frontend and a small set of domain/provider modules.
```
LocalPlane/
├── assets/
│   └── images/
├── backend/
│   ├── app/
│   │   ├── main.py
│   │   ├── api/
│   │   ├── core/
│   │   ├── domain/
│   │   └── providers/
│   ├── data/
│   ├── scripts/
│   ├── Dockerfile
│   └── requirements.txt
├── frontend/
│   ├── src/
│   │   ├── app.js
│   │   └── styles.css
│   ├── dist/
│   ├── index.html
│   ├── package.json
│   └── vite.config.js
├── scripts/
├── docker-compose.yml
├── .dockerignore
├── .gitignore
└── README.md
```

The backend split follows this rule:
- providers read/apply host state
- domain code decides and plans
- api routers expose data
- frontend displays state and asks

More detailed operator and development notes live in:
```
docs/
├── index.md
├── guide.md
├── networking.md
├── configuration.md
├── security.md
└── development-backlog.md
```

The README is the project landing page. The docs directory is the working operator/developer guide for setup flow, capability-driven UI, safe networking changes, backups, security and future product direction.
The backend is a Python/FastAPI service running inside Docker.
The backend is responsible for:
- serving API endpoints through focused routers
- collecting device, network, service and provider status
- reading interface, route, DNS, firewall, Docker and tunnel information
- managing session authentication and panel settings
- previewing guarded host actions before execution
- exposing Wi-Fi, LAN, cellular, file sharing, monitoring and device I/O data
- serving the compiled frontend in production mode
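The provider/domain/api layering can be illustrated with a small sketch (function and field names are illustrative, not the backend's real modules; the real provider would shell out to tools like `ip` or read sysfs rather than return a fake list):

```python
from dataclasses import dataclass
from typing import Optional

# Provider layer: reads host state (faked here for illustration).
def read_interfaces() -> list:
    return [{"name": "eth0", "up": True}, {"name": "wlan0", "up": False}]

# Domain layer: decides and plans, never touches the host directly.
@dataclass
class BringUpPlan:
    interface: str
    commands: list

def plan_bring_up(interfaces: list, name: str) -> Optional[BringUpPlan]:
    iface = next((i for i in interfaces if i["name"] == name), None)
    if iface is None or iface["up"]:
        return None  # nothing to do: unknown interface or already up
    return BringUpPlan(interface=name, commands=[f"ip link set {name} up"])

# API layer: exposes data; in the real backend this would be a FastAPI router.
def api_preview_bring_up(name: str) -> dict:
    plan = plan_bring_up(read_interfaces(), name)
    return {"plan": plan.commands if plan else []}
```

The separation keeps host access in one replaceable layer, so domain logic stays testable without a real device.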
Backend data files are stored under:

```
backend/data/
```

Runtime JSON files such as auth/session/config state are ignored by git. Only `backend/data/.gitkeep` is tracked.
Backend helper scripts live under:

```
backend/scripts/
```

Current backend scripts include:

```
service-lan-inet-off.sh
service-lan-inet-on.sh
service-lan-ra.py
```

These support the old local-client LAN internet policy and router advertisement behavior. They are legacy helpers and should be replaced by the interface planner/reconciler path before hard apply is expanded.
The frontend is a Vite-based web interface.
Main frontend files:
```
frontend/src/app.js
frontend/src/styles.css
```

The frontend provides the browser UI for the panel. It talks to the backend through `/api`.
During development:
```
cd frontend
npm install
npm run dev
```

The Vite dev server proxies API requests to `http://127.0.0.1:8080`.

Production images build the frontend and serve the compiled files from the backend container.
Compiled frontend output is generated under:
frontend/dist/The frontend includes a screenshot-safe demo mode for documentation and release images.
Open the dev frontend with:
http://10.0.0.1:5173/?demo=1Demo mode is frontend-only and does not change backend data or API behavior. It masks sensitive values such as hostnames, IP addresses, Tailscale addresses, IPv6 addresses, MAC addresses, SSIDs, usernames, session ports, log messages, and command payloads while keeping the UI layout realistic for screenshots.
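The masking itself lives in the frontend JavaScript; purely as an illustration of the idea (the patterns and placeholder strings below are assumptions, not the panel's actual rules), value masking boils down to a list of regex/placeholder pairs applied in order:

```python
import re

# Illustrative rules only — the real masking is done in the frontend.
MASK_RULES = [
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "10.0.0.x"),  # IPv4 addresses
    (re.compile(r"\b(?:[0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}\b"),
     "xx:xx:xx:xx:xx:xx"),                                      # MAC addresses
]

def mask(text: str) -> str:
    """Replace sensitive-looking values with stable placeholders."""
    for pattern, placeholder in MASK_RULES:
        text = pattern.sub(placeholder, text)
    return text
```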
- Ubuntu Server
- Docker
- Docker Compose
- Python
- FastAPI
- Uvicorn
- Vite
- JavaScript
- CSS
- Pi-hole
- NetAlertX
- Tailscale
- Cockpit
- Grafana
- Prometheus
- Node Exporter
- Samba
LocalPlane is designed to sit next to common self-hosted infrastructure services.
Currently integrated or visible services include:
- Tailscale
- Cockpit
- Grafana
- Prometheus
- Node Exporter
- Pi-hole
- NetAlertX
- Samba
- VirtualHere
The dashboard and service pages provide quick status visibility and access points for these tools.
Normal users do not need to run frontend commands. Docker builds the web UI for you.
- Clone the project and enter the folder.

```
git clone https://github.com/ergunozgur/LocalPlane.git
cd LocalPlane
```

- Create your local config file.

```
cp .env.example .env
```

- Edit `.env` and set your panel password.

```
nano .env
```

At minimum, change:

```
PANEL_PASSWORD=change-me-before-first-run
```

- Start the panel.

```
docker compose up -d --build backend
```

- Open the web UI.

```
http://<device-ip>:8080
```

Default username: `admin`
Password: the value you put in `.env`.

That is it. The frontend is built inside Docker during the image build.
Useful checks:

```
docker compose ps
docker compose logs --tail=80 backend
```

The backend container runs Uvicorn on `0.0.0.0:8080`.

Rebuild after code changes:

```
docker compose up -d --build backend
```

Follow logs:

```
docker compose logs -f backend
```

The web panel uses a session-cookie login screen.
Set the panel password before starting the service:
```
PANEL_PASSWORD=change-me-before-first-run
```

`PANEL_USERNAME` defaults to `admin` unless overridden.
Signed-in users can change the panel username and password from the Users page. The updated credential hash is stored in `/app/data/panel-auth.json`.

Active panel sessions are stored in `/app/data/panel-sessions.json`. This allows backend/container restarts without forcing a fresh login while the browser cookie is still valid.
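The on-disk format of `panel-auth.json` is an implementation detail; as a hedged sketch of the general pattern only (salted PBKDF2 via the standard library — an assumption, not necessarily what LocalPlane uses), a credential record stores a salt and a hash, never the password:

```python
import hashlib
import hmac
import os

def make_auth_record(username: str, password: str) -> dict:
    """Build a salted-hash credential record; the password itself is not stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return {"username": username, "salt": salt.hex(), "hash": digest.hex()}

def check_password(record: dict, password: str) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    salt = bytes.fromhex(record["salt"])
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(digest.hex(), record["hash"])
```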
Before exposing the panel beyond a trusted local network, change `PANEL_USERNAME` and `PANEL_PASSWORD` in the deployment configuration.
The panel is moving toward discovered interfaces first. The operator should see the actual kernel interface, stable identity and capability before deciding what that interface should do.
Long-term, every useful NIC/radio/modem should appear as inventory:
- kernel name, such as `eth0`, `enx...`, `wlan0`, `wwan0`
- stable identity when possible, such as MAC address, sysfs path or modem/SIM ID
- detected kind: Ethernet, Wi-Fi, cellular, tunnel, bridge or virtual
- compatible behavior the operator can configure
Example behaviors include:
- trusted local client network
- isolated client network
- Ethernet uplink
- Wi-Fi uplink
- Wi-Fi hotspot
- Cellular uplink
- remote management tunnel
All Ethernet/LAN-style interfaces should share the same base settings model: interface identity, IPv4/IPv6 addressing, DHCP/RA, DNS, Pi-hole policy and internet sharing. Differences should come from selected behavior and hardware capability, not from completely separate hardcoded forms.
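A shared settings model plus capability-gated behaviors could be sketched like this (field names and behavior strings are illustrative, not the backend's actual schema):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class InterfaceConfig:
    """Neutral base settings shared by all Ethernet/LAN-style interfaces."""
    name: str                    # kernel name, e.g. eth0 / wlan0
    kind: str                    # ethernet | wifi | cellular | tunnel | ...
    behavior: str                # selected role, e.g. "trusted-lan"
    ipv4: Optional[str] = None   # CIDR, e.g. "10.0.0.1/24"
    dhcp_server: bool = False
    dns: list = field(default_factory=list)
    pihole_policy: str = "inherit"
    share_internet: bool = False

# Capability gating: only offer behaviors the hardware can support.
BEHAVIORS_BY_KIND = {
    "ethernet": ["trusted-lan", "isolated-lan", "ethernet-uplink"],
    "wifi": ["wifi-uplink", "wifi-hotspot"],
    "cellular": ["cellular-uplink"],
}

def compatible_behaviors(kind: str) -> list:
    return BEHAVIORS_BY_KIND.get(kind, [])
```

Differences between interfaces then come from `behavior` and the detected `kind`, not from separate hardcoded forms.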
Hard network changes are expected to move toward preview/apply/verify/rollback. The current reconcile endpoint is intentionally preview-first.
The redesigned interface uses a dark operational dashboard style with grouped navigation, live status cards, action previews, and capability-driven network pages. It is built to make Linux edge boxes easier to operate as both field-service gateways and home lab infrastructure nodes.
The login page keeps panel access behind a local session-cookie flow before exposing device controls.
It is intentionally simple so screenshots and field use both start from a clear, secure entry point.
The dashboard provides the main operational overview of the device. It summarizes uplinks, LAN state, wireless state, running containers, live sessions, local ports, detected clients, and key system health metrics.
The network map gives a quick visual relationship between internet, LAN, wireless, services, and the device itself.
The top navigation groups related tools into clear sections such as Dashboard, Network, Logs, Services, and Device. Dropdown menus keep secondary pages available without crowding the main interface.
The Network page is being moved from fixed LAN roles to discovered interface configs. It should show each discovered interface, link state, internet policy, Pi-hole policy, addressing mode, DNS behavior and connected clients.
Interface cards are grouped by uplinks, LAN ports, wireless, and virtual interfaces so physical and logical links are easier to scan.
The Cellular page centralizes LTE and modem-related controls. It shows registration state, operator, signal metrics, APN profile information, uplink preference, and guarded AT command access.
APN presets and manual profile options are kept together so cellular behavior can be adjusted without mixing it into Wi-Fi or LAN pages.
The Wireless page focuses on Wi-Fi client, hotspot, scan, and radio behavior.
Live state, configuration, connected clients, visible networks, and radio details are grouped into collapsible sections to keep the page readable while still exposing advanced information when needed.
The Monitoring page combines Pi-hole, NetAlertX, and network visibility controls.
It shows DNS forwarding status, Pi-hole binding information, NetAlertX discovery status, active scan targets, discovery scope, and high-level network state.
The Logs page provides recent panel, network, and service activity in a compact table.
It is useful for checking dashboard sync events, detected listeners, DNS/SSH/session activity, Docker status, and service-level events without leaving the panel.
The Services page presents detected local listeners as compact service cards.
It highlights named services, protocol and source tags, online state, known entry points, and port-only listeners without forcing the operator into terminal inspection for every local service.
The File System page displays key mounts, storage devices, external or removable storage, and other mounted paths.
It helps verify NVMe, eMMC, boot partitions, overlay mounts, and available capacity from the web interface.
The File Sharing page provides Samba share visibility and basic share management.
It shows running Samba services, configured shares, guest/read-only policy, valid users, and placeholder printing status for future CUPS support.
The Users page keeps panel account management and Samba user management in one place.
It supports panel credential updates, SMB account visibility, and Samba account actions while keeping authentication state separate from normal network controls.
The Device I/O page exposes kernel-visible LEDs, serial ports, GPIO chips, expected RS-485 ports, and LED policies.
It is designed for hardware validation on the reComputer R1000 platform, including ACT/PWR/user LED behavior and expansion readiness.
The LoRaWAN page is a reserved area for future radio-module workflows.
It is intended to hold LoRaWAN, Meshtastic, radio profiles, and related settings without scattering radio features across the dashboard.
The admin menu provides session information, manual sync, logout, restart, and power-off controls.
It keeps high-impact actions separated from normal network pages while still making them accessible for local administration.
Configuration changes can be reviewed before execution.
The review dialog shows the editable request payload and the exact host commands that will run, reducing the risk of applying network or wireless changes blindly.
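The review-before-run idea can be sketched as a pure function from an editable request payload to the host commands that would run (payload keys and generated commands here are hypothetical, not the panel's real ones):

```python
def preview_commands(payload: dict) -> list:
    """Translate a request payload into host commands without executing them."""
    cmds = []
    iface = payload["interface"]
    if "ipv4" in payload:
        cmds.append(f"ip addr replace {payload['ipv4']} dev {iface}")
    if payload.get("up"):
        cmds.append(f"ip link set {iface} up")
    # Shown in the review dialog; execution is a separate, confirmed step.
    return cmds
```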
The current UI includes or reserves pages for:
- Dashboard
- Network
- Interfaces
- Wireless
- Cellular
- Diagnostics
- Routing / Firewall
- Reconcile
- Monitoring
- Logs
- Services
- Terminal
- File System
- File Sharing
- Runtime
- Actions
- Panel Settings
- Device I/O
- LoRaWAN / Meshtastic
- Admin controls
- Review and run dialogs
- LTE status visibility and APN-related controls
- LAN/client network management
- Internet on/off controls for selected interfaces
- View-aware frontend polling and endpoint caching
- Interface inventory and neutral config previews
- Network diagnostics and reconcile preview foundation
- Wireless client and hotspot mode handling
- Local service discovery and quick access
- Monitoring-oriented service integration
- Initial dual-stack networking support
- Dashboard network map
- Live session visibility
- Docker service summary
- File system visibility
- Samba visibility and share management
- Device LED, serial port and GPIO visibility
- Admin sync/restart/power controls
- Review-before-run flow for network actions
- keep frontend polling cheap and view-aware on low-power devices
- continue extracting legacy backend code into routers, domain modules and providers
- make network control reliable before adding any app marketplace/runtime expansion
- move legacy LAN routing and RA behavior from one-shot scripts toward idempotent reconcile plans
- improve neutral interface configs for hotplug Ethernet, Wi-Fi and LTE/5G devices
- add safer action preview, execute, verify, history and rollback flows
- keep the UI compact, with dense details moved into focused panels instead of crowded cards
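An idempotent reconcile plan for routes (a sketch only; real plans would also cover firewall and RA state, and the command strings are illustrative) reduces to a set difference, so running it twice plans nothing the second time:

```python
def reconcile_routes(current: set, desired: set) -> list:
    """Plan only the commands needed to converge; an empty plan means converged."""
    add = sorted(desired - current)
    remove = sorted(current - desired)
    return [f"ip route add {r}" for r in add] + [f"ip route del {r}" for r in remove]
```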
The detailed roadmap lives in docs/development-backlog.md.
- first-run setup wizard with capability detection and admin credential setup
- stable interface registry for hotplug adapters and modems
- controlled uplink failover across Ethernet, Wi-Fi and cellular
- nftables-owned firewall/NAT reconciliation with rollback
- richer LTE/5G modem and APN handling
- Wi-Fi hotspot/client workflow hardening
- backup and restore tooling for runtime and host config changes
- exposure analyzer for admin services and unsafe binds
- optional API keys, TOTP and stronger browser-session protections
- route/topology map with focused node detail windows
- optional app/runtime provider work only after networking is reliable
This section is only for people changing the code.
Frontend development server:
```
cd frontend
npm install
npm run dev
```

The dev server proxies API requests to the Docker backend on `127.0.0.1:8080`.
Normal Docker users do not need this step.
Backend validation:
```
python3 -m compileall -q backend/app
```

Frontend build validation:
```
cd frontend
npm run build
```

Docker rebuild:

```
cd ~/LocalPlane
docker compose up -d --build backend
```

Check backend logs:
```
docker compose logs --tail=80 backend
```

For large redesign work, create a WIP patch before continuing:

```
git diff > ~/localplane-dashboard-redesign-wip.patch
```

For a full working-tree backup including untracked files:
```
tar -czf ~/localplane-dashboard-redesign-wip-full.tar.gz \
  README.md backend/Dockerfile backend/app docker-compose.yml \
  .dockerignore frontend scripts
```

A full patch including untracked files can be created by temporarily staging files:

```
git add README.md backend/Dockerfile backend/app docker-compose.yml .dockerignore frontend scripts
git diff --cached > ~/localplane-dashboard-redesign-wip-full.patch
git reset
```

This does not create a commit.
This platform is designed to serve both as:
- a field-service edge device
- a home lab infrastructure node
It provides selective internet access control for service-facing interfaces, which is especially useful when operating over limited LTE data plans or when working with devices that should only receive controlled connectivity.
LocalPlane is actively evolving. The current version is a practical working prototype with a redesigned dashboard, interface inventory, monitoring pages, service visibility, wireless/cellular controls and hardware-focused device pages.
The project is suitable for homelab experimentation, edge gateway prototyping, and field-service workflow development.