Panvex is a control-plane and web dashboard for managing multiple Telemt nodes.
- `cmd/control-plane` runs the operator HTTP API and the agent gRPC gateway.
- `cmd/agent` runs a local agent that talks to Telemt over loopback only.
- `internal/controlplane/...` contains auth, jobs, presence, state, and server logic.
- `internal/agent/...` contains the Telemt client, runtime orchestration, and agent state helpers.
- `internal/gatewayrpc` contains the shared gRPC transport contract used by the control-plane and the agent.
- `web` contains the React dashboard.
- `db/migrations` and `db/queries` contain the initial PostgreSQL schema and sqlc query set.
- `proto` contains the human-readable gateway contract.
- Local working plans belong in `.tmp/plans/` and stay out of git.
For a single-binary release install, use the release installer:
```sh
curl -fsSL https://github.com/panvex/panvex/releases/latest/download/install.sh | sh
```

The installer defaults to SQLite.

When run as root it installs the binary into `/usr/local/bin`, writes runtime files into `/etc/panvex` and `/var/lib/panvex`, and installs a systemd unit that is left disabled by default.

When run without root it installs under `~/.local` and writes a local start script instead of a systemd unit.
To install against PostgreSQL instead:
```sh
PANVEX_STORAGE_DRIVER=postgres \
PANVEX_STORAGE_DSN='postgres://panvex:password@127.0.0.1:5432/panvex?sslmode=disable' \
curl -fsSL https://github.com/panvex/panvex/releases/latest/download/install.sh | sh
```

Installer help is available through:

```sh
bash deploy/install.sh --help
```

For a split Docker deployment with SQLite:
```sh
docker compose -f deploy/docker-compose.sqlite.yml up --build -d
docker compose -f deploy/docker-compose.sqlite.yml exec backend \
  ./panvex-control-plane bootstrap-admin \
  -storage-driver sqlite \
  -storage-dsn /var/lib/panvex/panvex.db \
  -username admin \
  -password '<strong-password>'
```

The dashboard is then available on http://127.0.0.1:8080, while the agent gRPC gateway is exposed on 127.0.0.1:8443.
For a split Docker deployment with PostgreSQL:
```sh
docker compose -f deploy/docker-compose.postgres.yml up --build -d
docker compose -f deploy/docker-compose.postgres.yml exec backend \
  ./panvex-control-plane bootstrap-admin \
  -storage-driver postgres \
  -storage-dsn 'postgres://panvex:panvex@postgres:5432/panvex?sslmode=disable' \
  -username admin \
  -password '<strong-password>'
```

The SQLite compose file keeps SQLite as the default storage mode. The PostgreSQL compose file introduces PostgreSQL explicitly and does not change the SQLite default path.
This is the split development workflow with a standalone Go backend and a Vite development server.
You do not need prebuilt frontend assets for:
```sh
go build ./...
go run ./cmd/control-plane
```
- Bootstrap the first local admin:

  ```sh
  go run ./cmd/control-plane bootstrap-admin \
    -username admin \
    -password "<strong-password>"
  ```

  By default this writes the first admin into the SQLite database at `data/panvex.db`. To target PostgreSQL instead:

  ```sh
  go run ./cmd/control-plane bootstrap-admin \
    -storage-driver postgres \
    -storage-dsn "postgres://panvex:password@127.0.0.1:5432/panvex?sslmode=disable" \
    -username admin \
    -password "<strong-password>"
  ```
- Start the control-plane:

  ```sh
  go run ./cmd/control-plane -http-addr :8080 -grpc-addr :8443
  ```

  The default startup backend is SQLite at `data/panvex.db`. To start against PostgreSQL:

  ```sh
  go run ./cmd/control-plane \
    -http-addr :8080 \
    -grpc-addr :8443 \
    -storage-driver postgres \
    -storage-dsn "postgres://panvex:password@127.0.0.1:5432/panvex?sslmode=disable"
  ```
- Start the web dashboard:

  ```sh
  cd web
  npm install
  npm run dev
  ```
- Build the embedded frontend assets:

  ```sh
  cd web
  npm install
  npm run build:embed
  ```

- Build the control-plane binary with the embedded UI release tag:

  ```sh
  go build -tags embeddedui -o panvex-control-plane ./cmd/control-plane
  ```
- Bootstrap the first admin:

  ```sh
  ./panvex-control-plane bootstrap-admin -username admin -password '<strong-password>'
  ```

  The bootstrap user starts with TOTP disabled by default.
- Start the single-binary release:

  ```sh
  ./panvex-control-plane -http-addr :8080 -grpc-addr :8443
  ```

  The dashboard is served by the same binary on http://127.0.0.1:8080.
Panvex keeps TOTP optional by default for local accounts, including the first admin.
- Sign in with username and password when TOTP is disabled.
- Open Settings and use "Optional two-factor authentication" to start setup.
- Confirm setup with the current password and a fresh code from the authenticator app before TOTP becomes active.
- Disable active TOTP with the current password and current TOTP code.
For multi-user installs, admins can open Settings and use Admin TOTP recovery to reset another user's TOTP from the web panel.
For emergency recovery on the server, reset a local user's TOTP through the control-plane CLI:
```sh
./panvex-control-plane reset-user-totp \
  -username admin
```

If the control-plane is using PostgreSQL instead of the default SQLite backend, pass the storage flags explicitly:

```sh
./panvex-control-plane reset-user-totp \
  -storage-driver postgres \
  -storage-dsn 'postgres://panvex:password@127.0.0.1:5432/panvex?sslmode=disable' \
  -username admin
```

- Create an enrollment token from the dashboard Settings screen.
- On the Linux server that runs Telemt, install the agent:

  ```sh
  curl -fsSL https://github.com/panvex/panvex/releases/latest/download/install-agent.sh | \
    sudo sh -s -- \
    --panel-url https://panel.example.com \
    --enrollment-token "<token>"
  ```

  The installer downloads the agent, asks for the local Telemt API settings, bootstraps the agent identity through the panel HTTPS API, and starts a `systemd` service.
- For an advanced manual flow, bootstrap a downloaded binary directly without saving any `ca_pem` file:

  ```sh
  ./panvex-agent bootstrap \
    -panel-url https://panel.example.com \
    -enrollment-token "<token>" \
    -state-file /var/lib/panvex-agent/agent-state.json
  ```
Panvex can now manage Telemt clients centrally instead of editing them node by node.
Open Clients in the dashboard to:
- create a managed client with a generated secret and `user_ad_tag`
- set Telemt limits such as `max_tcp_conns`, `max_unique_ips`, quota, and expiration
- assign the client by fleet group or explicit nodes
- rotate the client secret later without recreating the whole client
After each save, Panvex immediately queues rollout jobs for the selected nodes. Each node returns its own Telegram connection link, so the client detail page shows:
- the current deployment status on every node
- the latest node-specific connection link
- aggregated current usage such as traffic, unique IPs, and active TCP connections
```sh
go build ./...
npm run build          # from web/
npm run build:embed    # from web/
go build -tags embeddedui ./cmd/control-plane
bash deploy/install.sh --help
bash deploy/install-agent.sh --help
go test ./cmd/agent ./internal/agent/state -v
rg -n "services:|sqlite|postgres|web:" deploy/docker-compose.sqlite.yml deploy/docker-compose.postgres.yml Dockerfile
go test ./cmd/control-plane -run "TestRunBootstrapAdmin|TestRunResetUserTotp" -v
go test ./internal/controlplane/auth ./internal/controlplane/server -run "TestServiceBootstrapUserLeavesTotpDisabledByDefault|TestServiceAuthenticateAllowsOperatorWithoutTotpWhenDisabled|TestServiceEnableTotpRequiresPendingSetup|TestServiceEnableTotpRequiresValidPasswordAndCode|TestServiceDisableTotpRequiresValidPasswordAndCode|TestServiceResetTotpClearsEnabledState|TestHTTPAuthTotpSetupEnableDisableFlow|TestHTTPUsersTotpResetRequiresAdminAndClearsTarget" -v
go test ./internal/controlplane/auth ./internal/controlplane/jobs ./internal/controlplane/server ./internal/controlplane/state ./internal/controlplane/storage/migrate ./internal/controlplane/storage/postgres ./internal/controlplane/storage/sqlite -v
```