Conversation
Replace hardcoded placeholder host with window.location.origin when generating Linux and Windows install scripts so the scripts use the current deployment origin at runtime. Also adjust the Linux install flag from `--server-url` to `--server` to match the expected install script parameter. These changes ensure generated installer commands point to the correct server and include the registration token.
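The effect on the generated Linux one-liner can be sketched in shell. This is illustrative only: the endpoint path and `--token` flag are assumptions, and `ORIGIN` stands in for the `window.location.origin` value the frontend substitutes at render time.

```shell
# ORIGIN stands in for window.location.origin at render time.
# The /api/v1/servers/install.sh path and --token flag are assumptions.
ORIGIN="https://serverkit.example.com"
TOKEN="YOUR_TOKEN"
CMD="curl -fsSL ${ORIGIN}/api/v1/servers/install.sh | sudo bash -s -- --server \"${ORIGIN}\" --token \"${TOKEN}\""
echo "$CMD"
```

Note the flag is `--server`, matching the agent's expected parameter, not the earlier `--server-url`.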
Replace hardcoded repository URL from serverkit/serverkit to jhd3197/ServerKit across agent Dockerfile, README, packaging scripts (deb/rpm) and install scripts so downloads and metadata point to the correct repo. Backend install endpoints now replace the repo placeholder with the GITHUB_REPO constant, and the frontend Windows install command was adjusted to use the -Server parameter instead of -ServerUrl. These changes ensure install scripts, packages, and generated docs reference the intended repository and that generated installer commands match the agent's expected flags.
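The backend-side substitution amounts to simple template replacement before the script is returned. A shell equivalent for illustration (the real code does this in Python, and the `{{GITHUB_REPO}}`/`{{SERVER_URL}}` token names here are assumptions, not the project's actual placeholders):

```shell
# Hypothetical placeholder substitution mirroring what the install
# endpoints do: swap repo and server-URL tokens in the script template.
GITHUB_REPO="jhd3197/ServerKit"
SERVER_URL="https://serverkit.example.com"

printf '%s\n' 'REPO={{GITHUB_REPO}}' 'SERVER={{SERVER_URL}}' > /tmp/install.sh.tpl
sed -e "s|{{GITHUB_REPO}}|${GITHUB_REPO}|g" \
    -e "s|{{SERVER_URL}}|${SERVER_URL}|g" /tmp/install.sh.tpl > /tmp/install.sh
cat /tmp/install.sh
```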
Update backend/requirements.txt to upgrade Flask from 3.0.0 to 3.1.3 and Werkzeug from 3.1.5 to 3.1.6. Applies minor/patch updates for bug fixes and dependency compatibility in the backend.
Move the backend into docker-compose and switch the frontend to a multi-stage Docker build. Key changes:

- Added `backend/.dockerignore` to ignore virtualenv, `__pycache__`, env files, instance, etc.
- Updated backend Flask app to safely serve `index.html` only if it exists and return JSON status/errors when frontend assets are absent.
- Removed `docker-compose.dev.yml` and expanded `docker-compose.yml` to run both backend and frontend in Docker (backend service, healthcheck, restart/logging, new ports `5849:5000` & `3847:80`, and named volumes).
- Frontend changes: remove file ignores that prevented in-container builds, replace the nginx-only image with a Node builder stage + nginx runtime (builds assets in Docker), and update the nginx proxy to target the backend service (`proxy_pass http://backend:5000`).
- Updated dev script (`scripts/dev/dev.bat`) to use plain `docker compose` commands and the new service/container names and ports.

These changes enable fully containerized local/dev runs where the frontend is built inside Docker and the backend runs as a service in the compose network.
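The summary above can be pictured as a compose skeleton. This is a sketch, not the project's actual file: the healthcheck endpoint, volume names, and restart policy are assumptions based on the change description.

```yaml
services:
  backend:
    build:
      context: ./backend
    ports:
      - "5849:5000"
    volumes:
      - serverkit-data:/app/instance
      - serverkit-config:/etc/serverkit
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/"]  # endpoint assumed
      interval: 30s
    restart: unless-stopped

  frontend:
    build:
      context: ./frontend   # multi-stage: Node builds assets, nginx serves them
    ports:
      - "3847:80"
    depends_on:
      - backend

volumes:
  serverkit-data:
  serverkit-config:
```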
Pull request overview
This PR containerizes the full stack (backend + frontend) under a single docker-compose.yml, updates reverse-proxy wiring for container-to-container networking, and fixes lingering GitHub repo reference placeholders so install/packaging flows point at the correct repository.
Changes:
- Unifies Docker orchestration into one `docker-compose.yml` running both services, removing the dev compose file and updating dev helper scripts/ports.
- Switches the frontend to a multi-stage Docker build (Node build → nginx serve) and updates nginx proxy targets to the backend service name.
- Updates install scripts/UI strings and agent packaging metadata to use the new GitHub repo and more accurate install command parameters.
Reviewed changes
Copilot reviewed 18 out of 18 changed files in this pull request and generated 10 comments.
| File | Description |
|---|---|
| `docker-compose.yml` | Adds backend service + unified networking/volumes; updates ports and healthchecks. |
| `docker-compose.dev.yml` | Removes the separate dev compose configuration. |
| `frontend/Dockerfile` | Replaces "copy dist" approach with multi-stage build (Node → nginx). |
| `frontend/.dockerignore` | Stops excluding build-required sources so the in-container build works. |
| `frontend/nginx.conf` | Proxies `/api` and `/socket.io` to `backend:5000` instead of the host. |
| `frontend/src/pages/Servers.jsx` | Generates install one-liners using `window.location.origin` and aligns flag names. |
| `backend/app/__init__.py` | Avoids crashing when frontend assets are absent; returns JSON fallback. |
| `backend/app/api/servers.py` | Adds GitHub repo replacement when serving install scripts. |
| `backend/requirements.txt` | Bumps Flask/Werkzeug versions. |
| `backend/.dockerignore` | Adds Docker ignore rules for backend container builds. |
| `scripts/dev/dev.bat` | Updates dev helper to use the unified compose file and new ports/container name. |
| `scripts/install.sh` | Updates default GitHub repo reference for agent downloads. |
| `scripts/install.ps1` | Updates default GitHub repo reference for agent downloads. |
| `agent/Dockerfile` | Updates OCI source label to the new GitHub repo URL. |
| `agent/README.md` | Updates GitHub release download URLs to the new repo. |
| `agent/packaging/deb/build.sh` | Updates repo URLs in package metadata/systemd unit docs. |
| `agent/packaging/rpm/build.sh` | Updates repo URLs in spec/unit metadata. |
| `VERSION` | Version bump. |
Comments suppressed due to low confidence (1)
frontend/Dockerfile:28
- The Dockerfile comment says "Create non-root user for nginx", but the image still runs as the default user (root) and binds to port 80. Either adjust the wording (since this is only changing ownership), or actually drop privileges (e.g., run nginx as `USER nginx` and listen on an unprivileged port such as 8080).
```dockerfile
# Create non-root user for nginx
RUN chown -R nginx:nginx /usr/share/nginx/html && \
    chown -R nginx:nginx /var/cache/nginx && \
    chown -R nginx:nginx /var/log/nginx && \
    chown -R nginx:nginx /etc/nginx/conf.d
```
```yaml
      - "host.docker.internal:host-gateway"
    volumes:
      - ./nginx/ssl:/etc/nginx/ssl:ro
      - "3847:80"
```

frontend is published on all interfaces (3847:80). Previously it was bound to 127.0.0.1 only; consider keeping a loopback bind by default (and relying on a reverse proxy for public access) to avoid unintentionally exposing the UI.

```diff
-      - "3847:80"
+      - "127.0.0.1:3847:80"
```
```diff
     with open(script_path, 'r', encoding='utf-8') as f:
         content = f.read()

-    # Replace placeholder URLs with actual server URL
+    # Replace placeholders with actual values
     server_url = request.url_root.rstrip('/')
```
In Docker builds where the backend image is built from the ./backend context, _get_scripts_dir() will resolve to a path that likely doesn’t include the repo-root scripts/ directory. That means with open(script_path, ...) here will never be reached (the endpoint will 404 earlier) unless the install scripts are copied into the image. Consider packaging/copying scripts/install.* into the backend image (or adjusting build context) so these endpoints work in the containerized setup.
```diff
     # WebSocket support for socket.io (running on host, not in Docker)
     location /socket.io {
-        proxy_pass http://host.docker.internal:5000;
+        proxy_pass http://backend:5000;
         proxy_http_version 1.1;
```
Same as /api: this comment says socket.io is “running on host, not in Docker”, but the config now proxies to the backend service. Please update the comment to match the new containerized architecture.
```yaml
services:
  backend:
    build:
      context: ./backend
```
The backend image is built with context: ./backend, but get_install_script_linux/windows() reads from a scripts/ directory located at the repo root. In a container built from the backend-only context, that directory won’t be present, so /api/v1/servers/install.sh and /api/v1/servers/install.ps1 will always return “Installation script not found”. Consider either building the backend with the repo root as build context (and copying scripts/ into the image) or relocating the install scripts into the backend package and referencing them via a path inside the image.
```diff
-      context: ./backend
+      context: .
```
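If the repo root becomes the build context, the backend Dockerfile would need to copy the install scripts explicitly. A sketch only, with the base image and paths assumed rather than taken from the project's actual Dockerfile:

```dockerfile
# Assumed layout: build context = repo root, app code under backend/.
FROM python:3.12-slim
WORKDIR /app
COPY backend/ /app/
# Bundle the repo-root install scripts so the install.sh/install.ps1
# endpoints can find them inside the container.
COPY scripts/install.sh scripts/install.ps1 /app/scripts/
RUN pip install --no-cache-dir -r requirements.txt
```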
```yaml
      - DATABASE_URL=sqlite:////app/instance/serverkit.db
      - CORS_ORIGINS=http://localhost,http://localhost:3847,http://localhost:5173
    ports:
      - "5849:5000"
```
backend is published on all interfaces (5849:5000). If the expected setup is “host reverse proxy in front”, binding to 127.0.0.1 by default would reduce accidental network exposure (e.g. 127.0.0.1:5849:5000).
```diff
-      - "5849:5000"
+      - "127.0.0.1:5849:5000"
```
```diff
     with open(script_path, 'r', encoding='utf-8') as f:
         content = f.read()

-    # Replace placeholder URLs with actual server URL
+    # Replace placeholders with actual values
     server_url = request.url_root.rstrip('/')
```
Same issue as the Linux script: when running in Docker with the backend built from the ./backend context, the repo-root scripts/install.ps1 file won’t be present in the image, so this endpoint will 404. Consider copying the scripts into the backend image (or changing build context) so script_path exists at runtime.
```diff
     # Proxy API requests to backend (running on host, not in Docker)
     location /api {
-        proxy_pass http://host.docker.internal:5000;
+        proxy_pass http://backend:5000;
         proxy_http_version 1.1;
```
The comments above these location blocks still say the backend is “running on host, not in Docker”, but proxy_pass now targets the Docker service name (backend). Updating/removing the comment will avoid confusion for future readers.
```yaml
    volumes:
      - serverkit-data:/app/instance
      - serverkit-config:/etc/serverkit
    networks:
```
backend runs as non-root (USER appuser in the backend Dockerfile), but the named volumes mounted at /app/instance (SQLite DB) and /etc/serverkit will be created as root-owned directories by Docker. This will likely cause runtime permission errors when the app tries to create /app/instance/serverkit.db or write JSON configs under /etc/serverkit. Consider adding an entrypoint/init step that chowns these mount points on startup (as root) before dropping privileges, or run the container as root only long enough to fix permissions.
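One common shape for that fix is a small entrypoint that runs as root just long enough to chown the mount points, then drops privileges. A sketch under stated assumptions: `APP_USER`, the directory list, and the availability of `su-exec` are all guesses about this image, not the project's actual setup.

```shell
#!/bin/sh
# Hypothetical entrypoint: fix ownership of named-volume mount points
# (created root-owned by Docker) before dropping privileges.
# APP_USER, DATA_DIRS, and su-exec availability are assumptions.
set -e

APP_USER="${APP_USER:-appuser}"
DATA_DIRS="${DATA_DIRS:-/app/instance /etc/serverkit}"

if [ "$(id -u)" = "0" ]; then
    for d in $DATA_DIRS; do
        mkdir -p "$d"
        # Ignore failures (e.g., APP_USER missing) rather than abort startup.
        chown -R "$APP_USER" "$d" 2>/dev/null || true
    done
    # Drop privileges before running the real command, if a helper exists
    # (su-exec on Alpine images, gosu on Debian-based ones).
    if [ "$#" -gt 0 ] && command -v su-exec >/dev/null 2>&1; then
        exec su-exec "$APP_USER" "$@"
    fi
fi

exec "$@"
```

The compose service would then start as root and let the entrypoint hand off to the unprivileged user, instead of setting `USER appuser` as the final Dockerfile instruction.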
```yaml
      - FLASK_ENV=development
      - SECRET_KEY=dev-secret-key-not-for-production
      - JWT_SECRET_KEY=dev-jwt-key-not-for-production
```
docker-compose.yml hard-codes FLASK_ENV=development and fixed SECRET_KEY / JWT_SECRET_KEY values. This makes it easy to accidentally run an insecure configuration in non-dev environments, and these values also bypass the production-mode default-key checks in backend/config.py (they’re not in INSECURE_SECRET_KEYS). Prefer reading these from .env (e.g., ${SECRET_KEY} / ${JWT_SECRET_KEY}) and defaulting FLASK_ENV to production (or using compose profiles for dev).
```diff
-      - FLASK_ENV=development
-      - SECRET_KEY=dev-secret-key-not-for-production
-      - JWT_SECRET_KEY=dev-jwt-key-not-for-production
+      - FLASK_ENV=${FLASK_ENV:-production}
+      - SECRET_KEY=${SECRET_KEY}
+      - JWT_SECRET_KEY=${JWT_SECRET_KEY}
```
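One way to populate the suggested substitutions, assuming compose reads a `.env` file from the project directory (the file path here is illustrative):

```shell
# Generate strong random secrets for the .env file that the suggested
# ${SECRET_KEY}/${JWT_SECRET_KEY} interpolations would read.
SECRET_KEY=$(openssl rand -hex 32)
JWT_SECRET_KEY=$(openssl rand -hex 32)
printf 'SECRET_KEY=%s\nJWT_SECRET_KEY=%s\n' \
    "$SECRET_KEY" "$JWT_SECRET_KEY" > /tmp/.env.example
cat /tmp/.env.example
```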
````diff
 curl -LO https://github.com/jhd3197/ServerKit/releases/latest/download/serverkit-agent_VERSION_amd64.deb
 sudo dpkg -i serverkit-agent_VERSION_amd64.deb
 sudo serverkit-agent register --token "YOUR_TOKEN" --server "https://your-serverkit.com"
 sudo systemctl start serverkit-agent
 ```

 **RHEL/CentOS/Fedora (.rpm):**
 ```bash
-sudo rpm -i https://github.com/serverkit/serverkit/releases/latest/download/serverkit-agent-VERSION-1.x86_64.rpm
+sudo rpm -i https://github.com/jhd3197/ServerKit/releases/latest/download/serverkit-agent-VERSION-1.x86_64.rpm
 sudo serverkit-agent register --token "YOUR_TOKEN" --server "https://your-serverkit.com"
 sudo systemctl start serverkit-agent
````
The install commands here download and execute remote .deb/.rpm artifacts from GitHub without any checksum or signature verification, which creates a supply chain risk where a compromised GitHub account or release pipeline could deliver malicious code that users then install as root via dpkg -i/rpm -i. An attacker who can tamper with the release artifacts or the GitHub account would gain full control over systems that follow these instructions. Consider requiring users to verify a published checksum or GPG signature before installation, or distributing packages via a signed package repository instead of direct curl/rpm/dpkg invocations.
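A minimal sketch of checksum verification before install. This assumes the release pipeline publishes a `SHA256SUMS` asset, which is not confirmed by this PR; the helper below only shows the verification step itself.

```shell
# verify_sha256 FILE EXPECTED_DIGEST -> exit status 0 only on a match.
verify_sha256() {
    # sha256sum expects "<digest>  <file>" (two spaces) on stdin.
    echo "$2  $1" | sha256sum --check --status
}

# Hypothetical hardened install flow (the SHA256SUMS asset is an assumption):
#   curl -LO https://github.com/jhd3197/ServerKit/releases/latest/download/serverkit-agent_VERSION_amd64.deb
#   curl -LO https://github.com/jhd3197/ServerKit/releases/latest/download/SHA256SUMS
#   verify_sha256 serverkit-agent_VERSION_amd64.deb \
#       "$(awk '/amd64\.deb/ {print $1}' SHA256SUMS)" || exit 1
#   sudo dpkg -i serverkit-agent_VERSION_amd64.deb
```

A GPG-signed checksum file (or a signed apt/yum repository) would additionally protect against an attacker who can replace both the package and its checksum.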
The backend was the one service that refused to live in a container: it ran on the host while everything else sat in Docker, connected by `host.docker.internal` and good intentions. This PR gives it a proper home. Both services now run in a single `docker-compose.yml` with a multi-stage frontend build, and all the old placeholder URLs and ghost repo references (`serverkit/serverkit`) have been exorcised.

**Technical changes**

- Removed `docker-compose.dev.yml` entirely. `docker-compose.yml` now runs both backend and frontend with health checks, log rotation, and named volumes. Backend exposed on `:5849`, frontend on `:3847`.
- Added `backend/.dockerignore`. Backend container uses SQLite at `/app/instance/serverkit.db` with volume mounts for data and config persistence.
- Switched the frontend from copying a prebuilt `dist/` to a multi-stage build (`node:20-alpine` builds, `nginx:alpine` serves). Updated `.dockerignore` to stop excluding source files needed for the in-container build.
- `frontend/nginx.conf` now routes `/api` and `/socket.io` to `backend:5000` (Docker service name) instead of `host.docker.internal:5000`.
- `backend/app/__init__.py`: The `/` route and 404 handler now check whether `index.html` exists before attempting to serve it. When the backend runs standalone (no frontend assets), it returns a JSON response instead of crashing.
- Replaced `serverkit/serverkit` with `jhd3197/ServerKit` in `agent/Dockerfile`, `agent/README.md`, `agent/packaging/deb/build.sh`, `agent/packaging/rpm/build.sh`, `scripts/install.sh`, and `scripts/install.ps1`.
- `backend/app/api/servers.py` now replaces the `GITHUB_REPO` placeholder in addition to the server URL when serving install scripts.
- `frontend/src/pages/Servers.jsx` uses `window.location.origin` for the `curl`/`irm` download URL instead of the hardcoded `https://your-server` placeholder, and aligns parameter names (`--server` instead of `--server-url`, `-Server` instead of `-ServerUrl`).
- Bumped Flask `3.0.0` -> `3.1.3`, Werkzeug `3.1.5` -> `3.1.6`.
- `scripts/dev/dev.bat`: Simplified to use the single `docker-compose.yml`, updated port references to match the new mapping.