See all your Kubernetes clusters in one place. No kubectl required.
Kubey is a web-based Kubernetes dashboard that gives your entire team visibility into your infrastructure—without requiring everyone to configure kubectl, manage kubeconfigs, or learn CLI commands.
Your team manages multiple Kubernetes clusters across environments—dev, staging, production, maybe multiple regions. When something goes wrong or a deployment looks different than expected, you're left asking:
- "Is this running the same version in prod as staging?"
- "Can someone with cluster access check if the pods are healthy?"
- "Why does this deployment have different replicas in EU vs US?"
Everyone scrambles for kubectl access, juggles kubeconfig files, and context-switches between terminals. Developers without cluster access wait for someone else to check.
Kubey puts your clusters in a browser. Share a single URL with your team and everyone can:
- See cluster health at a glance — Nodes, pods, deployments, services across all your clusters
- Compare deployments across environments — Instantly spot configuration drift between dev/staging/prod
- View pod logs — Debug issues without SSH access or kubectl
- No local setup required — Works from any browser, any device
One engineer configures the clusters once. The entire team gets visibility forever.
The killer feature: cross-cluster deployment comparison. Select any deployment and compare it across all your clusters simultaneously:
- See which clusters are running which image versions
- Spot replica count differences between environments
- Identify missing environment variables or config changes
- Compare resource limits and requests
"Why is production running v2.3.1 but staging is on v2.4.0?" — Answer that in seconds, not minutes.
- Developers — Check deployment status without cluster credentials
- QA Teams — Verify which version is deployed where
- On-Call Engineers — Quick cluster overview from your phone at 2am
- Product Managers — See what's actually running in production
No kubectl. No kubeconfig. No VPN gymnastics. Just open a URL.
- Sign in with Google or GitHub
- Admin controls who can access which clusters
- Restrict signups to your company domain (e.g., `@yourcompany.com`)
- Invite-only mode for tight access control
- Live CPU and memory metrics for nodes and pods
- Pod status and restart counts
- Service and ingress overview
- Namespace-organized resource views
- Runs in your infrastructure — your data never leaves your network
- Single binary with embedded database — no external dependencies
- Mount your existing kubeconfig — works with any cluster you can already access
- Lightweight enough to run on a Raspberry Pi, powerful enough for enterprise
```bash
# Single-user mode (no authentication)
docker run -d \
  --name kubey \
  -p 8080:8080 \
  -v ~/.kube/config:/home/kubey/.kube/config:ro \
  -v kubey-data:/data \
  jboocodes/kubey:latest
```

Open http://localhost:8080 in your browser.
For team mode with OAuth authentication:

```bash
docker run -d \
  --name kubey \
  -p 8080:8080 \
  -v ~/.kube/config:/home/kubey/.kube/config:ro \
  -v kubey-data:/data \
  -e AUTH_MODE=shared \
  -e JWT_SECRET=your-secure-secret-key \
  -e GITHUB_CLIENT_ID=your-github-client-id \
  -e GITHUB_CLIENT_SECRET=your-github-secret \
  jboocodes/kubey:latest
```

Create a `docker-compose.yml`:
```yaml
version: '3.8'
services:
  kubey:
    image: jboocodes/kubey:latest
    container_name: kubey
    ports:
      - "8080:8080"
    volumes:
      - ~/.kube/config:/home/kubey/.kube/config:ro
      - kubey-data:/data
    environment:
      - AUTH_MODE=single
      # For team mode, uncomment:
      # - AUTH_MODE=shared
      # - JWT_SECRET=your-secure-secret
      # - GITHUB_CLIENT_ID=xxx
      # - GITHUB_CLIENT_SECRET=xxx
    restart: unless-stopped

volumes:
  kubey-data:
```

Then start it:

```bash
docker-compose up -d
```

Deploy Kubey to your cluster:
```yaml
# kubey.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kubey
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubey
  namespace: kubey
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubey-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: kubey
    namespace: kubey
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kubey-data
  namespace: kubey
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubey
  namespace: kubey
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubey
  template:
    metadata:
      labels:
        app: kubey
    spec:
      serviceAccountName: kubey
      containers:
        - name: kubey
          image: jboocodes/kubey:latest
          ports:
            - containerPort: 8080
          env:
            - name: AUTH_MODE
              value: "single"
          volumeMounts:
            - name: data
              mountPath: /data
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: kubey-data
---
apiVersion: v1
kind: Service
metadata:
  name: kubey
  namespace: kubey
spec:
  selector:
    app: kubey
  ports:
    - port: 80
      targetPort: 8080
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubey
  namespace: kubey
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: kubey.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubey
                port:
                  number: 80
  tls:
    - hosts:
        - kubey.example.com
      secretName: kubey-tls
```

Apply it:

```bash
kubectl apply -f kubey.yaml
```

Or install with Helm:

```bash
# Install with default values (single-user mode)
helm install kubey oci://ghcr.io/justinbehncodes/charts/kubey --version 0.1.0 -n kubey

# Install with team authentication
helm install kubey kubey/kubey \
  --set auth.mode=shared \
  --set auth.jwtSecret=your-secure-secret \
  --set auth.github.clientId=xxx \
  --set auth.github.clientSecret=xxx
```

| Variable | Default | Description |
|---|---|---|
| `AUTH_MODE` | `single` | Authentication mode: `single`, `shared`, or `hybrid` |
| `JWT_SECRET` | (generated) | Secret key for JWT tokens (required in production) |
| `PORT` | `8080` | Server port |
| `HOST` | `localhost` | Server host |
| `DATABASE_PATH` | `./data/kubey.db` | SQLite database path |
| `KUBECONFIG` | `~/.kube/config` | Path to kubeconfig file |
| `STATIC_PATH` | (embedded) | Path to static files (for Docker) |
| Variable | Description |
|---|---|
| `GITHUB_CLIENT_ID` | GitHub OAuth App Client ID |
| `GITHUB_CLIENT_SECRET` | GitHub OAuth App Client Secret |
| `GOOGLE_CLIENT_ID` | Google OAuth Client ID |
| `GOOGLE_CLIENT_SECRET` | Google OAuth Client Secret |
| `OAUTH_CALLBACK_URL` | Base URL for OAuth callbacks (e.g., `https://kubey.example.com`) |
| `ADMIN_EMAIL` | Pre-define the admin user by email |
| Variable | Description |
|---|---|
| `ALLOWED_DOMAINS` | Comma-separated list of allowed email domains for signup |
| `INVITE_ONLY` | When `true`, users must be pre-created by an admin |
Kubey supports three authentication modes:

**Single** (`AUTH_MODE=single`)
- No authentication required
- Perfect for personal use or local development
- All clusters are accessible to anyone who can reach the dashboard

**Shared** (`AUTH_MODE=shared`)
- OAuth authentication required (GitHub/Google)
- Admin manages all clusters centrally
- First user to sign in becomes admin (or set `ADMIN_EMAIL`)
- Clusters mounted at startup are shared with all users
- Admin can restrict signups by domain or enable invite-only mode

**Hybrid** (`AUTH_MODE=hybrid`)
- OAuth authentication required
- Admin-managed shared clusters plus personal clusters
- Users can add their own kubeconfigs
- Combines centralized management with personal flexibility
**GitHub**
- Go to GitHub Settings > Developer settings > OAuth Apps > New OAuth App
- Set Homepage URL: `https://kubey.example.com`
- Set Authorization callback URL: `https://kubey.example.com/auth/callback/github`
- Copy the Client ID and Client Secret

**Google**
- Go to Google Cloud Console > APIs & Services > Credentials
- Create an OAuth 2.0 Client ID (Web application)
- Add Authorized redirect URI: `https://kubey.example.com/auth/callback/google`
- Copy the Client ID and Client Secret
| Method | Endpoint | Description |
|---|---|---|
| GET | `/health` | Health check |
| GET | `/api/mode` | Get authentication mode |
| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/me` | Get current user |
| GET | `/api/clusters` | List all clusters |
| GET | `/api/clusters/:id` | Get cluster details |
| GET | `/api/clusters/:id/nodes` | Get cluster nodes |
| GET | `/api/clusters/:id/pods` | Get all pods |
| GET | `/api/clusters/:id/services` | Get all services |
| GET | `/api/clusters/:id/deployments` | Get all deployments |
| GET | `/api/clusters/:id/namespaces` | Get namespaces with resources |
| GET | `/api/clusters/:id/pods/:ns/:name/logs` | Get pod logs |
| GET | `/api/compare/deployments` | Compare deployments across clusters |
| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/admin/users` | List all users |
| POST | `/api/admin/users` | Create user |
| PATCH | `/api/admin/users/:id` | Update user |
| DELETE | `/api/admin/users/:id` | Delete user |
| GET | `/api/admin/clusters` | List all clusters |
| PATCH | `/api/admin/clusters/:id` | Update cluster |
| DELETE | `/api/admin/clusters/:id` | Delete cluster |
| GET | `/api/admin/settings` | Get settings |
| PATCH | `/api/admin/settings` | Update settings |
- Go 1.24+
- Node.js 20+
- npm or bun
Clone the repository:

```bash
git clone https://github.com/justinbehncodes/kubey.git
cd kubey
```

Run the frontend:

```bash
cd web
npm install
npm run dev  # http://localhost:5173
```

Run the backend:

```bash
cd api
go mod download
go run cmd/api/main.go
```

Or run both with:

```bash
make dev
```

Build the Docker image locally:

```bash
docker build -t kubey:local .
```

Run the tests:

```bash
# Backend tests
cd api && go test ./...

# With gotestsum for cleaner output
make test
```

Project layout:

```
kubey/
├── api/                  # Go backend
│   ├── cmd/api/          # Entry point
│   ├── internal/
│   │   ├── auth/         # Authentication (JWT, OAuth)
│   │   ├── database/     # SQLite + migrations
│   │   ├── handlers/     # HTTP handlers
│   │   ├── middlewares/  # Logging, CORS, recovery
│   │   └── services/     # Kubernetes client
│   └── go.mod
├── web/                  # React frontend
│   ├── src/
│   │   ├── components/   # UI components
│   │   ├── pages/        # Page components
│   │   ├── services/     # API clients
│   │   └── contexts/     # React contexts
│   └── package.json
├── Dockerfile            # Multi-stage build
└── docker-compose.yml
```
Backend:
- Go with Gin web framework
- Pure-Go SQLite (no CGO required)
- client-go for Kubernetes API
- JWT authentication
Frontend:
- React 19 with TypeScript
- Vite for build tooling
- Tailwind CSS v4
- shadcn/ui components
**Clusters not connecting**
- Ensure your kubeconfig is mounted correctly
- Check cluster connectivity: `kubectl cluster-info`
- Verify RBAC permissions for the service account
**OAuth sign-in failing**
- Verify callback URLs match exactly
- Check that the client ID and secret are correct
- Ensure `OAUTH_CALLBACK_URL` is set correctly
**Database errors**
- Check that the `/data` volume is writable
- Ensure sufficient disk space
- Try removing the database to reset: `rm /data/kubey.db`
Contributions are welcome! Please read our contributing guidelines before submitting PRs.
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Commit your changes: `git commit -m 'Add amazing feature'`
- Push to the branch: `git push origin feature/amazing-feature`
- Open a Pull Request
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
- GitHub Issues - Bug reports and feature requests
- Discussions - Questions and community support