
Kubey

See all your Kubernetes clusters in one place. No kubectl required.

Kubey is a web-based Kubernetes dashboard that gives your entire team visibility into your infrastructure—without requiring everyone to configure kubectl, manage kubeconfigs, or learn CLI commands.



Why Kubey?

The Problem

Your team manages multiple Kubernetes clusters across environments—dev, staging, production, maybe multiple regions. When something goes wrong or a deployment looks different than expected, you're left asking:

  • "Is this running the same version in prod as staging?"
  • "Can someone with cluster access check if the pods are healthy?"
  • "Why does this deployment have different replicas in EU vs US?"

Everyone scrambles for kubectl access, juggles kubeconfig files, and context-switches between terminals. Developers without cluster access wait for someone else to check.

The Solution

Kubey puts your clusters in a browser. Share a single URL with your team and everyone can:

  • See cluster health at a glance — Nodes, pods, deployments, services across all your clusters
  • Compare deployments across environments — Instantly spot configuration drift between dev/staging/prod
  • View pod logs — Debug issues without SSH access or kubectl
  • No local setup required — Works from any browser, any device

One engineer configures the clusters once. The entire team gets visibility forever.


Key Features

Cross-Cluster Deployment Comparison

The killer feature. Select any deployment and compare it across all your clusters simultaneously:

  • See which clusters are running which image versions
  • Spot replica count differences between environments
  • Identify missing environment variables or config changes
  • Compare resource limits and requests

"Why is production running v2.3.1 but staging is on v2.4.0?" — Answer that in seconds, not minutes.

Web-Based Access for Everyone

  • Developers — Check deployment status without cluster credentials
  • QA Teams — Verify which version is deployed where
  • On-Call Engineers — Quick cluster overview from your phone at 2am
  • Product Managers — See what's actually running in production

No kubectl. No kubeconfig. No VPN gymnastics. Just open a URL.

Team Authentication Built-In

  • Sign in with Google or GitHub
  • Admin controls who can access which clusters
  • Restrict signups to your company domain (@yourcompany.com)
  • Invite-only mode for tight access control

Real-Time Cluster Monitoring

  • Live CPU and memory metrics for nodes and pods
  • Pod status and restart counts
  • Service and ingress overview
  • Namespace-organized resource views

Self-Hosted & Secure

  • Runs in your infrastructure — your data never leaves your network
  • Single binary with embedded database — no external dependencies
  • Mount your existing kubeconfig — works with any cluster you can already access
  • Lightweight enough to run on a Raspberry Pi, powerful enough for enterprise

Quick Start

Docker (Recommended)

# Single-user mode (no authentication)
docker run -d \
  --name kubey \
  -p 8080:8080 \
  -v ~/.kube/config:/home/kubey/.kube/config:ro \
  -v kubey-data:/data \
  jboocodes/kubey:latest

Open http://localhost:8080 in your browser.

Docker with Team Authentication

docker run -d \
  --name kubey \
  -p 8080:8080 \
  -v ~/.kube/config:/home/kubey/.kube/config:ro \
  -v kubey-data:/data \
  -e AUTH_MODE=shared \
  -e JWT_SECRET=your-secure-secret-key \
  -e GITHUB_CLIENT_ID=your-github-client-id \
  -e GITHUB_CLIENT_SECRET=your-github-secret \
  jboocodes/kubey:latest
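JWT_SECRET should be a long, unpredictable string. One way to generate one, assuming openssl is available on your machine:

```shell
# Generate 32 random bytes, base64-encoded (a 44-character string)
openssl rand -base64 32
```

Pass the result via `-e JWT_SECRET=...` in the `docker run` command above.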

Installation

Docker Compose

Create a docker-compose.yml:

version: '3.8'

services:
  kubey:
    image: jboocodes/kubey:latest
    container_name: kubey
    ports:
      - "8080:8080"
    volumes:
      - ~/.kube/config:/home/kubey/.kube/config:ro
      - kubey-data:/data
    environment:
      - AUTH_MODE=single
      # For team mode, uncomment:
      # - AUTH_MODE=shared
      # - JWT_SECRET=your-secure-secret
      # - GITHUB_CLIENT_ID=xxx
      # - GITHUB_CLIENT_SECRET=xxx
    restart: unless-stopped

volumes:
  kubey-data:
Then start it:

docker-compose up -d

Kubernetes

Deploy Kubey to your cluster:

# kubey.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kubey
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubey
  namespace: kubey
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubey-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: kubey
    namespace: kubey
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kubey-data
  namespace: kubey
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubey
  namespace: kubey
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubey
  template:
    metadata:
      labels:
        app: kubey
    spec:
      serviceAccountName: kubey
      containers:
        - name: kubey
          image: jboocodes/kubey:latest
          ports:
            - containerPort: 8080
          env:
            - name: AUTH_MODE
              value: "single"
          volumeMounts:
            - name: data
              mountPath: /data
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: kubey-data
---
apiVersion: v1
kind: Service
metadata:
  name: kubey
  namespace: kubey
spec:
  selector:
    app: kubey
  ports:
    - port: 80
      targetPort: 8080
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubey
  namespace: kubey
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: kubey.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubey
                port:
                  number: 80
  tls:
    - hosts:
        - kubey.example.com
      secretName: kubey-tls
Apply the manifest:

kubectl apply -f kubey.yaml

Helm

# Install with default values (single-user mode)
helm install kubey oci://ghcr.io/justinbehncodes/charts/kubey \
  --version 0.1.0 -n kubey --create-namespace

# Install with team authentication
helm install kubey oci://ghcr.io/justinbehncodes/charts/kubey \
  --version 0.1.0 -n kubey --create-namespace \
  --set auth.mode=shared \
  --set auth.jwtSecret=your-secure-secret \
  --set auth.github.clientId=xxx \
  --set auth.github.clientSecret=xxx
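The --set flags can also live in a values file. A sketch of a values.yaml mirroring the keys from the flags above — the key names are inferred from those flags, so check the chart's bundled default values for the authoritative schema:

```yaml
# values.yaml -- placeholder values; keys mirror the --set flags above
auth:
  mode: shared
  jwtSecret: your-secure-secret
  github:
    clientId: xxx
    clientSecret: xxx
```

Install with `helm install kubey oci://ghcr.io/justinbehncodes/charts/kubey --version 0.1.0 -n kubey -f values.yaml`.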

Configuration

Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `AUTH_MODE` | `single` | Authentication mode: `single`, `shared`, or `hybrid` |
| `JWT_SECRET` | (generated) | Secret key for JWT tokens (required in production) |
| `PORT` | `8080` | Server port |
| `HOST` | `localhost` | Server host |
| `DATABASE_PATH` | `./data/kubey.db` | SQLite database path |
| `KUBECONFIG` | `~/.kube/config` | Path to kubeconfig file |
| `STATIC_PATH` | (embedded) | Path to static files (for Docker) |
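Putting the variables together, a sketch of a production-style configuration — every value here is a placeholder, and binding `HOST` to `0.0.0.0` (rather than the `localhost` default) is what lets a container's port mapping reach the server:

```shell
# Example production configuration -- all values are placeholders
export AUTH_MODE=shared
export JWT_SECRET=change-me-to-a-long-random-string
export PORT=8080
export HOST=0.0.0.0            # bind all interfaces so Docker port mapping works
export DATABASE_PATH=/data/kubey.db
export KUBECONFIG=/home/kubey/.kube/config
```

With Docker, pass each of these as a `-e` flag instead of exporting them.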

OAuth Configuration

| Variable | Description |
|----------|-------------|
| `GITHUB_CLIENT_ID` | GitHub OAuth App Client ID |
| `GITHUB_CLIENT_SECRET` | GitHub OAuth App Client Secret |
| `GOOGLE_CLIENT_ID` | Google OAuth Client ID |
| `GOOGLE_CLIENT_SECRET` | Google OAuth Client Secret |
| `OAUTH_CALLBACK_URL` | Base URL for OAuth callbacks (e.g., `https://kubey.example.com`) |
| `ADMIN_EMAIL` | Pre-define the admin user by email |

Admin Settings

| Variable | Description |
|----------|-------------|
| `ALLOWED_DOMAINS` | Comma-separated list of email domains allowed to sign up |
| `INVITE_ONLY` | When `true`, users must be pre-created by an admin |
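For example, to limit signups to a single company domain and require admin-created accounts (the domain is a placeholder; with Docker, pass these as `-e` flags):

```shell
# Restrict signups to one email domain and disable open registration
export ALLOWED_DOMAINS=yourcompany.com
export INVITE_ONLY=true
```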

Authentication Modes

Kubey supports three authentication modes:

Single User Mode (AUTH_MODE=single)

  • No authentication required
  • Perfect for personal use or local development
  • All clusters accessible to anyone with access to the dashboard

Shared Mode (AUTH_MODE=shared)

  • OAuth authentication required (GitHub/Google)
  • Admin manages all clusters centrally
  • First user to sign in becomes admin (or set ADMIN_EMAIL)
  • Clusters mounted at startup are shared with all users
  • Admin can restrict signups by domain or enable invite-only mode

Hybrid Mode (AUTH_MODE=hybrid)

  • OAuth authentication required
  • Admin-managed shared clusters + personal clusters
  • Users can add their own kubeconfigs
  • Combines centralized management with personal flexibility

Setting Up OAuth

GitHub OAuth

  1. Go to GitHub Settings > Developer settings > OAuth Apps > New OAuth App
  2. Set Homepage URL: https://kubey.example.com
  3. Set Authorization callback URL: https://kubey.example.com/auth/callback/github
  4. Copy Client ID and Client Secret

Google OAuth

  1. Go to Google Cloud Console > APIs & Services > Credentials
  2. Create OAuth 2.0 Client ID (Web application)
  3. Add Authorized redirect URI: https://kubey.example.com/auth/callback/google
  4. Copy Client ID and Client Secret

API Reference

Public Endpoints

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | `/health` | Health check |
| GET | `/api/mode` | Get authentication mode |

Protected Endpoints

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | `/api/me` | Get current user |
| GET | `/api/clusters` | List all clusters |
| GET | `/api/clusters/:id` | Get cluster details |
| GET | `/api/clusters/:id/nodes` | Get cluster nodes |
| GET | `/api/clusters/:id/pods` | Get all pods |
| GET | `/api/clusters/:id/services` | Get all services |
| GET | `/api/clusters/:id/deployments` | Get all deployments |
| GET | `/api/clusters/:id/namespaces` | Get namespaces with resources |
| GET | `/api/clusters/:id/pods/:ns/:name/logs` | Get pod logs |
| GET | `/api/compare/deployments` | Compare deployments across clusters |
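As a sketch, the compare endpoint can be called with curl. The `Authorization: Bearer` header and the `name`/`namespace` query parameters shown here are assumptions for illustration — verify them against your running instance:

```shell
# Placeholders: your Kubey URL and a JWT issued after signing in
KUBEY_URL="https://kubey.example.com"
TOKEN="<your-jwt>"

# Hypothetical query parameters for /api/compare/deployments
REQUEST="$KUBEY_URL/api/compare/deployments?name=api-server&namespace=default"

curl -s --max-time 10 -H "Authorization: Bearer $TOKEN" "$REQUEST" \
  || echo "request failed; check the URL and token"
```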

Admin Endpoints

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | `/api/admin/users` | List all users |
| POST | `/api/admin/users` | Create user |
| PATCH | `/api/admin/users/:id` | Update user |
| DELETE | `/api/admin/users/:id` | Delete user |
| GET | `/api/admin/clusters` | List all clusters |
| PATCH | `/api/admin/clusters/:id` | Update cluster |
| DELETE | `/api/admin/clusters/:id` | Delete cluster |
| GET | `/api/admin/settings` | Get settings |
| PATCH | `/api/admin/settings` | Update settings |

Development

Prerequisites

  • Go 1.24+
  • Node.js 20+
  • npm or bun

Clone and Setup

git clone https://github.com/justinbehncodes/kubey.git
cd kubey

Frontend Development

cd web
npm install
npm run dev      # http://localhost:5173

Backend Development

cd api
go mod download
go run cmd/api/main.go

Run Both Concurrently

make dev

Build Docker Image

docker build -t kubey:local .

Run Tests

# Backend tests
cd api && go test ./...

# With gotestsum for cleaner output
make test

Architecture

kubey/
├── api/                    # Go backend
│   ├── cmd/api/            # Entry point
│   ├── internal/
│   │   ├── auth/           # Authentication (JWT, OAuth)
│   │   ├── database/       # SQLite + migrations
│   │   ├── handlers/       # HTTP handlers
│   │   ├── middlewares/    # Logging, CORS, recovery
│   │   └── services/       # Kubernetes client
│   └── go.mod
├── web/                    # React frontend
│   ├── src/
│   │   ├── components/     # UI components
│   │   ├── pages/          # Page components
│   │   ├── services/       # API clients
│   │   └── contexts/       # React contexts
│   └── package.json
├── Dockerfile              # Multi-stage build
└── docker-compose.yml

Tech Stack

Backend:

  • Go with Gin web framework
  • Pure-Go SQLite (no CGO required)
  • client-go for Kubernetes API
  • JWT authentication

Frontend:

  • React 19 with TypeScript
  • Vite for build tooling
  • Tailwind CSS v4
  • shadcn/ui components

Troubleshooting

Cannot connect to clusters

  1. Ensure your kubeconfig is mounted correctly
  2. Check cluster connectivity: kubectl cluster-info
  3. Verify RBAC permissions for the service account

OAuth not working

  1. Verify callback URLs match exactly
  2. Check client ID/secret are correct
  3. Ensure OAUTH_CALLBACK_URL is set correctly

Database errors

  1. Check /data volume is writable
  2. Ensure sufficient disk space
  3. As a last resort, reset by deleting the database (this erases all users, clusters, and settings): rm /data/kubey.db

Contributing

Contributions are welcome! Please read our contributing guidelines before submitting PRs.

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Commit your changes: git commit -m 'Add amazing feature'
  4. Push to the branch: git push origin feature/amazing-feature
  5. Open a Pull Request

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.


Support