See your infrastructure. Zero Config.
Point graph-go at your stack and get a live, interactive map of every database, table, service, and storage bucket — with real-time health monitoring.
graph-go auto-discovers your infrastructure by connecting to the Docker daemon, inspecting running containers, and probing databases and storage services. No manual inventory needed — it builds the graph for you.
| Capability | Details |
|---|---|
| Auto-discovery | Detects infrastructure from Docker containers and Kubernetes clusters — no manual inventory needed |
| Kubernetes | Namespaces, Deployments, StatefulSets, DaemonSets, Pods, Services — with informer-based real-time watching |
| Docker | Classifies running containers, extracts credentials, watches Docker events for live topology changes |
| PostgreSQL | Tables, foreign key relationships, schema topology |
| MongoDB | Databases and collections |
| MySQL | Tables, foreign key relationships |
| Redis | Keyspaces and key distribution |
| Elasticsearch | Indices, cluster health, shard status |
| S3 / MinIO | Buckets and top-level prefixes |
| HTTP services | Health endpoints, dependency mapping between services |
| Real-time health | WebSocket-powered live status updates every 5 seconds |
| Interactive graph | Swimlane layout, namespace group containers, pan/zoom, filter by type/health, search nodes |
Boots a seeded stack (Postgres, Mongo, MinIO, mock services) so the graph populates immediately:

```sh
git clone https://github.com/guilherme-grimm/graph-go.git
cd graph-go
make docker-up
```

Open http://localhost:8080 — single URL, single port. Stop with `make docker-down`.
One container, one port. Mount the Docker socket read-only and graph-go auto-discovers everything running on the host:

```sh
docker run -d -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  ghcr.io/guilherme-grimm/graph-go:latest
```

graph-go only reads from the Docker socket. The `:ro` flag enforces this — keep it.

Open http://localhost:8080. Auto-discovery handles Docker containers and (when a kubeconfig or in-cluster service account is present) Kubernetes resources without any config file.
For services that live outside Docker/Kubernetes (remote databases, managed cloud services), mount a config file — see Configuration.
Single self-contained binary — the UI is embedded.

```sh
# Linux amd64
curl -sL https://github.com/guilherme-grimm/graph-go/releases/latest/download/graph-go_linux_amd64.tar.gz | tar xz
./graph-go
```

Open http://localhost:8080. Other platforms are on the Releases page.
| Port | Purpose |
|---|---|
| 8080 | graph-go (UI + API + WebSocket — production) |
| 5173 | Vite dev server (development only — see CONTRIBUTING.md) |
| 9001 | MinIO console (demo stack only) |
Auto-discovery is the default path. Mount the Docker socket and/or run inside a Kubernetes cluster — graph-go discovers your infrastructure with no config file needed.
Use the YAML config (`conf/config.yaml`) only as an escape hatch for services that aren't reachable via discovery — remote databases, managed cloud services, external endpoints. See `conf/config.sample.yaml` for the full schema and examples for every adapter.
To use a config file with the Docker run above:

```sh
docker run -d -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v $(pwd)/conf/config.yaml:/app/conf/config.yaml:ro \
  ghcr.io/guilherme-grimm/graph-go:latest
```

Authorized use only: graph-go is for visualizing infrastructure you own or have permission to access. Do not point it at systems without authorization.
```
                           ┌─────────────────────────────────────┐
                           │        Discoverer Interface         │
                           │   Discover() · Watch() · Close()    │
                           └──────────┬───────────┬──────────────┘
                                      │           │
                           ┌──────────▼──┐   ┌────▼──────────────┐
                           │   Docker    │   │   Kubernetes      │
                           │ Discoverer  │   │   Discoverer      │
                           │ (containers,│   │ (informers, pods, │
                           │  classify,  │   │  deployments,     │
                           │  events)    │   │  services, health)│
                           └──────┬──────┘   └────┬──────────────┘
                                  │               │
                           ┌──────▼───────────────▼──────┐
                           │ Parallel Discovery + Merge  │
                           │  (concatenate ServiceInfo)  │
                           └──────────────┬──────────────┘
                                          │
Config (YAML) ──→ YAML Merge ────────────▶│
                                          ▼
                           ┌─────────────────────────────┐
                           │       Adapter Registry      │
                           │ ├─ PostgreSQL → Tables + FK │
                           │ ├─ MongoDB → Collections    │
                           │ ├─ MySQL → Tables + FK      │
                           │ ├─ Redis → Keyspaces        │
                           │ ├─ Elasticsearch → Indices  │
                           │ ├─ S3 → Buckets             │
                           │ └─ HTTP → Health + deps     │
                           │                             │
                           │ + Topology (K8s nodes/edges)│
                           └──────────────┬──────────────┘
                                          ▼
                             Graph Model (Nodes + Edges)
                                          ▼
                          REST API + WebSocket (Real-time)
```
Key Components:
- Discoverer Interface: Uniform contract (`Discover`, `Watch`, `Close`) for all discovery backends — Docker and Kubernetes run in parallel and their results are concatenated
- Docker Discovery: Inspects containers, classifies images, extracts credentials from env vars, watches Docker events for live topology changes
- Kubernetes Discovery: Uses client-go informers with debounced event handling; discovers Namespaces, Deployments, StatefulSets, DaemonSets, Pods, and Services with health mapping
- Adapters: Implement the `Adapter` interface to probe databases and storage services
- Registry: Manages adapters and topology sets, creates service-level parent nodes, aggregates graph data
- Cache: 30-second TTL with a singleflight pattern to prevent a thundering herd
- WebSocket: Streams health updates every 5 seconds
- Swimlane Layout: Namespace-aware layout with zone classification (system, infra, and application namespaces)
- Group Containers: K8s namespaces render as collapsible bounding boxes via React Flow grouping
- Node Inspector: Side panel showing detailed metadata and connections
- WebSocket Hook: Real-time health updates without polling
Adapter-discovered:

```
Service Node (postgres/mongodb/s3)
└─ Database/Bucket Node
   └─ Table/Collection/Prefix Node
```

Kubernetes-discovered:

```
Namespace (group container)
├─ Deployment / StatefulSet / DaemonSet
│  └─ Pod
└─ K8sService ──routes_to──→ Pod
```

Edges represent relationships (contains, foreign_key, routes_to, etc.).
Backend:
- Go 1.25.6
- gorilla/mux (HTTP routing)
- k8s.io/client-go (Kubernetes discovery + informers)
- pgxpool (PostgreSQL)
- mongo-driver v2 (MongoDB)
- go-sql-driver/mysql (MySQL)
- go-redis/v9 (Redis)
- go-elasticsearch/v8 (Elasticsearch)
- AWS SDK v2 (S3)
- coder/websocket (WebSocket)
- testcontainers-go (integration tests)
Frontend:
- TypeScript
- React 18
- React Flow (graph visualization)
- Vite (build tool)
Infrastructure:
- Docker + Docker Compose
- PostgreSQL 17
- MongoDB 7
- MySQL 8
- Redis 7
- Elasticsearch 8
- MinIO (S3-compatible)
```sh
cd binary && go test ./...
```

Runs without Docker. Includes pure function tests and HTTP handler tests.
```sh
cd binary && go test -tags=integration -v -timeout=5m ./internal/adapters/...
```

Requires Docker. Uses testcontainers-go to spin up real database instances (PostgreSQL, MongoDB, MySQL, Redis, Elasticsearch, MinIO) — no mocks.
Every adapter runs through the contract test suite (`adaptertest.RunContractTests`), which validates:
- Connect/disconnect lifecycle
- Node/edge discovery (unique IDs, valid parent refs, correct types)
- Health metrics (status key, required keys)
Run a single adapter's tests:

```sh
cd binary && go test -tags=integration -v ./internal/adapters/redis/
```

Or run everything:

```sh
make test                                                                  # unit + type-check
cd binary && go test -tags=integration -timeout=5m ./internal/adapters/... # integration
```

Returns the full infrastructure graph (nodes + edges).
Response:

```json
{
  "data": {
    "nodes": [
      {
        "id": "service-postgres",
        "type": "postgres",
        "name": "postgres",
        "metadata": { "adapter": "postgres" },
        "health": "healthy"
      }
    ],
    "edges": [
      {
        "id": "edge-1",
        "source": "service-postgres",
        "target": "pg-mydb",
        "type": "contains",
        "label": "contains"
      }
    ]
  }
}
```

Returns details for a specific node.
Returns adapter health status (ok/degraded/error).
Streams real-time health updates.
Message format:

```json
{
  "type": "health_update",
  "nodeId": "postgres",
  "status": "healthy",
  "timestamp": "2026-02-09T10:30:00Z"
}
```

- Create the adapter package in `binary/internal/adapters/{name}/`
- Implement the `Adapter` interface:

```go
type Adapter interface {
	Connect(config ConnectionConfig) error
	Discover() ([]nodes.Node, []edges.Edge, error)
	Health() (HealthMetrics, error)
	Close() error
}
```
- Self-register via `init()` with `adapters.RegisterFactory("name", ...)`
- Add integration tests (required) — create `{name}_integration_test.go` with:
  - Build tag `//go:build integration`
  - `TestMain` using testcontainers-go to start a real instance
  - Seed representative data
  - A call to `adaptertest.RunContractTests` to validate the interface contract
  - Adapter-specific tests (filtering, ID format, metadata, etc.)
- Import the adapter in `binary/internal/server/server.go` (blank import for `init()`)
- Add the node type in `binary/internal/graph/nodes/nodes.go`
- Update frontend types in `webui/src/types/graph.ts`
- Add an icon in `webui/src/components/graph/CustomNode.tsx`
Discoverers live in `binary/internal/discovery/{name}/` and implement the `Discoverer` interface:

```go
type Discoverer interface {
	Name() string
	Discover(ctx context.Context) ([]ServiceInfo, error)
	Watch(ctx context.Context, onChange func()) error
	Close() error
}
```

- Create the discoverer package in `binary/internal/discovery/{name}/`
- Implement the `Discoverer` interface — return `[]ServiceInfo` from `Discover()`. Topology-producing discoverers (like K8s) populate `Nodes`/`Edges` directly; adapter-oriented ones (like Docker) populate `Config` for adapter bridging.
- Wire it into the server in `binary/internal/server/server.go` — add a `build{Name}Discovery()` function and call it alongside the existing discoverers.
- Add integration tests with `//go:build integration` — use real infrastructure (kind/k3d for K8s, testcontainers for others). No mocks.
See CONTRIBUTING.md for detailed guidance.
We welcome contributions! See CONTRIBUTING.md for guidelines on:
- Development setup
- Code style conventions
- How to add new adapters
- Submitting pull requests
Intended Use:
- Visualizing and monitoring infrastructure you own or have authorization to access
- DevOps dashboards and topology mapping
- Infrastructure documentation and onboarding
- Exploring database schemas and relationships
Not Intended For:
- Unauthorized system scanning or reconnaissance
- Security testing without explicit permission
- Accessing systems you don't own or control
Users are responsible for ensuring they have proper authorization before connecting graph-go to any infrastructure.
This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0).
See the LICENSE file for details. The AGPL requires that the source of modified versions be made available to users who interact with them over a network.
The project uses GitHub Actions for continuous integration and automated releases.
- CI runs on every push/PR to `main` — backend unit tests, integration tests (testcontainers), and frontend build
- Releases are triggered by version tags (`v*`) and produce:
  - Cross-platform binaries (Linux, macOS, Windows) via GoReleaser
  - A single Docker image pushed to `ghcr.io/guilherme-grimm/graph-go`

To create a release:

```sh
git tag v0.1.0
git push --tags
```

- Docker auto-discovery
- HTTP service health monitoring
- MySQL adapter
- Redis adapter
- Elasticsearch adapter
- Integration tests with testcontainers-go (all adapters)
- Contract test suite for adapter interface compliance
- Discoverer interface (pluggable discovery backends)
- Kubernetes orchestrator (Namespaces, Deployments, StatefulSets, DaemonSets, Pods, Services)
- Informer-based real-time K8s watching with debounce
- Swimlane layout with namespace group containers
- K8s adapter bridging (classify pods by image, connect adapters to databases in pods)
- Flow observability (real-time data flow visualization)
- Integrated stress trigger (k6 with real-time impact visualization)
- Kafka adapter
- Additional orchestrators (ECS, Nomad)
- Graph persistence (save/load views)
- Multi-region visualization
- Alert configuration per node
- Issues: github.com/guilherme-grimm/graph-go/issues
- Discussions: github.com/guilherme-grimm/graph-go/discussions
Built with ❤️ for DevOps and infrastructure engineers
