Homelab inventory service that collects infrastructure data from multiple sources and provides a unified view through a web UI, JSON API, and MCP server.
Homelib aggregates hosts, services, networks, and capacity data from Proxmox, Tailscale, Hetzner Cloud, Komodo, and UniFi into a single SQLite database. It runs as a Tailscale node via tsnet — no reverse proxy or auth layer needed.
Note: This is a personal side project built for my own homelab. It's published as-is and may not see regular updates or active maintenance.
- Multi-source collection — Proxmox (SSH), Tailscale (Local + Control Plane API), Hetzner Cloud, Komodo, UniFi
- MCP server — 13 tools for AI/agent integration via Model Context Protocol, usable from Claude, Claude Code, or any MCP client
- Web UI — Dashboard, host browser, services, networks, Tailscale ACL viewer, capacity planner
- JSON API — Full REST API with filtering and search
- Host merging — Deduplicates and merges hosts discovered from multiple sources
- Cross-referencing — Validates zone assignments across sources, generates findings for mismatches
- Capacity planning — Per-node CPU/memory allocation tracking with free capacity and zone aggregates
- Role enrichment — Tag hosts with infrastructure roles and application categories via config
- Plugin system — Extend collection with custom scripts (local or SSH) that output JSON
- Scheduled collection — Cron-based with configurable retention
- Graceful degradation — Failed collectors don't block others
| Hosts | Capacity | Tailscale ACLs |
|---|---|---|
| ![]() | ![]() | ![]() |
Homelib exposes a Model Context Protocol server at /mcp (Streamable HTTP). This lets AI assistants query your infrastructure directly — ask about capacity, look up hosts, search across your inventory, or trigger collections.
| Tool | Description |
|---|---|
| `list_hosts` | List hosts with filtering |
| `get_host` | Host details by name |
| `list_services` | Docker services, filter by host/stack |
| `list_networks` | UniFi networks/VLANs |
| `get_acl_policy` | Tailscale ACL policy |
| `get_dns_config` | Tailscale DNS configuration |
| `get_routes` | Tailscale subnet routes and exit nodes |
| `list_findings` | Infrastructure findings by source/severity |
| `get_summary` | High-level inventory stats |
| `search_inventory` | Free-text search across all data |
| `get_collection_status` | Current/latest collection status |
| `trigger_collection` | Start a new collection |
| `get_capacity` | Capacity report by node/zone |
To use homelib's MCP server with Claude.ai or other remote MCP clients, see tsmcp — a gateway that exposes tsnet-based MCP servers (like homelib) over the internet with OAuth authentication.
For Claude Code or other local MCP clients on your Tailscale network, point them directly at https://<hostname>.your-tailnet.ts.net/mcp.
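For example, a project-scoped Claude Code entry in `.mcp.json` along these lines should work (the `http` transport type matches Streamable HTTP; the hostname is a placeholder for your tailnet node):

```json
{
  "mcpServers": {
    "homelib": {
      "type": "http",
      "url": "https://homelib.your-tailnet.ts.net/mcp"
    }
  }
}
```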
Homelib joins your Tailscale network as its own node via tsnet. On first start, it needs a Tailscale auth key to register. After that, tsnet persists its state and reconnects automatically.
Generate a reusable auth key in the Tailscale admin console. Set it as ts_auth_key in your config or as the HOMELIB_TS_AUTH_KEY environment variable.
```shell
cp config.example.yaml config.yaml
```

Edit `config.yaml` — enable the collectors you need and configure secrets for their APIs. See Configuration for details.
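A minimal starting point might look like the sketch below. Only `ts_auth_key` and the hostname are documented in this README; treat `config.example.yaml` as authoritative for everything else.

```yaml
hostname: homelib              # becomes <hostname>.your-tailnet.ts.net
ts_auth_key: tskey-auth-xxxxx  # or set HOMELIB_TS_AUTH_KEY instead
```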
```yaml
services:
  homelib:
    image: meltforce/homelib:latest
    restart: unless-stopped
    volumes:
      - ./data:/data
    environment:
      - HOMELIB_TS_AUTH_KEY=tskey-auth-xxxxx  # only needed on first start
```

Place your `config.yaml` in `./data/config.yaml`, then:

```shell
docker compose up -d
```

Once homelib has registered with Tailscale, you can remove the `HOMELIB_TS_AUTH_KEY` variable.
```shell
docker run -d \
  --name homelib \
  --restart unless-stopped \
  -v $(pwd)/data:/data \
  -e HOMELIB_TS_AUTH_KEY=tskey-auth-xxxxx \
  meltforce/homelib:latest
```

```shell
go build -o homelib .

# Development (localhost:8080, no Tailscale)
./homelib --local --config config.yaml

# Production (tsnet)
export HOMELIB_TS_AUTH_KEY=tskey-auth-xxxxx
./homelib --config config.yaml
```

Once running, homelib is available at `https://<hostname>.your-tailnet.ts.net` (the hostname from your config, default `homelib`). No port forwarding or reverse proxy needed — access is controlled by your Tailscale ACLs.
Copy config.example.yaml and edit to match your environment. All secrets are resolved through a chain:
1. Environment variable `HOMELIB_<UPPER_KEY>` (e.g. `HOMELIB_HETZNER_API_TOKEN`)
2. Setec secret store (if `secret_backend.type: setec`)
3. 1Password CLI `op://` references (if the value starts with `op://`)
4. Literal value
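As an illustration, the same chain applied to a few values. The section and key names below are assumptions for the sketch, not homelib's actual schema:

```yaml
hetzner:
  api_token: ""                     # overridden by HOMELIB_HETZNER_API_TOKEN if set
komodo:
  api_key: op://Homelab/komodo/key  # resolved via the 1Password CLI
unifi:
  password: hunter2                 # no match earlier in the chain, used literally
```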
| Collector | Source | Data Collected |
|---|---|---|
| Tailscale | tsnet Local API + Control Plane API | Devices, online status, ACLs, DNS config, subnet routes |
| Proxmox | SSH via Tailscale | Nodes, VMs, LXC containers, CPU/memory/disk, status |
| Hetzner | Cloud API | Servers, specs, pricing, firewalls |
| Komodo | API | Docker stacks, containers, images |
| UniFi | Controller API | VLANs, subnets, DHCP, network devices |
The Tailscale collector is always active — homelib runs on Tailscale via tsnet and uses the Local API as its primary data source. The other collectors can be independently enabled/disabled. All collectors run concurrently.
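In config terms, that per-collector on/off switch presumably looks something like this sketch (collector key names are assumptions; `config.example.yaml` is authoritative):

```yaml
collectors:
  proxmox:
    enabled: true
  hetzner:
    enabled: true
  unifi:
    enabled: false   # skipped entirely; the other collectors still run
```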
Enrich discovered hosts with application and category metadata via the roles section. Application and category are displayed in the hosts table, capacity page, and host detail view.
```yaml
roles:
  application_categories:
    jellyfin: media-server
    immich: photos
    vaultwarden: security
  guest_overrides:
    my-vm: homeassistant   # when hostname != application name
  tailscale_devices:
    my-nas:
      role: fileserver
      application: truenas
  proxmox_nodes:
    my-node:
      infrastructure_role: hypervisor
      workload_specialization: general
```

Extend homelib with scripts that return JSON. Plugins run locally or via SSH and can contribute hosts, findings, and metrics.
```yaml
plugins:
  - name: my-plugin
    enabled: true
    type: ssh            # or "local"
    host: my-server
    user: root
    command: /usr/local/bin/my-script --json
    timeout: 30s
    schedule: default
```

Plugin output schema:
```json
{
  "plugin": "my-plugin",
  "version": "1.0",
  "hosts": [
    { "name": "host1", "host_type": "vm", "details": {} }
  ],
  "metrics": { "metric_name": "value" },
  "findings": [
    { "severity": "warning", "host_name": "host1", "message": "High memory usage" }
  ]
}
```

All endpoints are under `/api/v1/`. Responses are JSON.
| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/v1/hosts` | List hosts (filters: `source`, `zone`, `status`, `type`, `q`) |
| GET | `/api/v1/hosts/{name}` | Host details with associated services |
| GET | `/api/v1/services` | List services (filters: `host`, `stack`) |
| GET | `/api/v1/networks` | List networks/VLANs |
| GET | `/api/v1/findings` | List findings (filters: `source`, `severity`) |
| GET | `/api/v1/summary` | Inventory statistics |
| GET | `/api/v1/capacity` | Capacity planning report |
| GET | `/api/v1/collections` | Collection run history |
| POST | `/api/v1/collections/trigger` | Trigger a collection run |
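For example, combining several filters on the hosts endpoint. The hostname and filter values below are placeholders, not output from a live instance:

```shell
# Build a filtered hosts query; replace the hostname with your tailnet node.
base="https://homelib.your-tailnet.ts.net/api/v1"
url="$base/hosts?zone=lab&status=online&q=proxmox"
echo "$url"
# From a machine on your tailnet, fetch it with:
#   curl -s "$url"
```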
```
config.yaml
     |
     v
main.go
 |-- Store (SQLite, WAL mode)
 |-- Orchestrator
 |     |-- Collectors (Proxmox, Tailscale, Hetzner, Komodo, UniFi)
 |     |-- Plugins (custom scripts)
 |     |-- Merge (deduplicate hosts across sources)
 |     |-- Crossref (validate zones, enrich roles)
 |     '-- Persist results
 |-- Scheduler (cron)
 |-- HTTP Server
 |     |-- Web UI (embedded templates)
 |     '-- JSON API
 '-- MCP Server (/mcp)
```
```
main.go          Entry point, tsnet, HTTP server, scheduler
internal/
  config/        YAML config, secret resolution
  model/         Data types (Host, Service, Network, Finding, etc.)
  collector/     Collector interface + implementations
  crossref/      Zone validation, role enrichment
  store/         SQLite persistence (WAL mode)
  capacity/      Capacity planning calculations
  scheduler/     Cron scheduling
  server/        HTTP handlers (Web UI + JSON API)
  mcp/           MCP server (Streamable HTTP)
web/             Embedded templates + static assets
```