Open source enterprise API catalog platform for effortless API discovery and governance
In enterprises with 200+ internal APIs, developers waste days or weeks searching for the right endpoint. Teams unknowingly build duplicate services because they can't discover what already exists. Breaking changes surprise consumers. API discovery is broken.
Perrache solves the API discovery crisis through automated ingestion, semantic search, and intelligent governance - all in a single platform that requires zero manual effort.
- Automatic CI/CD Ingestion: Webhook endpoint receives OpenAPI specs from any CI/CD pipeline - one line of config, zero maintenance
- Semantic Discovery: Search by concept, not keywords. Find "user profile data" across `userEmail`, `contactEmail`, `primaryEmail`
- Breaking Change Detection: Automatic spec diffing classifies changes (breaking/non-breaking) and notifies affected consumers
- Dependency Tracking: Track who consumes which endpoints at a granular level - know the impact before you deploy
- Landscape Visualization: Visual clustering reveals duplicate APIs and domain overlap across your entire API ecosystem
- Risk-Based Governance: Visibility-driven approach without blocking deployments
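The semantic-discovery idea above can be sketched with a toy cosine-similarity ranking. The tiny hand-written vectors and catalog below are purely illustrative stand-ins for real model-generated embeddings, not Perrache's actual data model:

```typescript
// Toy illustration of embedding-based endpoint discovery.
// Real systems use a learned embedding model; these vectors are made up.
type Endpoint = { path: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Hypothetical catalog entries with precomputed embeddings.
const catalog: Endpoint[] = [
  { path: "/users/{id}/email", embedding: [0.9, 0.1, 0.0] },
  { path: "/orders/{id}/total", embedding: [0.0, 0.2, 0.9] },
];

// A query like "user profile data" would be embedded the same way.
const queryEmbedding = [0.85, 0.15, 0.05];

const ranked = [...catalog].sort(
  (a, b) =>
    cosine(queryEmbedding, b.embedding) - cosine(queryEmbedding, a.embedding)
);
console.log(ranked[0].path); // the most semantically similar endpoint
```

In production this ranking is done in the database via the pgvector extension rather than in application code.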
- Teams add one line to CI/CD: POST the OpenAPI spec to the Perrache webhook on deployment
- Perrache generates embeddings: routes, schemas, and metadata become semantically searchable
- Developers search by intent: "customer contact info" finds relevant endpoints across all services
- Breaking changes auto-detected: Spec comparison runs on every upload, consumers notified automatically
- Impact analysis: See exactly which services are affected by API changes
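The breaking-change step can be illustrated with a minimal diff over two OpenAPI path maps. This is a deliberate simplification: real spec diffing also compares parameters, request/response schemas, and status codes, and the function below is illustrative rather than Perrache's actual diff engine:

```typescript
// Minimal sketch: a removed path/operation is breaking for consumers,
// an added one is non-breaking. Real diffing inspects much more.
type Paths = Record<string, string[]>; // path -> HTTP methods

function diffSpecs(oldPaths: Paths, newPaths: Paths) {
  const breaking: string[] = [];
  const nonBreaking: string[] = [];
  for (const [path, methods] of Object.entries(oldPaths)) {
    for (const m of methods) {
      if (!newPaths[path]?.includes(m)) breaking.push(`${m} ${path} removed`);
    }
  }
  for (const [path, methods] of Object.entries(newPaths)) {
    for (const m of methods) {
      if (!oldPaths[path]?.includes(m)) nonBreaking.push(`${m} ${path} added`);
    }
  }
  return { breaking, nonBreaking };
}

const result = diffSpecs(
  { "/users": ["GET", "POST"], "/users/{id}": ["GET"] },
  { "/users": ["GET"], "/users/{id}": ["GET", "PATCH"] }
);
console.log(result.breaking); // ["POST /users removed"]
console.log(result.nonBreaking); // ["PATCH /users/{id} added"]
```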
Existing tools fall short:
- Swagger UI: Lives in each repo, no centralized discovery
- Postman: Manual collection maintenance, no semantic search
- Backstage: Requires manual YAML catalog entries teams won't maintain
- Kong/3scale: Gateway-dependent, runtime overhead, keyword search only
Perrache is different:
- Zero manual effort (webhook-first automation)
- Platform-agnostic (works with any CI/CD)
- Zero runtime overhead (catalog-only, no gateway)
- Semantic intelligence (embeddings-based discovery)
- Open source (no vendor lock-in)
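The webhook-first model means ingestion reduces to accepting, validating, and cataloging whatever the pipeline posts. A sketch of that validation step, with hypothetical function and field names (the real endpoint does considerably more: persistence, embedding generation, spec diffing, consumer notification):

```typescript
// Hypothetical sketch of the validation a webhook ingestion endpoint
// might run before accepting a spec. Names are illustrative.
type IngestResult = { status: number; body: Record<string, unknown> };

function ingestSpec(payload: unknown): IngestResult {
  const spec = payload as {
    openapi?: string;
    info?: { title?: string; version?: string };
  };
  if (!spec?.openapi || !spec.info?.title || !spec.info.version) {
    return { status: 400, body: { error: "Not a valid OpenAPI document" } };
  }
  // A real catalog would now persist the spec, generate embeddings,
  // diff against the previous version, and notify subscribed consumers.
  return {
    status: 202,
    body: { accepted: `${spec.info.title}@${spec.info.version}` },
  };
}

const ok = ingestSpec({
  openapi: "3.1.0",
  info: { title: "orders-api", version: "1.2.0" },
});
console.log(ok); // status 202, accepted "orders-api@1.2.0"
```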
- Node.js: 20+ LTS (use nvm or fnm)
- pnpm: 8+ (`npm install -g pnpm`)
- PostgreSQL: 15+ with pgvector extension (for local development)
```bash
# Clone the repository
git clone https://github.com/brainrepo/perrache.git
cd perrache

# Use correct Node.js version
nvm use # or fnm use

# Install dependencies
pnpm install

# Start PostgreSQL database with pgvector
docker compose up -d

# Set up environment variables
cp .env.example .env
# Edit .env with your configuration

# Run database migrations
pnpm db:migrate

# Start development servers (API + Web)
pnpm dev
```

- `pnpm dev` - Start all apps in development mode
- `pnpm build` - Build all apps for production
- `pnpm test` - Run tests across all workspaces
- `pnpm lint` - Lint all code
- `pnpm format` - Format code with Prettier
- `pnpm db:migrate` - Apply database migrations
- `docker compose up -d` - Start PostgreSQL with pgvector in background
- `docker compose down` - Stop database container
- `docker compose logs -f postgres` - View database logs
- `docker compose ps` - Check container status
```
perrache/
├── apps/
│   ├── api/          # Fastify backend (port 3001)
│   └── web/          # Next.js frontend (port 3000)
├── packages/
│   ├── types/        # Shared TypeScript types
│   └── config/       # Shared ESLint/Prettier config
├── docs/             # Project documentation
└── bmad/             # BMad Method workflows
```
- Frontend: http://localhost:3000
- API: http://localhost:3001
- API Health: http://localhost:3001/health
- API Metrics: http://localhost:3001/metrics
- API Docs: http://localhost:3001/docs
The API includes production-grade observability features:
Structured Logging (Pino)
```bash
# Development mode automatically uses pino-pretty for readable logs
pnpm --filter @perrache/api dev

# Production logs are JSON format, written to stdout
# Log levels configurable via LOG_LEVEL environment variable
# Available levels: trace, debug, info, warn, error, fatal
LOG_LEVEL=debug pnpm --filter @perrache/api dev
```

Log output includes:
- Timestamp (ISO 8601)
- Correlation ID (`reqId`)
- HTTP method, URL, status code
- Response time in milliseconds
- Automatic redaction of sensitive data (auth headers, passwords, API keys)
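Pino supports this kind of redaction natively via its `redact` option. A sketch of a matching logger setup, noting that the exact redaction paths used by Perrache may differ:

```typescript
import pino from "pino";

// Sketch of a Pino configuration matching the behavior described above.
// The redaction paths here are illustrative, not Perrache's exact list.
const logger = pino({
  level: process.env.LOG_LEVEL ?? "info",
  redact: {
    paths: ["req.headers.authorization", "password", "apiKey"],
    censor: "[REDACTED]",
  },
});

// A field named apiKey is logged as "[REDACTED]" instead of its value.
logger.info({ apiKey: "secret-123", route: "/health" }, "request handled");
```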
Prometheus Metrics
```bash
# Scrape metrics endpoint
curl http://localhost:3001/metrics

# Metrics include:
# - http_requests_total (counter with method/route/status_code labels)
# - http_request_duration_seconds (histogram with p50/p95/p99 buckets)
# - Default Node.js metrics (CPU, memory, event loop)
```

Example Prometheus scrape config:
```yaml
scrape_configs:
  - job_name: 'perrache-api'
    static_configs:
      - targets: ['localhost:3001']
    metrics_path: '/metrics'
```

Health Monitoring
```bash
# Health check endpoint
curl http://localhost:3001/health

# Returns:
# {
#   "status": "healthy",
#   "timestamp": "2025-11-15T12:00:00.000Z",
#   "uptime": 3600,
#   "services": { "database": "healthy" },
#   "version": "0.1.0"
# }
```

Request Correlation
All responses include an `X-Request-ID` header for distributed tracing:

```bash
curl -I http://localhost:3001/health
# X-Request-ID: req_1731657600000_abc123xyz
```

Use this ID to correlate logs across services and debug request flows.
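When one service calls another, forwarding the inbound correlation ID keeps the log trail joined across services. A sketch of that propagation, where the helper name and ID format are illustrative (the header name comes from the docs above):

```typescript
// Sketch: forward the inbound X-Request-ID on outbound calls so logs
// across services share one correlation ID. Helper name is illustrative.
function withCorrelation(
  inboundHeaders: Record<string, string>,
  outboundHeaders: Record<string, string> = {}
): Record<string, string> {
  const reqId =
    inboundHeaders["x-request-id"] ??
    // No inbound ID: mint one in a similar shape.
    `req_${Date.now()}_${Math.random().toString(36).slice(2, 8)}`;
  return { ...outboundHeaders, "x-request-id": reqId };
}

const headers = withCorrelation(
  { "x-request-id": "req_1731657600000_abc123xyz" },
  { accept: "application/json" }
);
console.log(headers["x-request-id"]); // "req_1731657600000_abc123xyz"
```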
```yaml
# GitHub Actions example
- name: Upload OpenAPI spec to Perrache
  run: |
    curl -X POST https://perrache.yourorg.com/api/v1/specs/openapi \
      -H "Authorization: Bearer ${{ secrets.PERRACHE_API_KEY }}" \
      -H "Content-Type: application/json" \
      -d @openapi.json
```

- CI/CD webhook integration for automatic spec ingestion
- Semantic search with embeddings-based relationship discovery
- Two-tier subscription model (person + endpoint subscribers)
- Automatic breaking change detection with impact analysis
- Risk-based governance (visibility without blocking)
- Environment tracking (dev/staging/prod spec versions)
- Change proposal workflow with consumer feedback
- Visual landscape clustering (HDBSCAN + UMAP) for duplication detection
- API design editor with semantic suggestions
- Per-endpoint Q&A knowledge base
Perrache is open source and welcomes contributions. More details coming soon.
GNU AFFERO GENERAL PUBLIC LICENSE
- GitHub Issues: Report bugs or request features
- Discussions: Join the conversation
Built with the conviction that API discovery should be effortless, not a weeks-long search.
