Kubernetes-native AI automation platform for intelligent agentic sessions with multi-agent collaboration
Note: This project was formerly known as "vTeam". While the project has been rebranded to Ambient Code Platform, the name "vTeam" still appears in various technical artifacts for backward compatibility (see Legacy vTeam References below).
The Ambient Code Platform is an AI automation platform that combines Claude Code CLI with multi-agent collaboration capabilities. The platform enables teams to create and manage intelligent agentic sessions through a modern web interface.
- Intelligent Agentic Sessions: AI-powered automation for analysis, research, content creation, and development tasks
- Multi-Agent Workflows: Specialized AI agents model realistic software team dynamics
- Kubernetes Native: Built with Custom Resources, Operators, and proper RBAC for enterprise deployment
- Real-time Monitoring: Live status updates and job execution tracking
- Amber Background Agent: Automated issue-to-PR workflows via GitHub Actions (quickstart)
Amber is a background agent that handles GitHub issues automatically:
- Auto-Fix: Create an issue with the `amber:auto-fix` label → Amber creates a PR with linting/formatting fixes
- Refactoring: Label an issue `amber:refactor` → Amber breaks up large files and extracts patterns
- Test Coverage: Use `amber:test-coverage` → Amber adds missing tests
The platform consists of containerized microservices orchestrated via Kubernetes:
| Component | Technology | Description |
|---|---|---|
| Frontend | NextJS + Shadcn | User interface for managing agentic sessions |
| Backend API | Go + Gin | REST API for managing Kubernetes Custom Resources (multi-tenant: projects, sessions, access control) |
| Agentic Operator | Go | Kubernetes operator that watches CRs and creates Jobs |
| Claude Code Runner | Python + Claude Code CLI | Pod that runs the Claude Code CLI with multi-agent collaboration capabilities |
- Create Session: User creates agentic session via web UI with task description
- API Processing: Backend creates an `AgenticSession` Custom Resource in Kubernetes (sketched below)
- Job Scheduling: Operator detects the CR and creates a Kubernetes Job with a runner pod
- AI Execution: Pod runs Claude Code CLI with multi-agent collaboration for intelligent analysis
- Result Storage: Analysis results stored back in Custom Resource status
- UI Updates: Frontend displays real-time progress and completed results
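For orientation, here is a minimal sketch of what an `AgenticSession` resource might look like. The kind and API group (`vteam.ambient-code`) come from this repository, but the API version and spec fields shown are illustrative assumptions; consult `components/manifests/crds/agenticsessions-crd.yaml` for the authoritative schema.

```yaml
# Hypothetical AgenticSession manifest; field names are illustrative, not the
# authoritative schema (see components/manifests/crds/agenticsessions-crd.yaml).
apiVersion: vteam.ambient-code/v1alpha1   # version is an assumption
kind: AgenticSession
metadata:
  name: security-review
  namespace: my-project
spec:
  prompt: "Review this codebase for security vulnerabilities and suggest improvements"
  model: claude-sonnet     # exact model identifiers may differ
  timeout: 300             # seconds (platform default)
  interactive: false       # set true for unlimited chat-based sessions
```

The operator watches resources like this and reflects progress and results back into `.status`, which the frontend polls for real-time updates.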
- OpenShift Local (CRC) for local development or OpenShift cluster for production
- oc (OpenShift CLI) or kubectl v1.28+ configured to access your cluster
- Docker or Podman for building container images
- Container registry access (Docker Hub, Quay.io, ECR, etc.) for production
- Go 1.24+ for building backend services (if building from source)
- Node.js 20+ and npm for the frontend (if building from source)
- Anthropic API Key - Get from Anthropic Console
- Configure via web UI: Settings β Runner Secrets after deployment
Deploy using the default images from quay.io/ambient_code:
```bash
# From repo root, prepare env for deploy script (required once)
cp components/manifests/env.example components/manifests/.env
# Edit .env and set at least ANTHROPIC_API_KEY

# Deploy to ambient-code namespace (default)
make deploy

# Or deploy to custom namespace
make deploy NAMESPACE=my-namespace
```

```bash
# Check pod status
oc get pods -n ambient-code

# Check services and routes
oc get services,routes -n ambient-code
```

```bash
# Get the route URL
oc get route frontend-route -n ambient-code

# Or use port forwarding as fallback
kubectl port-forward svc/frontend-service 3000:3000 -n ambient-code
```

- Access the web interface
- Navigate to Settings → Runner Secrets
- Add your Anthropic API key
- Access Web Interface: Navigate to your deployed route URL
- Create New Session:
- Prompt: Task description (e.g., "Review this codebase for security vulnerabilities and suggest improvements")
- Model: Choose AI model (Claude Sonnet/Haiku)
- Settings: Adjust temperature, token limits, timeout (default: 300s)
- Monitor Progress: View real-time status updates and execution logs
- Review Results: Download analysis results and structured output
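If you prefer the CLI, the same progress is visible on the Custom Resource itself. A hypothetical check (the plural resource name follows the CRD filename; the session name is illustrative):

```bash
# List sessions in your project namespace
oc get agenticsessions -n my-project

# Inspect a session's status, where results are stored
oc get agenticsession security-review -n my-project -o jsonpath='{.status}'
```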
- Code Analysis: Security reviews, code quality assessments, architecture analysis
- Technical Documentation: API documentation, user guides, technical specifications
- Project Planning: Feature specifications, implementation plans, task breakdowns
- Research & Analysis: Technology research, competitive analysis, requirement gathering
- Development Workflows: Code reviews, testing strategies, deployment planning
To build and deploy your own container images:
```bash
# Set your container registry
export REGISTRY="quay.io/your-username"

# Build all images
make build-all

# Push to registry (requires authentication)
make push-all REGISTRY=$REGISTRY

# Deploy with custom images
cd components/manifests
REGISTRY=$REGISTRY ./deploy.sh
```

```bash
# Use Podman instead of Docker
make build-all CONTAINER_ENGINE=podman

# Build for a specific platform (default is linux/amd64)
make build-all PLATFORM=linux/arm64

# Build with additional flags
make build-all BUILD_FLAGS="--no-cache --pull"
```

For cluster-based authentication and authorization, the deployment script can configure the Route host, create an OAuthClient, and set the frontend secret when provided a .env file. See docs/OPENSHIFT_OAUTH.md for details and a manual alternative.
The operator supports two modes for accessing Claude AI:
Use operator-config.yaml or operator-config-crc.yaml for standard deployments:
```bash
# Apply the standard config (Vertex AI disabled)
kubectl apply -f components/manifests/operator-config.yaml -n ambient-code
```

When to use:
- Standard cloud deployments without Google Cloud integration
- Local development with CRC/Minikube
- Any environment using direct Anthropic API access
Configuration: Sets `CLAUDE_CODE_USE_VERTEX=0`
Use operator-config-openshift.yaml for production OpenShift deployments with Vertex AI:
```bash
# Apply the Vertex AI config
kubectl apply -f components/manifests/operator-config-openshift.yaml -n ambient-code
```

When to use:
- Production deployments on Google Cloud
- Environments requiring Vertex AI integration
- Enterprise deployments with Google Cloud service accounts
Configuration: Sets `CLAUDE_CODE_USE_VERTEX=1` and configures:

- `CLOUD_ML_REGION`: Google Cloud region (default: "global")
- `ANTHROPIC_VERTEX_PROJECT_ID`: Your GCP project ID
- `GOOGLE_APPLICATION_CREDENTIALS`: Path to the service account key file
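Put together, the Vertex AI config might look roughly like the sketch below; the ConfigMap name and credential mount path are assumptions, so treat `components/manifests/operator-config-openshift.yaml` as the source of truth.

```yaml
# Hypothetical shape of the Vertex AI operator config; see
# components/manifests/operator-config-openshift.yaml for the real manifest.
apiVersion: v1
kind: ConfigMap
metadata:
  name: operator-config            # name is an assumption
  namespace: ambient-code
data:
  CLAUDE_CODE_USE_VERTEX: "1"
  CLOUD_ML_REGION: "global"
  ANTHROPIC_VERTEX_PROJECT_ID: "my-gcp-project"
  GOOGLE_APPLICATION_CREDENTIALS: "/etc/vertex/ambient-code-key.json"   # mount path is an assumption
```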
Creating the Vertex AI Secret:
When using Vertex AI, you must create a secret containing your Google Cloud service account key:
```bash
# The key file MUST be named ambient-code-key.json
kubectl create secret generic ambient-vertex \
  --from-file=ambient-code-key.json=ambient-code-key.json \
  -n ambient-code
```

Important Requirements:
- ✅ Secret name must be `ambient-vertex`
- ✅ Key file must be named `ambient-code-key.json`
- ✅ Service account must have Vertex AI API access
- ✅ Project ID in config must match the service account's project
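A quick way to sanity-check the secret before launching sessions (standard kubectl, nothing platform-specific):

```bash
# Confirm the secret exists and carries the exact key name the runner expects
kubectl get secret ambient-vertex -n ambient-code \
  -o jsonpath='{.data.ambient-code-key\.json}' | base64 -d | head -c 40
```

If the jsonpath comes back empty, the key file was stored under the wrong name.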
Sessions have a configurable timeout (default: 300 seconds):
- Environment Variable: Set `TIMEOUT=1800` for 30-minute sessions (see the sketch below)
- CRD Default: Modify `components/manifests/crds/agenticsessions-crd.yaml`
- Interactive Mode: Set `interactive: true` for unlimited chat-based sessions
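Where `TIMEOUT` is consumed depends on your deployment, so the following is a hypothetical example rather than a documented command; adjust the target workload to wherever your runner jobs read their environment.

```bash
# Hypothetical: set a 30-minute session timeout on the operator deployment
# (deployment name follows the legacy vTeam naming; verify before running)
kubectl set env deployment/vteam-operator TIMEOUT=1800 -n ambient-code
```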
Configure AI API keys and integrations via the web interface:
- Settings β Runner Secrets: Add Anthropic API keys
- Project-scoped: Each project namespace has isolated secret management
- Security: All secrets stored as Kubernetes Secrets with proper RBAC
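Under the hood these are ordinary Kubernetes Secrets scoped to the project namespace. A hypothetical equivalent of what the UI manages (actual secret names and keys are platform-controlled, so prefer the Settings UI):

```bash
# Illustrative only: store an Anthropic API key in a project namespace
kubectl create secret generic runner-secrets \
  --from-literal=ANTHROPIC_API_KEY=sk-ant-... \
  -n my-project
```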
Pods Not Starting:

```bash
oc describe pod <pod-name> -n ambient-code
oc logs <pod-name> -n ambient-code
```

API Connection Issues:

```bash
oc get endpoints -n ambient-code
oc exec -it <pod-name> -- curl http://backend-service:8080/health
```

Job Failures:

```bash
oc get jobs -n ambient-code
oc describe job <job-name> -n ambient-code
oc logs <failed-pod-name> -n ambient-code
```

```bash
# Check all resources
oc get all -l app=ambient-code -n ambient-code

# View recent events
oc get events --sort-by='.lastTimestamp' -n ambient-code

# Test frontend access
curl -f http://localhost:3000 || echo "Frontend not accessible"

# Test backend API
kubectl port-forward svc/backend-service 8080:8080 -n ambient-code &
curl http://localhost:8080/health
```

- API Key Management: Store Anthropic API keys securely in Kubernetes secrets
- RBAC: Configure appropriate role-based access controls
- Network Policies: Implement network isolation between components
- Image Scanning: Scan container images for vulnerabilities before deployment
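As a concrete example of the network isolation item above, here is a minimal sketch of a NetworkPolicy restricting backend ingress to the frontend; the pod labels are assumptions, not the project's actual labels.

```yaml
# Illustrative only: allow only frontend pods to reach the backend on 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: ambient-code
spec:
  podSelector:
    matchLabels:
      app: vteam-backend          # label is an assumption
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: vteam-frontend # label is an assumption
      ports:
        - protocol: TCP
          port: 8080
```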
- Prometheus Metrics: Configure metrics collection for all components
- Log Aggregation: Set up centralized logging (ELK, Loki, etc.)
- Alerting: Configure alerts for pod failures, resource exhaustion
- Health Checks: Implement comprehensive health endpoints
- Horizontal Pod Autoscaling: Configure HPA based on CPU/memory usage
- Resource Limits: Set appropriate resource requests and limits
- Node Affinity: Configure pod placement for optimal resource usage
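For the HPA item above, a minimal sketch against the backend Deployment (the Deployment name follows the legacy vTeam naming; replica counts and thresholds are assumptions to tune):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-hpa
  namespace: ambient-code
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: vteam-backend
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```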
Single Command Setup:
```bash
# Start complete local development environment
make dev-start
```

What this provides:
- ✅ Full OpenShift cluster with CRC
- ✅ Real OpenShift authentication and RBAC
- ✅ Production-like environment
- ✅ Automatic image builds and deployments
- ✅ Working frontend-backend integration
Prerequisites:
```bash
# Install CRC (macOS)
brew install crc

# Get Red Hat pull secret (free):
# 1. Visit: https://console.redhat.com/openshift/create/local
# 2. Download pull secret to ~/.crc/pull-secret.json
# 3. Run: crc setup

# Then start development
make dev-start
```

Hot Reloading (optional):
```bash
# Terminal 1: Start with development images
DEV_MODE=true make dev-start

# Terminal 2: Enable file sync for hot-reloading
make dev-sync
```

Access URLs:
- Frontend: `https://vteam-frontend-vteam-dev.apps-crc.testing`
- Backend: `https://vteam-backend-vteam-dev.apps-crc.testing/health`
- Console: `https://console-openshift-console.apps-crc.testing`
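A quick smoke test of those routes from the terminal (CRC's router serves self-signed certificates, hence `-k`):

```bash
curl -k https://vteam-backend-vteam-dev.apps-crc.testing/health
```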
```bash
# Build all images locally
make build-all

# Build specific components
make build-frontend
make build-backend
make build-operator
make build-runner
```

```
vTeam/
├── components/                   # Ambient Code Platform Components
│   ├── frontend/                 # NextJS web interface
│   ├── backend/                  # Go API service
│   ├── operator/                 # Kubernetes operator
│   ├── runners/                  # AI runner services
│   │   └── claude-code-runner/   # Python Claude Code CLI service
│   └── manifests/                # Kubernetes deployment manifests
├── docs/                         # Documentation
│   ├── OPENSHIFT_DEPLOY.md       # Detailed deployment guide
│   └── OPENSHIFT_OAUTH.md        # OAuth configuration
├── tools/                        # Supporting development tools
│   ├── vteam_shared_configs/     # Team configuration management
│   └── mcp_client_integration/   # MCP client library
└── Makefile                      # Build and deployment automation
```
- RBAC: Comprehensive role-based access controls
- Network Policies: Component isolation and secure communication
- Secret Management: Kubernetes-native secret storage with encryption
- Image Scanning: Vulnerability scanning for all container images
- Health Checks: Comprehensive health endpoints for all services
- Metrics: Prometheus-compatible metrics collection
- Logging: Structured logging with OpenShift logging integration
- Alerting: Integration with OpenShift monitoring and alerting
- Horizontal Pod Autoscaling: Auto-scaling based on CPU/memory metrics
- Resource Management: Proper requests/limits for optimal resource usage
- Job Queuing: Intelligent job scheduling and resource allocation
- Multi-tenancy: Project-based isolation with shared infrastructure
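For the resource management item, a sketch of the requests/limits shape you would set on a runner container; the values are assumptions to tune against your workloads:

```yaml
resources:
  requests:
    cpu: "500m"
    memory: "1Gi"
  limits:
    cpu: "2"
    memory: "4Gi"
```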
We welcome contributions! Please follow these guidelines to ensure code quality and consistency.
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes following the existing patterns
- Run code quality checks (see below)
- Add tests if applicable
- Commit with conventional commit messages
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Before committing Go code, run these checks locally:
```bash
# Backend
cd components/backend
gofmt -l .          # Check formatting
go vet ./...        # Run go vet
golangci-lint run   # Run full linting suite

# Operator
cd components/operator
gofmt -l .          # Check formatting
go vet ./...        # Run go vet
golangci-lint run   # Run full linting suite
```

Install golangci-lint:

```bash
go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest
```

Auto-format your code:

```bash
# Format all Go files
gofmt -w components/backend components/operator
```

CI/CD: All pull requests automatically run these checks via GitHub Actions. Your PR must pass all linting checks before merging.
```bash
cd components/frontend
npm run lint        # ESLint checks
npm run type-check  # TypeScript checks (if available)
npm run format      # Prettier formatting
```

```bash
# Backend tests
cd components/backend
make test               # Run all tests
make test-unit          # Unit tests only
make test-integration   # Integration tests

# Operator tests
cd components/operator
go test ./... -v        # Run all tests

# Frontend tests
cd components/frontend
npm test                # Run test suite
```

Run automated end-to-end tests in a local kind cluster:
```bash
make e2e-test   # Full test suite (setup, deploy, test, cleanup)
```

Or run steps individually:
```bash
cd e2e
./scripts/setup-kind.sh   # Create kind cluster
./scripts/deploy.sh       # Deploy vTeam
./scripts/run-tests.sh    # Run Cypress tests
./scripts/cleanup.sh      # Clean up
```

The e2e tests deploy the complete vTeam stack to a kind (Kubernetes in Docker) cluster and verify core functionality including project creation and UI navigation. Tests run automatically in GitHub Actions on every PR.
See e2e/README.md for detailed documentation, troubleshooting, and development guide.
- To keep the current RFE (Request for Enhancement) pilot focused and efficient, we are temporarily streamlining the active agent pool.
- Active Agents (Focused Scope): The 5 agents required for this RFE workflow are currently located in the agents folder.
- Agent Bullpen (Holding Pattern): All remaining agent definitions have been relocated to the "agent bullpen" folder. This move does not mean any roles are deprecated.
- Future Planning: Agents in the "agent bullpen" will be reintegrated and actively used as we expand to subsequent processes and workflows across the organization.
- Update relevant documentation when changing functionality
- Follow existing documentation style (Markdown)
- Add code comments for complex logic
- Update CLAUDE.md if adding new patterns or standards
- Deployment Guide: docs/OPENSHIFT_DEPLOY.md
- OAuth Setup: docs/OPENSHIFT_OAUTH.md
- Architecture Details: diagrams/
- API Documentation: Available in web interface after deployment
While the project is now branded as Ambient Code Platform, the name "vTeam" still appears in various technical components for backward compatibility and to avoid breaking changes. You will encounter "vTeam" or "vteam" in:
- GitHub Repository: `github.com/ambient-code/vTeam` (repository name unchanged)
- Container Images: `vteam_frontend`, `vteam_backend`, `vteam_operator`, `vteam_claude_runner`
- Kubernetes API Group: `vteam.ambient-code` (used in Custom Resource Definitions)
- Development Namespace: `vteam-dev` (local development environment)
- Local Development Routes: `https://vteam-frontend-vteam-dev.apps-crc.testing`, `https://vteam-backend-vteam-dev.apps-crc.testing`
- File paths: Repository directory structure (`/path/to/vTeam/...`)
- Go package references: Internal Kubernetes resource types
- RBAC resources: ClusterRole and RoleBinding names
- Makefile targets: Development commands reference the `vteam-dev` namespace
- Kubernetes resources: Deployment names (`vteam-frontend`, `vteam-backend`, `vteam-operator`)
- Environment variables: `VTEAM_VERSION` in frontend deployment
These technical references remain unchanged to maintain compatibility with existing deployments and to avoid requiring migration for current users. Future major versions may fully transition these artifacts to use "Ambient Code Platform" or "ambient-code" naming.
This project is licensed under the MIT License - see the LICENSE file for details.