This repository contains ArgoCD applications for deploying Kong data planes for multiple customers using GitOps with mTLS authentication to Konnect control planes.
```
┌─────────────────┐      ┌──────────────────┐      ┌─────────────────┐
│     Konnect     │◄─────│  Kong DataPlane  │      │     ArgoCD      │
│  Control Plane  │      │    (Customer)    │◄─────│     GitOps      │
│                 │      │                  │      │                 │
│ • NAB CP        │      │ • kong-nab       │      │ • App-of-Apps   │
│ • CBA CP        │      │ • kong-cba       │      │ • Auto Sync     │
│ • ANZ CP        │      │ • kong-anz       │      │ • Self Heal     │
└─────────────────┘      └──────────────────┘      └─────────────────┘
        ▲                         │                         │
        │                         ▼                         ▼
        │                ┌──────────────────┐      ┌─────────────────┐
        │                │    Kubernetes    │      │      Redis      │
        │                │   TLS Secrets    │      │  (Dependency)   │
        │                │                  │      │                 │
        └────────────────│ • Client Certs   │      │ • Per Customer  │
             mTLS        │ • CA Certs       │      │ • Helm Charts   │
                         └──────────────────┘      └─────────────────┘
```
```
kong-dataplane-gitops/
├── argocd/
│   ├── projects/
│   │   └── kong-dataplane-project.yaml   # ArgoCD project
│   └── applications/
│       ├── app-of-apps.yaml              # Parent application
│       └── customers/                    # Customer applications
│           ├── base-values.yaml          # Common Kong config
│           ├── cba/
│           │   ├── kong.yaml             # CBA Kong deployment
│           │   ├── redis.yaml            # CBA Redis deployment
│           │   └── values.yaml           # CBA-specific config
│           ├── nab/
│           │   ├── kong.yaml             # NAB Kong deployment
│           │   ├── redis.yaml            # NAB Redis deployment
│           │   └── values.yaml           # NAB-specific config
│           └── anz/
│               ├── kong.yaml             # ANZ Kong deployment
│               ├── redis.yaml            # ANZ Redis deployment
│               └── values.yaml           # ANZ-specific config
└── README.md
```
Key features:

- Multi-tenant: Separate namespaces per customer (kong-nab, kong-cba, kong-anz)
- mTLS Authentication: Client certificates for secure Konnect communication
- Redis Dependencies: Dedicated Redis instances per customer
- Observability: Integrated with Prometheus, OpenTelemetry, and HTTP Log plugins

ArgoCD (GitOps):

- App-of-Apps Pattern: Parent application manages customer applications
- Automated Sync: Continuous deployment from Git repository
- Self-Healing: Automatic drift correction
- Multi-Environment: Support for different customer configurations

Redis:

- Per-Customer: Dedicated Redis instance for each customer
- ArgoCD Managed: Deployed as separate ArgoCD applications
- Simple Deployment: Basic Redis without clustering for development
- Namespace Isolation: Redis deployed in the same namespace as Kong

TLS certificates:

- Client Certificates: Generated using OpenSSL with a shared CA
- Kubernetes Secrets: TLS certificates stored as K8s secrets
- Per-Customer: Isolated certificate management per customer
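The certificate workflow above can be sketched with OpenSSL. The file names, subjects, and validity periods below are illustrative assumptions, not this repository's actual values:

```shell
# Shared CA (generated once; names are illustrative)
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -days 365 -subj "/CN=kong-clients-ca" -out ca.crt

# Per-customer client certificate (example: nab)
openssl genrsa -out nab-client.key 2048
openssl req -new -key nab-client.key -subj "/CN=kong-nab" -out nab-client.csr
openssl x509 -req -in nab-client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -out nab-client.crt

# Then store the pair as a Kubernetes TLS secret and/or in Vault, e.g.:
#   kubectl create secret tls kong-nab-client-tls -n kong-nab \
#     --cert=nab-client.crt --key=nab-client.key
#   vault kv put secret/tls/nab cert=@nab-client.crt key=@nab-client.key
```

The same shared `ca.crt` is what Konnect would be configured to trust for all customer data planes.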
Prerequisites:

- Kubernetes cluster with ArgoCD installed
- Konnect control planes created for each customer
- Client certificates generated and stored
Deploy and configure Vault for certificate storage:

```shell
# Deploy Vault
kubectl apply -f ../vault/argocd/projects/vault-project.yaml
kubectl apply -f ../vault/argocd/applications/vault.yaml

# Initialize and unseal Vault (see vault/README.md)

# Store certificates in Vault (certificates already stored at):
# - secret/tls/cba (cert + key)
# - secret/tls/nab (cert + key)
# - secret/tls/anz (cert + key)
# - secret/tls/ca  (cert only)
```

Kong data planes use Vault references for certificates:
```yaml
# In customer values files
env:
  cluster_cert: "{vault://hcv/secret/tls/nab/cert}"
  cluster_cert_key: "{vault://hcv/secret/tls/nab/key}"
```

```yaml
# Vault configuration in base-values.yaml
env:
  vaults: hcv
  vault_hcv_protocol: http
  vault_hcv_host: vault.vault.svc.cluster.local
  vault_hcv_port: 8200
  vault_hcv_token: "hvs.cZUnRwmm0JZrb64R4KISsXKW"
```

Redis is deployed alongside Kong for each customer:
```shell
# Redis applications are automatically created by app-of-apps
# Each customer gets their own Redis instance:
# - redis-nab (in kong-nab namespace)
# - redis-cba (in kong-cba namespace)
# - redis-anz (in kong-anz namespace)

# Redis applications are defined in each customer directory:
# - argocd/applications/customers/cba/redis.yaml
# - argocd/applications/customers/nab/redis.yaml
# - argocd/applications/customers/anz/redis.yaml
```
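For orientation, a customer Redis application follows the standard ArgoCD `Application` shape. The chart source, revision, and Helm values below are assumptions for illustration, not necessarily what this repository's `redis.yaml` files contain:

```yaml
# Hypothetical sketch of argocd/applications/customers/nab/redis.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: redis-nab
  namespace: argocd
spec:
  project: kong-dataplane
  source:
    repoURL: https://charts.bitnami.com/bitnami   # assumed chart source
    chart: redis
    targetRevision: 19.0.0                        # assumed; pin your real version
    helm:
      values: |
        architecture: standalone
        auth:
          enabled: false
  destination:
    server: https://kubernetes.default.svc
    namespace: kong-nab
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```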
```shell
# Verify Redis deployment
kubectl get pods -n kong-nab | grep redis
kubectl get pods -n kong-cba | grep redis
kubectl get pods -n kong-anz | grep redis
```

Deploy the Kong data planes via ArgoCD:
```shell
# Apply ArgoCD project
kubectl apply -f argocd/projects/kong-dataplane-project.yaml

# Deploy app-of-apps (creates all customer applications, including Redis)
kubectl apply -f argocd/applications/app-of-apps.yaml

# Verify deployment
argocd app list | grep kong-dataplane
argocd app list | grep redis
```

The shared base configuration (`customers/base-values.yaml`) provides:

- Common Settings: Shared Kong configuration across all customers
- Plugins: Prometheus, OpenTelemetry, HTTP Log plugins enabled
- Resources: CPU/memory limits and requests
- Security: Pod security context and service account
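A minimal sketch of what those shared settings might look like, based on common Kong Helm chart values. The image tag, resource figures, and security settings are assumptions, not copied from the repository:

```yaml
# Hypothetical excerpt of customers/base-values.yaml
image:
  repository: kong/kong-gateway
  tag: "3.4"                  # assumed version
resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: "1"
    memory: 1Gi
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
deployment:
  serviceAccount:
    create: true
status:
  enabled: true               # exposes /metrics on the status port
```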
Each customer has a dedicated directory with:

- Kong Application: `customers/{customer}/kong.yaml`
- Redis Application: `customers/{customer}/redis.yaml`
- Values File: `customers/{customer}/values.yaml`
- Common Base: `customers/base-values.yaml` (shared across all customers)
Configuration includes:
- Control Plane Endpoint: Unique Konnect CP URL
- Vault Integration: References to customer-specific TLS certificates in Vault
- Namespace: Isolated deployment namespace
- Redis Configuration: Connection to customer-specific Redis instance
Each Kong data plane is configured to use its dedicated Redis:

```yaml
# In customer values files
env:
  database: "off"  # DB-less mode: Kong config comes from Konnect; Redis backs plugins
# Redis connection automatically configured via service discovery:
# redis-nab-master.kong-nab.svc.cluster.local:6379
# redis-cba-master.kong-cba.svc.cluster.local:6379
# redis-anz-master.kong-anz.svc.cluster.local:6379
```

A customer's values file also sets the Konnect endpoints and Vault certificate references:

```yaml
env:
  cluster_control_plane: "https://your-cp-id.cp0.konghq.com"
  cluster_server_name: "your-cp-id.cp0.konghq.com"
  cluster_telemetry_endpoint: "https://your-cp-id.tp0.konghq.com"
  cluster_cert: "{vault://hcv/secret/tls/nab/cert}"
  cluster_cert_key: "{vault://hcv/secret/tls/nab/key}"
# Redis connection automatically configured via service discovery:
# redis-nab-master.kong-nab.svc.cluster.local:6379
```

Kong data planes are configured with observability plugins:
Metrics:
- Endpoint: `/metrics` on status port (8100)
- Scraped by: OpenTelemetry Collector in kong-observability namespace

Traces:
- Sent to OTel Collector via OTLP
- Endpoint: `http://otel-collector-opentelemetry-collector.kong-observability.svc.cluster.local:4318/v1/traces`

Logs:
- Sent to Fluent Bit for processing
- Endpoint: `http://fluent-bit.kong-observability.svc.cluster.local:2020`
- Correlation: Includes trace/span IDs for correlation
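Since the data planes are Konnect-managed, these plugins would typically be configured centrally in the control plane. Purely as an illustration, a declarative (decK-style) snippet wiring the plugins to the endpoints above could look like this; plugin config field names vary across Kong versions, so treat this as a sketch:

```yaml
plugins:
  - name: prometheus
    config:
      status_code_metrics: true
  - name: opentelemetry
    config:
      # called `endpoint` in older Kong versions; newer versions split
      # this into traces_endpoint / logs_endpoint
      endpoint: http://otel-collector-opentelemetry-collector.kong-observability.svc.cluster.local:4318/v1/traces
  - name: http-log
    config:
      http_endpoint: http://fluent-bit.kong-observability.svc.cluster.local:2020
      method: POST
```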
Each customer has a LoadBalancer service:

```shell
# Get service endpoints
kubectl get svc -n kong-nab kong-nab-dataplane-kong-proxy
kubectl get svc -n kong-cba kong-cba-dataplane-kong-proxy
kubectl get svc -n kong-anz kong-anz-dataplane-kong-proxy

# Port forward for testing
kubectl port-forward -n kong-nab svc/kong-nab-dataplane-kong-proxy 8000:80
kubectl port-forward -n kong-nab svc/kong-nab-dataplane-kong-admin 8001:8001
```

```shell
# List all applications
argocd app list

# Get application details
argocd app get kong-dataplane-nab

# Check sync status
kubectl describe application kong-dataplane-nab -n argocd
```

```shell
# Check Kong logs
kubectl logs -n kong-nab deployment/kong-nab-dataplane-kong

# Check control plane connectivity
kubectl exec -n kong-nab deployment/kong-nab-dataplane-kong -- kong health

# Test proxy functionality
curl -i http://localhost:8000/
```

```shell
# Verify TLS secret exists
kubectl get secret kong-nab-client-tls -n kong-nab

# Check certificate details
kubectl get secret kong-nab-client-tls -n kong-nab -o yaml

# Verify certificate validity
openssl x509 -in nab-client.crt -text -noout
```

```shell
# Check Redis pods
kubectl get pods -n kong-nab | grep redis
kubectl get pods -n kong-cba | grep redis
kubectl get pods -n kong-anz | grep redis

# Check Redis services
kubectl get svc -n kong-nab | grep redis
kubectl get svc -n kong-cba | grep redis
kubectl get svc -n kong-anz | grep redis

# Test Redis connectivity from Kong
kubectl exec -n kong-nab deployment/kong-nab-dataplane-kong -- redis-cli -h redis-nab-master ping
kubectl exec -n kong-cba deployment/kong-cba-dataplane-kong -- redis-cli -h redis-cba-master ping
kubectl exec -n kong-anz deployment/kong-anz-dataplane-kong -- redis-cli -h redis-anz-master ping

# Check Redis logs
kubectl logs -n kong-nab deployment/redis-nab-master
```

mTLS authentication:
- Client Certificates: Unique per customer for Konnect authentication
- CA Validation: Konnect validates client certificates against the configured CA
- Certificate Rotation: Certificates can be rotated via Kubernetes secrets

Isolation:
- Namespaces: Each customer isolated in a separate namespace
- Network Policies: Can be applied for additional network segmentation
- RBAC: Service accounts with minimal required permissions

Secret management:
- Kubernetes Secrets: TLS certificates stored as native K8s secrets
- GitOps Safe: No secrets stored in the Git repository
- Rotation: Certificates can be updated without an application restart
```shell
# Scale Kong deployment
kubectl scale deployment kong-nab-dataplane-kong -n kong-nab --replicas=3
```

- Resource Limits: Configured per customer in values files
- HPA: Horizontal Pod Autoscaler can be enabled
- Node Affinity: Can be configured for customer isolation
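If HPA is enabled, a manifest along these lines would target the customer deployment. The deployment name, replica bounds, and CPU threshold are illustrative assumptions:

```yaml
# Hypothetical HPA for the NAB data plane
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kong-nab-dataplane
  namespace: kong-nab
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kong-nab-dataplane-kong   # assumed deployment name
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```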
- Simple Deployment: Single Redis instance per customer (no clustering)
- Persistence: Data persisted to avoid data loss on pod restart
- Resource Limits: Configured for development workloads
- Service Discovery: Kong connects via Kubernetes service DNS
```shell
# Scale Redis (if needed)
kubectl scale deployment redis-nab-master -n kong-nab --replicas=1

# Redis typically runs as a single instance for development
# For production, consider Redis Sentinel or Cluster mode
```

- Health Checks: Built-in Redis health checks
- Metrics: Can be scraped by Prometheus if the Redis exporter is enabled
- Logs: Available via `kubectl logs`
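Assuming the Bitnami Redis chart is in use, the persistence and exporter behavior described above map to chart-level values; sizes and toggles below are illustrative:

```yaml
# Hypothetical Helm values for a per-customer Redis release
architecture: standalone        # no clustering, as described above
master:
  persistence:
    enabled: true               # survive pod restarts
    size: 1Gi
metrics:
  enabled: true                 # redis-exporter sidecar for Prometheus scraping
  serviceMonitor:
    enabled: false              # enable if the Prometheus Operator is installed
```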
All configuration follows GitOps principles:
- Version Control: All changes tracked in Git
- Pull Request Workflow: Changes reviewed before deployment
- Automated Deployment: ArgoCD handles deployment automation
- Rollback: Easy rollback via Git history or ArgoCD UI
- Dependencies: Redis automatically deployed with Kong data planes