This chart is a clean v1 Kubernetes chart for the current PostHog service topology.
It intentionally does not preserve the old PostHog/charts-clickhouse values API. That repository is useful historical context, but its dependency stack and workload split are outdated. PostHog also published the background for ending official chart support in Sunsetting Helm support for self-hosted PostHog.
The PostHog-owned runtime images follow the upstream container defaults and use the mutable `master` tag by default. `global.imagePullPolicy` defaults to `Always` so Kubernetes refreshes those images on rollout. Override `images.*.tag` in production when you need a controlled rollout.
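For example, a production override pinning a tested tag might look like the following sketch (the `web` component key and the tag value are illustrative; use the component names and tags that actually exist in your `values.yaml`):

```yaml
# Sketch only: pin images for controlled rollouts.
global:
  imagePullPolicy: IfNotPresent   # stop re-pulling mutable tags on every rollout
images:
  web:                            # illustrative component key; see values.yaml
    tag: "v1.2.3"                 # hypothetical pinned tag
```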
- Kubernetes >=1.28
- Helm with OCI registry support
- A default `StorageClass` for the bundled evaluation profile
- A working Ingress controller when `ingress.enabled=true`
- External DNS pointing `global.domain`/`ingress.host` at the cluster when using a public URL
`profile.mode=bundled` deploys PostHog plus bundled backing services through maintained subcharts where practical. Use it for non-production evaluation.

`profile.mode=external` deploys PostHog workloads and uses managed dependencies where configured. Kafka can still use the bundled Redpanda subchart by leaving `external.kafka.hosts` empty and enabling `subcharts.redpanda.enabled`.
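As a sketch, an external-mode values fragment that keeps the bundled Redpanda broker could look like this (key names follow the options named above; check `values.yaml` for the exact empty form of `external.kafka.hosts`):

```yaml
profile:
  mode: external
external:
  kafka:
    hosts: ""        # left empty so the bundled broker is used
subcharts:
  redpanda:
    enabled: true
```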
Install the bundled profile for a non-production evaluation:
```shell
helm upgrade --install posthog . \
  --namespace posthog \
  --create-namespace \
  --set global.domain=posthog.example.com \
  --set global.siteUrl=https://posthog.example.com
```

Install from the GitHub Container Registry after a chart version has been published:
```shell
helm upgrade --install posthog oci://ghcr.io/mayflower/posthog-helm/posthog \
  --version 0.2.29 \
  --namespace posthog \
  --create-namespace \
  --set global.domain=posthog.example.com \
  --set global.siteUrl=https://posthog.example.com
```

For local evaluation without DNS, disable ingress and port-forward the web service:
```shell
helm upgrade --install posthog . \
  --namespace posthog \
  --create-namespace \
  --set ingress.enabled=false \
  --set global.domain=localhost \
  --set global.siteUrl=http://localhost:8000

kubectl -n posthog port-forward svc/posthog-posthog-web 8000:8000
```

Production installs should use `profile.mode=external`, explicitly managed secrets, and a reviewed values file. Start from `examples/external-values.yaml`, replace every `*.example.com` endpoint, and create the referenced secrets before installing.
`examples/external-values.yaml` assumes managed Temporal and managed session-recording storage, so it disables the bundled `temporal` and `seaweedfs` components. If you want external Postgres/Redis/ClickHouse but bundled Temporal, keep `components.temporal.enabled=true` and set `external.temporal.host` to the templated chart service host as shown in `values.yaml`.
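A minimal values fragment for that mixed setup might look like this (the Temporal service host shown is an assumption; copy the exact templated form from `values.yaml`):

```yaml
components:
  temporal:
    enabled: true
external:
  temporal:
    host: posthog-temporal-frontend   # assumed chart service name; see values.yaml
```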
Generate runtime secrets:
```shell
kubectl create namespace posthog
SECRET_KEY="$(openssl rand -hex 50)"
ENCRYPTION_SALT_KEYS="$(openssl rand -hex 16)"
CAPTURE_LOGS_JWT_SECRET="$(openssl rand -hex 32)"
LIVESTREAM_JWT_SECRET="$(openssl rand -hex 32)"
INTERNAL_API_SECRET="$(openssl rand -hex 32)"

kubectl -n posthog create secret generic posthog-runtime-secrets \
  --from-literal=SECRET_KEY="${SECRET_KEY}" \
  --from-literal=ENCRYPTION_SALT_KEYS="${ENCRYPTION_SALT_KEYS}" \
  --from-literal=CAPTURE_LOGS_JWT_SECRET="${CAPTURE_LOGS_JWT_SECRET}" \
  --from-literal=LIVESTREAM_JWT_SECRET="${LIVESTREAM_JWT_SECRET}" \
  --from-literal=INTERNAL_API_SECRET="${INTERNAL_API_SECRET}"
```

`ENCRYPTION_SALT_KEYS` must contain one or more comma-separated 32-character URL-safe keys. `openssl rand -hex 16` produces a valid single key. Keep old keys in the comma-separated list when rotating so existing encrypted integration data remains decryptable.
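A rotation sketch, building the new value while keeping the previous key in the list (`OLD_KEY` is a placeholder for your currently active key, and the assumption that the new key goes first should be verified against PostHog's rotation guidance):

```shell
# Sketch: build a rotated ENCRYPTION_SALT_KEYS value.
OLD_KEY='<current-32-char-key>'      # placeholder for the key currently in use
NEW_KEY="$(openssl rand -hex 16)"    # 32 hex characters, a valid single key
echo "${#NEW_KEY}"                   # prints 32
# Keep the old key in the list so existing encrypted data stays decryptable.
ENCRYPTION_SALT_KEYS="${NEW_KEY},${OLD_KEY}"
```

Apply the new value by recreating `posthog-runtime-secrets` with the updated `ENCRYPTION_SALT_KEYS` literal.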
Create provider credential secrets matching your production values file:
```shell
kubectl -n posthog create secret generic posthog-postgres \
  --from-literal=password='<postgres-password>'
kubectl -n posthog create secret generic posthog-redis \
  --from-literal=password='<redis-password>'
kubectl -n posthog create secret generic posthog-clickhouse \
  --from-literal=password='<clickhouse-password>'
kubectl -n posthog create secret generic posthog-object-storage \
  --from-literal=access-key='<object-storage-access-key>' \
  --from-literal=secret-key='<object-storage-secret-key>'
kubectl -n posthog create secret generic posthog-session-recording \
  --from-literal=access-key='<session-recording-access-key>' \
  --from-literal=secret-key='<session-recording-secret-key>'
```

Install from the published OCI chart:
```shell
helm upgrade --install posthog oci://ghcr.io/mayflower/posthog-helm/posthog \
  --version 0.2.29 \
  --namespace posthog \
  -f ./values.production.yaml
```

Install from a local checkout:
```shell
helm upgrade --install posthog . \
  --namespace posthog \
  -f ./examples/external-values.yaml
```

`examples/external-values.yaml` is a renderable template, not a production-ready endpoint list. Review these dependencies before installation:
| Dependency | Values | Requirements |
|---|---|---|
| PostgreSQL | `external.postgres.*` | Reachable from the namespace. The configured user must own or be able to migrate the configured database. The chart currently uses the same Postgres URL for `DATABASE_URL`, `PERSONS_DATABASE_URL`, and `BEHAVIORAL_COHORTS_DATABASE_URL`. |
| Redis | `external.redis.*` | Reachable Redis endpoint. Use `external.redis.passwordSecret` for password auth, or remove it if your endpoint has no password. Set `external.redis.tls=true` only for TLS-enabled Redis endpoints. |
| Kafka or Redpanda | `external.kafka.hosts` or bundled Redpanda | Plain Kafka bootstrap string by default. If you need SASL/TLS, add the required PostHog env vars under the affected `components.*.extraEnv` and manage topics externally unless `rpk` can connect with the same settings. |
| ClickHouse | `external.clickhouse.*` | The configured user needs enough privileges for PostHog migrations: database/table creation, materialized views, dictionaries, Kafka-engine tables, named collections, and `SYSTEM FLUSH LOGS`. Set `cluster`/`migrationsCluster` when using replicated clusters. |
| Object storage | `external.objectStorage.*` | S3-compatible endpoint and bucket for general object storage. Create the bucket before installing when the provider does not auto-create buckets. |
| Session recording storage | `external.sessionRecording.*` | S3-compatible endpoint and credentials for replay payloads. This can share the same provider/secret as object storage, but keep a separate bucket or prefix operationally. |
| Temporal | `external.temporal.*`, `components.temporal.enabled` | Existing Temporal frontend endpoint, or the bundled Temporal component with `external.temporal.host` pointing at the chart service. Disable `components.temporal` only when you provide managed Temporal. |
| OpenSearch | `external.opensearch.host` | Optional but recommended for search-backed features. Include the URL scheme when TLS is used, for example `https://opensearch.example.com:9200`. |
For production, create a runtime secret and set `secrets.existingSecret`. The secret must contain:

- `SECRET_KEY`
- `ENCRYPTION_SALT_KEYS`
- `CAPTURE_LOGS_JWT_SECRET`
- `LIVESTREAM_JWT_SECRET`
- `INTERNAL_API_SECRET`

It must also contain these keys when you do not configure the provider-specific external secret refs:

- `CLICKHOUSE_PASSWORD`
- `OBJECT_STORAGE_ACCESS_KEY_ID`
- `OBJECT_STORAGE_SECRET_ACCESS_KEY`
- `SESSION_RECORDING_V2_S3_ACCESS_KEY_ID`
- `SESSION_RECORDING_V2_S3_SECRET_ACCESS_KEY`
The bundled defaults are meant to render and run a self-contained non-production stack. Replace them before real use.
External mode can use separate provider-managed secrets for service credentials:
```yaml
external:
  postgres:
    host: postgres.example.com
    port: 5432
    database: posthog
    user: posthog
    sslMode: require
    passwordSecret:
      name: posthog-postgres
      key: password
  redis:
    host: redis.example.com
    port: 6379
    database: 0
    tls: false
    passwordSecret:
      name: posthog-redis
      key: password
  clickhouse:
    passwordSecret:
      name: posthog-clickhouse
      key: password
  objectStorage:
    accessKeySecret:
      name: posthog-object-storage
      key: access-key
    secretKeySecret:
      name: posthog-object-storage
      key: secret-key
  sessionRecording:
    accessKeySecret:
      name: posthog-session-recording
      key: access-key
    secretKeySecret:
      name: posthog-session-recording
      key: secret-key
```

When `external.postgres.passwordSecret.name` is set, the chart builds `DATABASE_URL` from host/user/database, appends `sslmode`/params, and injects `POSTGRES_PASSWORD` from that secret. When `external.redis.passwordSecret.name` is set, the chart injects `REDIS_PASSWORD` and builds Redis URLs with Kubernetes env expansion. Logs and traces ingestion receive the Redis URL because PostHog's current Node.js Redis pool reads credentials from that URL for those components. This avoids putting service passwords in values files.
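As an illustration of that mechanism, the rendered container env plausibly looks like the following sketch (variable names follow the description above; the exact URL shape comes from the chart templates, so treat this as illustrative, not literal output):

```yaml
# Sketch of chart-rendered env using Kubernetes dependent env expansion.
# POSTGRES_PASSWORD must be defined before DATABASE_URL for $(...) to resolve.
- name: POSTGRES_PASSWORD
  valueFrom:
    secretKeyRef:
      name: posthog-postgres
      key: password
- name: DATABASE_URL
  value: postgres://posthog:$(POSTGRES_PASSWORD)@postgres.example.com:5432/posthog?sslmode=require
```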
The `kafkaInit` job creates the topics in `kafka.topics` before migrations and workloads start. It uses Redpanda's `rpk` CLI against `KAFKA_HOSTS`.

Use the built-in topic job only when `rpk topic list --brokers "$KAFKA_HOSTS"` and `rpk topic create ...` work from inside the cluster without extra SASL/TLS flags. For managed Kafka, pre-create topics yourself and disable the job:
```yaml
components:
  kafkaInit:
    enabled: false
```

Keep `kafka.defaultPartitions` and `kafka.defaultReplicationFactor` aligned with your broker policy when the chart creates topics. Override `kafka.topics` when you use custom PostHog topic names or broker-side topic management.
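For example, a broker-policy override might look like this (only the two keys named above; confirm the `kafka.topics` schema in `values.yaml` before overriding the topic list itself):

```yaml
kafka:
  defaultPartitions: 6          # match your broker's partition policy
  defaultReplicationFactor: 3   # match your broker's replication policy
```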
`global.siteUrl` must be the externally reachable PostHog URL. Event capture, feature flags, session recording, and redirects depend on it. `ingress.host` defaults to `global.domain` when omitted.
Example with cert-manager and nginx:
```yaml
global:
  domain: posthog.example.com
  siteUrl: https://posthog.example.com
ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
  tls:
    - secretName: posthog-tls
      hosts:
        - posthog.example.com
```

Example with Traefik:
```yaml
ingress:
  enabled: true
  className: traefik
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
  tls:
    - secretName: posthog-tls
      hosts:
        - posthog.example.com
```

Example with an existing TLS secret:
```yaml
ingress:
  enabled: true
  className: nginx
  tls:
    - secretName: existing-posthog-tls
      hosts:
        - posthog.example.com
```

Dependencies are vendored as unpacked chart directories because Helm 4 linting expects directories, while `helm dependency update` writes archives.
```shell
helm lint --strict .
helm template posthog . > /tmp/posthog.yaml
helm template posthog . -f ./examples/external-values.yaml > /tmp/posthog-external.yaml
helm template posthog oci://ghcr.io/mayflower/posthog-helm/posthog \
  --version 0.2.29 \
  -f ./values.production.yaml > /tmp/posthog-production.yaml
```

Refresh dependencies after changing `Chart.yaml` dependency versions:
```shell
helm dependency update .
for archive in ./charts/*.tgz; do tar -xzf "$archive" -C ./charts; done
rm ./charts/*.tgz
```

Check that the install jobs and core pods completed:
```shell
kubectl -n posthog get jobs
kubectl -n posthog get pods
kubectl -n posthog logs job/posthog-posthog-migrate
kubectl -n posthog logs job/posthog-posthog-kafka-init
```

Check the externally routed app:
```shell
curl -I https://posthog.example.com/
curl -fsS 'https://posthog.example.com/preflight?mode=live'
curl -fsS 'https://posthog.example.com/flags/?v=2'
curl -fsS -X POST https://posthog.example.com/capture/ \
  -H 'Content-Type: application/json' \
  --data '{"api_key":"phc_replace_me","event":"helm_test","properties":{}}'
```

The `/capture/` request is only a transport check until you replace `api_key` with a real project key from the PostHog UI.
For local port-forward checks:
```shell
kubectl -n posthog port-forward svc/posthog-posthog-web 8000:8000
curl -fsS 'http://localhost:8000/preflight?mode=live'
```

The default profile stays a generic PostHog install and keeps newer or heavier feature surfaces disabled until you explicitly opt in. These components render from the same generic workload template and inherit the chart's Postgres, Redis, Kafka, ClickHouse, Temporal, object-storage, scheduling, and monitoring settings.
Enable the components you need under `components`:
```yaml
components:
  embeddingWorker:
    enabled: true
    extraEnv:
      - name: OPENAI_API_KEY
        valueFrom:
          secretKeyRef:
            name: posthog-llm-provider
            key: openai-api-key
  batchImportWorker:
    enabled: true
  webhookS3Sink:
    enabled: true
  ingestionMetrics:
    enabled: true
  recordingRasterizer:
    enabled: true
```

Available optional components:
- `embeddingWorker` consumes `document_embeddings_input`, writes `clickhouse_document_embeddings`, and emits `document_embedding_results`. It needs an embedding provider key through `extraEnv`.
- `batchImportWorker` processes batch import jobs and emits into the normal capture ingestion topics.
- `webhookS3Sink` consumes `data_warehouse_source_webhooks` and writes webhook payload batches to the configured object storage.
- `ingestionMetrics` runs the Node.js metrics ingestion consumer for the `metrics_ingestion` topic family.
- `recordingRasterizer` runs the dedicated Chromium/ffmpeg recording rasterizer image for video exports and uses the chart's object-storage credentials.
The chart does not include `llmGateway`. Several prominent PostHog AI assistant, Slack, research-agent, and session-summary flows in the current PostHog source cross into `ee.hogai`/`ee.models`; keep those out of this generic FOSS-oriented chart until a self-hosted FOSS runtime path is explicit upstream.

PostHog's `services/mcp` code is not included here as a Kubernetes service. Upstream currently packages that server as a Cloudflare Worker with Durable Objects, while its Dockerfile is only an `mcp-remote` client wrapper around https://mcp.posthog.com/mcp. A self-hosted MCP service would need a separate upstream-supported server image or a deliberate port of the Worker runtime to a normal HTTP service.
The bundled ClickHouse profile grants the PostHog app user full ClickHouse privileges because PostHog migrations create databases, replicated tables, Kafka-engine tables, dictionaries, materialized views, and named-collection-based Kafka engines. The migration job runs `SYSTEM FLUSH LOGS` before PostHog migrations so ClickHouse system log tables such as `system.crash_log` exist before PostHog creates materialized views over them. When you use an external ClickHouse service, provision the configured `external.clickhouse.user` with equivalent migration privileges before installing the chart.
Ingress and the optional Caddy proxy are generated from `routing.routes`. Add or change public paths there so both surfaces stay aligned.
All workload components support the shared scheduling and availability controls:

- Global defaults: `global.nodeSelector`, `global.affinity`, `global.tolerations`, `global.topologySpreadConstraints`, `global.priorityClassName`, and `global.imagePullSecrets`.
- Per-component overrides: the same scheduling fields under `components.<name>`.
- Per-component `autoscaling` creates an `autoscaling/v2` HPA.
- Per-component `pdb` creates a `policy/v1` PodDisruptionBudget.
- Stateful component `persistence` supports `size`, `storageClass`, and `accessModes`.
- `monitoring.serviceMonitor.enabled` creates Prometheus Operator `ServiceMonitor` resources for component ports named in `monitoring.serviceMonitor.portNames`.
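A per-component sketch combining these controls might look like the following (the `web` component key and the exact field names under `autoscaling`/`pdb` are assumptions; confirm them in `values.yaml`):

```yaml
components:
  web:                     # illustrative component name
    nodeSelector:
      workload: posthog
    autoscaling:
      enabled: true        # assumed flag; renders an autoscaling/v2 HPA
      minReplicas: 2
      maxReplicas: 6
    pdb:
      enabled: true        # assumed flag; renders a policy/v1 PodDisruptionBudget
      minAvailable: 1
```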
Internal component URLs are generated from Helm release-aware service names. Do not hardcode short Docker Compose service names such as `plugins` or `recording-api` in production overrides; use the `posthog.serviceHost`, `posthog.serviceUrl`, and `posthog.temporalAddress` helpers when adding new component env vars.
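For chart development, using those helpers in a template could look roughly like this sketch (the helper parameter shapes are illustrative assumptions; check `templates/_helpers.tpl` for the real signatures):

```yaml
# Fragment of a hypothetical template; the dict keys passed to the helper
# are assumed, not taken from the chart source.
env:
  - name: PLUGINS_SERVICE_URL
    value: {{ include "posthog.serviceUrl" (dict "root" . "component" "plugins") | quote }}
```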