From 0613fc0c097e2ee029c643385e63e13e28080bfa Mon Sep 17 00:00:00 2001
From: Alexandre Lamarre
Date: Mon, 15 Sep 2025 17:10:04 -0400
Subject: [PATCH 1/3] docs: add health check configuration

fix pre-commit
---
 content/docs/internals/health-checks.mdx | 249 +++++++++++++++++++++++
 cspell.json                              |   5 +-
 2 files changed, 253 insertions(+), 1 deletion(-)
 create mode 100644 content/docs/internals/health-checks.mdx

diff --git a/content/docs/internals/health-checks.mdx b/content/docs/internals/health-checks.mdx
new file mode 100644
index 000000000..73e201ab5
--- /dev/null
+++ b/content/docs/internals/health-checks.mdx
@@ -0,0 +1,249 @@
---
title: 'Pomerium Health Checks'
description: 'Learn how to configure health checks for Pomerium deployments'
sidebar_label: 'Health Checks'
lang: en-US
keywords:
  - pomerium
  - health
  - checks
  - ready
  - readiness
  - startup
  - docker
  - kubernetes
  - upgrade
---

# Health Checks

This page provides an overview of how to configure internal health checks on Pomerium instances.

Health checks allow you to automatically monitor Pomerium instances and restart or replace them if they become unresponsive or fail. In Kubernetes, they ensure smooth deployment rollouts by waiting for new instances to become healthy before terminating the old ones. On startup, these health checks primarily signal that the Pomerium instance is ready to receive traffic.

## Overview

In Pomerium, these health checks will report on the status of:

- Readiness and health of the embedded envoy instance
- Health of the configuration syncing mechanisms
- Health of the configuration management server
- Storage backend health, where applicable

## HTTP Probes

:::warning

HTTP health checks may not suit all environments. For instance, in environments with limited database connectivity or reliability, they can trigger undesired probe failures. Consider using the [exclude filters](#filters) with the CLI probe instead.

:::

Pomerium exposes HTTP probe endpoints on `127.0.0.1:28080` by default. This server exposes `/status`, `/startupz`, `/readyz`, and `/healthz` for consumption by container orchestration frameworks such as Kubernetes.

For an overview of how probes work with Kubernetes, see the [upstream documentation](https://kubernetes.io/docs/concepts/configuration/liveness-readiness-startup-probes/) and [their configuration options](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/).

To configure automatic checks for your Pomerium instance in Kubernetes, add the following to your deployment manifest:

```yaml
containers:
  - name: pomerium
    # ...
    startupProbe:
      httpGet:
        path: /startupz
        port: 28080
    livenessProbe:
      httpGet:
        path: /healthz
        port: 28080
      initialDelaySeconds: 15
      periodSeconds: 60
      failureThreshold: 10
    readinessProbe:
      httpGet:
        path: /readyz
        port: 28080
      initialDelaySeconds: 15
      periodSeconds: 60
      failureThreshold: 5
```

### Startup

The startup endpoint `/startupz` waits for all components to report ready; in practice, this means the latest version of the configuration has been synced before the instance starts serving traffic.

### Readiness

### Liveness

The `/healthz` endpoint

### Debugging

If your pods are frequently restarted due to failed checks, the endpoints provide human-readable status reports from Pomerium.

For example:

```json
{
  "authenticate.service": {
    "status": "RUNNING"
  },
  "authorize.service": {
    "status": "RUNNING"
  },
  "config.databroker.build": {
    "status": "RUNNING"
  },
  "databroker.sync.initial": {
    "status": "RUNNING"
  },
  "envoy.server": {
    "status": "RUNNING"
  },
  "proxy.service": {
    "status": "RUNNING"
  },
  "storage.backend": {
    "status": "RUNNING",
    "attributes": [
      {
        "Key": "backend",
        "Value": "in-memory"
      }
    ]
  },
  "xds.cluster": {
    "status": "RUNNING"
  },
  "xds.listener": {
    "status": "RUNNING"
  },
  "xds.route-configuration": {
    "status": "RUNNING"
  }
}
```

This report helps identify which components are unhealthy and surfaces their errors, which can point to remediation steps.

If your instance often reports unhealthy components but still serves traffic normally, consider using CLI-based probe checks in Kubernetes and applying their [filter feature](#filters).

## CLI

Pomerium images include the `pomerium` binary, which provides a `health` subcommand. The command exits with status 1 if the instance is unhealthy and 0 if it is healthy.

The CLI health check can be used in environments like Docker Compose and ECS. See the [Compose documentation](https://docs.docker.com/reference/compose-file/services/#healthcheck) and [ECS documentation](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_HealthCheck.html) for details.

```
Usage:
  pomerium health [flags]

Flags:
  -e, --exclude stringArray   list of health checks to exclude from consideration
  -a, --health-addr string    port of the pomerium health check service (default "127.0.0.1:28080")
  -h, --help                  help for health
  -v, --verbose               prints extra health information
```

An example health check configuration for Pomerium in Docker Compose looks like this:

```yaml
healthcheck:
  interval: 30s
  retries: 5
  test:
    - CMD
    - pomerium
    - health
  timeout: 5s
```

In Kubernetes, the same check can be configured as an exec probe:

```yaml
containers:
  - name: pomerium
    # ...
    livenessProbe:
      exec:
        command:
          - pomerium
          - health
          - -a
          # POD_IP must be defined as an environment variable on the container
          - $(POD_IP):28080
      initialDelaySeconds: 5
      periodSeconds: 30
      timeoutSeconds: 5
```

### Filters

Unlike HTTP probes, the CLI provides additional flexibility through exclude filters, which let you ignore specific internal conditions reported by Pomerium when evaluating health.

For example:

```
pomerium health -e storage.backend -e config.databroker.build
```

This command returns exit code 0 even if the storage backend or configuration build is reported as unhealthy.
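
The same filters can be passed to the health check command in other environments. For example, the Docker Compose healthcheck shown above can be extended to ignore storage backend health; a sketch, with the excluded checks adjusted to your environment:

```yaml
healthcheck:
  interval: 30s
  retries: 5
  test:
    - CMD
    - pomerium
    - health
    - --exclude
    - storage.backend
  timeout: 5s
```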

### Debugging

For debugging, use the `-v` (verbose) flag to display the full status reported by Pomerium:

```
pomerium health -v
{
  "statuses": {
    "authenticate.service": {
      "status": "RUNNING"
    },
    "authorize.service": {
      "status": "RUNNING"
    },
    "config.databroker.build": {
      "status": "RUNNING"
    },
    "databroker.sync.initial": {
      "status": "RUNNING"
    },
    "envoy.server": {
      "status": "RUNNING"
    },
    "proxy.service": {
      "status": "RUNNING"
    },
    "storage.backend": {
      "status": "RUNNING",
      "attributes": [
        {
          "Key": "backend",
          "Value": "in-memory"
        }
      ]
    },
    "xds.cluster": {
      "status": "RUNNING"
    },
    "xds.listener": {
      "status": "RUNNING"
    },
    "xds.route-configuration": {
      "status": "RUNNING"
    }
  },
  "checks": [
    "authenticate.service",
    "storage.backend",
    "databroker.sync.initial",
    "xds.cluster",
    "xds.route-configuration",
    "envoy.server",
    "authorize.service",
    "config.databroker.build",
    "proxy.service",
    "xds.listener"
  ]
}
```

diff --git a/cspell.json b/cspell.json
index 263653bde..d52e3c872 100644
--- a/cspell.json
+++ b/cspell.json
@@ -241,7 +241,10 @@
     "exaring",
     "fjhsb",
     "procs",
-    "REUSEPORT"
+    "REUSEPORT",
+    "startupz",
+    "healthz",
+    "readyz"
   ],
   "ignorePaths": [
     "*.mp4",

From 1040d56a5cbac7c9848c89bc4bc9baf805fde129 Mon Sep 17 00:00:00 2001
From: Alexandre Lamarre
Date: Wed, 24 Sep 2025 13:33:17 -0400
Subject: [PATCH 2/3] add systemd health check documentation

---
 content/docs/internals/health-checks.mdx | 31 ++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/content/docs/internals/health-checks.mdx b/content/docs/internals/health-checks.mdx
index 73e201ab5..852c46f0c 100644
--- a/content/docs/internals/health-checks.mdx
+++ b/content/docs/internals/health-checks.mdx
@@ -247,3 +247,34 @@ pomerium health -v

## Systemd

When running Pomerium via systemd, health checks are automatically configured. Pomerium communicates its health status to systemd using the `sd_notify` protocol.

You can view the status with:

```bash
systemctl status
```

If the service is unhealthy, an error with details will be displayed in the status section.

### Disabling

By default, the Pomerium systemd service includes:

```toml
[Service]
Type=notify
#...
WatchdogSec=30s
```

To disable systemd health checks, remove these options from the service configuration.
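
If you prefer not to edit the unit file directly, a drop-in override can achieve a similar result. A minimal sketch that disables only the watchdog, assuming the unit is named `pomerium.service`:

```toml
# /etc/systemd/system/pomerium.service.d/override.conf
# (for example, created with: systemctl edit pomerium)
[Service]
# setting the watchdog interval to zero disables systemd's watchdog logic for this unit
WatchdogSec=0
```

After adding the override, reload systemd and restart the service for the change to take effect.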

Additionally, since Pomerium sends `sd_notify` messages when started via systemd, you can explicitly disable this behavior by adding the following to your `config.yaml`:

```yaml
health_check_systemd_disabled: true
```

From 625876e7848a1006389f40a4b6c718b8ad155ecb Mon Sep 17 00:00:00 2001
From: Alexandre Lamarre
Date: Thu, 25 Sep 2025 12:20:30 -0400
Subject: [PATCH 3/3] readiness + liveness sections

---
 content/docs/internals/health-checks.mdx | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/content/docs/internals/health-checks.mdx b/content/docs/internals/health-checks.mdx
index 852c46f0c..1e860328e 100644
--- a/content/docs/internals/health-checks.mdx
+++ b/content/docs/internals/health-checks.mdx
@@ -25,7 +25,7 @@ Health checks allow you to automatically monitor Pomerium instances and restart

In Pomerium, these health checks will report on the status of:

- Readiness and health of the embedded Envoy instance
- Health of the configuration syncing mechanisms
- Health of the configuration management server
- Storage backend health, where applicable

@@ -74,9 +74,15 @@ The startup endpoint `/startupz` waits for all components to report ready - in p

### Readiness

The `/readyz` endpoint ensures that all components are in the `RUNNING` state and healthy.

If any component reports an error or enters the terminating state, it is treated as a failure.

### Liveness

The `/healthz` endpoint verifies that all components are functioning properly. It also treats the terminating state as healthy to support graceful shutdown.

If any component reports an error, it is treated as a failure.

### Debugging
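
To check what these probes see on a running instance, you can also query the endpoints directly. A quick sketch, assuming the default probe address of `127.0.0.1:28080`:

```
# readiness: non-zero exit if any component is errored or terminating
curl -fsS http://127.0.0.1:28080/readyz

# liveness: non-zero exit only if a component reports an error; terminating still counts as healthy
curl -fsS http://127.0.0.1:28080/healthz
```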