
Add service startup failure diagnostics #39

Merged

Oddly merged 3 commits into main from fix/startup-diagnostics on Mar 4, 2026

Conversation


Oddly commented Mar 1, 2026

When a service fails to start — bad config, missing certs, invalid heap, whatever — the roles previously waited for a blind port timeout (600s for ES, 300s for Kibana) before giving a generic error. The actual cause was buried in the journal on the remote host.

Now every role's restart handler delegates to a restart_and_verify task file that wraps the restart in a block/rescue. If the service module fails or the service dies shortly after starting, the rescue collects the last 50 lines of journalctl output and fails immediately with the actual error message. The ES and Kibana port waits also got the same treatment — they now check both service health and port availability on each retry iteration, failing fast with diagnostics if the service dies mid-wait instead of burning through the full timeout.
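The pattern described above can be sketched as a minimal Ansible task file. File, task, and variable names here are assumptions for illustration, not the repo's actual code:

```yaml
# restart_and_verify.yml -- illustrative sketch. `service_name` is an
# assumed variable; the real task file may differ.
- name: Restart {{ service_name }} and verify it stays up
  block:
    - name: Restart the service
      ansible.builtin.systemd:
        name: "{{ service_name }}"
        state: restarted

    - name: Give the service a moment to crash if it is going to
      ansible.builtin.pause:
        seconds: 5

    - name: Confirm the service is still active
      ansible.builtin.command: systemctl is-active {{ service_name }}
      changed_when: false
  rescue:
    - name: Collect the last 50 journal lines
      ansible.builtin.command: >-
        journalctl -u {{ service_name }} -n 50 --no-pager
      register: journal_output
      changed_when: false

    - name: Fail immediately with the actual error
      ansible.builtin.fail:
        msg: |
          {{ service_name }} failed to start. Last 50 journal lines:
          {{ journal_output.stdout }}
```

The key design point is that the rescue runs on any failure inside the block, so both a failed restart and a service that dies a few seconds later surface the same journalctl diagnostics.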

The elasticsearch_diagnostics molecule scenario validates both paths: normal restart works transparently, and a deliberately bad config (bogus setting injected into elasticsearch.yml) triggers a fast failure whose error message includes the journal output. Runs on a single distro since it's testing generic systemd plumbing, not distro-specific behavior.

Also fixes the KICS workflow's SARIF upload permission (needed security-events: write after the blanket contents: read addition) and moves the upgrade test from push-to-main to a daily schedule so it no longer runs only after merge.
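For context, the permissions interaction works roughly like this: declaring any workflow-level `permissions` key drops every permission not listed, so a blanket `contents: read` silently revokes the SARIF upload's needed scope. A minimal sketch of the corrected block (workflow structure assumed):

```yaml
# Workflow-level permissions for the KICS scan (sketch; the actual
# workflow file may scope these per-job instead).
permissions:
  contents: read          # checkout
  security-events: write  # required by the SARIF upload action
```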

Oddly added 3 commits March 4, 2026 16:09
When a service fails to start (bad config, missing certs, etc.), the
roles now detect the failure within seconds and surface the actual error
from journalctl instead of waiting for a blind port timeout. Each role's
restart handler delegates to a restart_and_verify task file that wraps
the restart in a block/rescue — if the service module fails or the
service dies shortly after, the rescue collects the last 50 journal
lines and fails with a clear diagnostic message.

The ES and Kibana port waits are replaced with a smarter loop that
checks both service health and port availability on each iteration,
failing fast with diagnostics if the service dies mid-wait.
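A minimal sketch of such a combined wait, here as a single shell loop inside an Ansible task; the service name, port, and timeouts are illustrative assumptions:

```yaml
- name: Wait for Elasticsearch, failing fast if the service dies
  ansible.builtin.shell: |
    for i in $(seq 1 60); do
      # If the unit is no longer active, don't wait out the timeout:
      # dump diagnostics and fail immediately.
      if ! systemctl is-active --quiet elasticsearch; then
        echo "elasticsearch died during startup" >&2
        journalctl -u elasticsearch -n 50 --no-pager >&2
        exit 1
      fi
      # Success once the port is listening.
      if ss -ltn | grep -q ':9200 '; then
        exit 0
      fi
      sleep 10
    done
    echo "timed out waiting for port 9200" >&2
    exit 1
  changed_when: false
```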

Includes an elasticsearch_diagnostics molecule scenario that validates
both the transparent happy path and the fast-failure path by injecting a
bogus setting and asserting that the failure message contains log output.

The hand-rolled integration test suite targets specific Proxmox
infrastructure and shouldn't be tracked in the repo.

Kibana opens port 5601 during its Preboot phase (~8 seconds after
start) but takes 1-2 minutes to serve HTTP. Port-based wait_for checks
pass during Preboot, leaving Kibana unable to serve requests when
subsequent tasks or verify scripts run.

Changed all three Kibana wait locations to check for HTTP 200/401 on
/api/status instead of using ss/wait_for on the port:
- roles/kibana/tasks/main.yml (smart watchdog)
- roles/kibana/tasks/restart_and_verify_kibana.yml (handler verify)
- roles/elasticsearch/handlers/restart_kibana.yml (ES cert handler)
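The replacement check can be sketched with the `ansible.builtin.uri` module; the URL scheme and retry budget here are assumptions, not the repo's actual values:

```yaml
# Sketch: wait until Kibana serves HTTP rather than merely opening
# port 5601 during Preboot. Kibana answers 401 when auth is enabled
# and 200 otherwise, so both count as ready.
- name: Wait until Kibana actually serves /api/status
  ansible.builtin.uri:
    url: "https://localhost:5601/api/status"  # scheme assumed
    status_code: [200, 401]
    validate_certs: false
  register: kibana_status
  retries: 30
  delay: 10
  until: kibana_status.status in [200, 401]
```

Connection-refused attempts during Preboot-to-ready simply retry; the task only succeeds once a real HTTP response comes back.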

The ES role's restart_kibana handler also gets full diagnostics: if
Kibana fails to start after the cert restart, the handler collects
journalctl output and fails with an actionable message instead of
silently leaving Kibana down.
Oddly force-pushed the fix/startup-diagnostics branch from 5da7a1f to ac6f33b on March 4, 2026 15:09
Oddly merged commit f38144a into main on Mar 4, 2026
24 checks passed
Oddly deleted the fix/startup-diagnostics branch on March 4, 2026 19:05
