Monitoring integrations fail with 403 Forbidden on internal APIs after upgrade to 9.2.0 (Fleet + Elastic Agent) #11037

@ReneMuetterlein

Description

We are running an air-gapped RHEL 9.6 system with a locally installed Elastic Stack using self-signed certificates. The system has no internet access and is used exclusively within an internal network.

We recently upgraded the entire stack (Elasticsearch, Kibana, Logstash, and Elastic Agent) from 9.1.4 to 9.2.0. After the upgrade, Fleet Server and the Elastic Agent were reinstalled and re-enrolled successfully, and all components report as HEALTHY, except for the monitoring integrations.
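
For completeness, the reinstall and re-enrollment were done roughly as follows (the Fleet Server URL, enrollment token, and CA path are placeholders for our internal values):

# Remove the old 9.1.4 installation, then install and enroll the 9.2.0 Agent
elastic-agent uninstall
elastic-agent install \
  --url=https://fleet-server.internal:8220 \
  --enrollment-token=<enrollment-token> \
  --certificate-authorities=/etc/pki/tls/certs/internal-ca.crt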

The elasticsearch/metrics, kibana/metrics, and http/metrics units consistently fail with 403 Forbidden errors, like so:

elastic-agent status
└─ elastic-agent
   ├─ elasticsearch/metrics-default
   │     └─ elasticsearch.ccr: Forbidden (/_nodes/_local/nodes)
   ├─ http/metrics-default
   │     └─ http.json: Forbidden (/api/task_manager/_background_task_utilization)
   └─ kibana/metrics-default
         └─ kibana.status: Forbidden (/api/status)

✅ What we've tested

  • Created a dedicated user with:
    • remote_monitoring_agent
    • kibana_system
    • custom role with monitor, cluster:monitor/nodes/stats, cluster:monitor/nodes/info, and index access to .monitoring-*, .kibana*, .fleet-*, .logs-*, .metrics-*
  • Verified access with this user using curl and browser: all API endpoints return valid responses.
  • Replaced user/password auth with an explicit API key with matching role descriptors (role and key creation are sketched after this list) → same result.
  • Reinstalled and re-enrolled the Agent (clean state).
  • Updated monitoring integrations in Kibana and verified credentials are used correctly.
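
Roughly how the dedicated role and the API key were created (user names, passwords, and the key name are sanitized placeholders; the index privileges shown stand in for the read-level access listed above):

# Custom role for the monitoring user (run as an admin user)
curl -k -u <admin_user>:<password> -X PUT "https://localhost:9200/_security/role/agent_monitoring" \
  -H 'Content-Type: application/json' -d '
{
  "cluster": ["monitor", "cluster:monitor/nodes/stats", "cluster:monitor/nodes/info"],
  "indices": [
    {
      "names": [".monitoring-*", ".kibana*", ".fleet-*", ".logs-*", ".metrics-*"],
      "privileges": ["read", "view_index_metadata"]
    }
  ]
}'

# API key with matching role descriptors (used instead of user/password, same 403 result)
curl -k -u <admin_user>:<password> -X POST "https://localhost:9200/_security/api_key" \
  -H 'Content-Type: application/json' -d '
{
  "name": "agent-monitoring-key",
  "role_descriptors": {
    "agent_monitoring": {
      "cluster": ["monitor", "cluster:monitor/nodes/stats", "cluster:monitor/nodes/info"],
      "indices": [
        {
          "names": [".monitoring-*", ".kibana*", ".fleet-*", ".logs-*", ".metrics-*"],
          "privileges": ["read", "view_index_metadata"]
        }
      ]
    }
  }
}'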

🧩 What we observe

  • The Agent is fully connected and enrolled.
  • The monitoring integrations turn DEGRADED almost immediately, typically within 1–2 seconds of starting.
  • The failing API endpoints are (direct curl checks against them are sketched after this list):
    • /_nodes/_local/nodes
    • /_nodes/_local/stats/ingest
    • /api/task_manager/_background_task_utilization
    • /api/status
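
The same endpoints return valid responses when queried directly with the same credentials, roughly like this (hosts and the user name are placeholders; -k because of the self-signed certificates):

# Elasticsearch endpoints requested by the elasticsearch/metrics units
curl -k -u agent_monitoring:<password> "https://localhost:9200/_nodes/_local/nodes"
curl -k -u agent_monitoring:<password> "https://localhost:9200/_nodes/_local/stats/ingest"

# Kibana endpoints requested by the kibana/metrics and http/metrics units
curl -k -u agent_monitoring:<password> "https://localhost:5601/api/status"
curl -k -u agent_monitoring:<password> "https://localhost:5601/api/task_manager/_background_task_utilization"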

We assume these endpoints are protected by additional internal access controls in 9.2.0 that were either less strict or handled differently in 9.1.4.


✅ Important to note

  • All of this worked flawlessly in 9.1.4
  • The upgrade to 9.2.0 broke it without any config change on our part
  • We could not find any documentation describing permission changes related to monitoring integrations in 9.2.0

💡 Request

Please clarify:

  • Have the monitoring integrations changed how they authenticate or access internal APIs in 9.2.0?
  • Is it now required to use service accounts or specific internal users (elastic, kibana_system, etc.)?
  • Can we use a properly scoped API key instead — and if so, which exact privileges are needed?

Happy to provide sanitized logs or further details if needed.

Thanks in advance!
