
Fix the telemetry collection of Logstash with metricbeat monitoring. #182304

Conversation

mashhurs
Contributor

@mashhurs mashhurs commented May 1, 2024

Summary

Telemetry data collection is broken for Logstash when it is monitored with metricbeat. This PR covers the following issues:

1) Resolve cluster UUID

  • With self monitoring, KB creates the .monitoring-es* index with a type field in the mapping, defaulting to type:cluster_state, and uses the type:cluster_state condition when fetching the cluster UUID.
  • With metricbeat, however, the type field does not exist in the mapping metricbeat creates, so the cluster UUID cannot be resolved: the query is wrong and returns an empty result.
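Given the above, the cluster-UUID query has to branch on the document shape instead of always assuming the legacy type field. A minimal hypothetical sketch (not the actual Kibana code; the metricset name for cluster stats is an assumption):

```typescript
// Sketch: choose the cluster-UUID filter clause per monitoring shape.
interface UuidQueryOptions {
  metricbeat: boolean; // true when reading metricbeat-shipped indices
}

function buildClusterUuidFilter(opts: UuidQueryOptions): object[] {
  if (opts.metricbeat) {
    // metricbeat documents have no `type` field in the mapping;
    // match on the metricset instead (name assumed for illustration)
    return [{ term: { "metricset.name": "cluster_stats" } }];
  }
  // legacy self-monitoring documents keep the `type` field
  return [{ term: { type: "cluster_state" } }];
}
```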

2) type field mismatch in the queries (especially the state query), plus an incorrect collapse field

  • metricbeat sends the state event with metricset.name:node, but the state fetch query ignores this condition and instead uses the legacy type:logstash_state condition, which is incorrect.
  • in the queries, the collapse field is not correct: the data shape changed from legacy to metricbeat monitoring, but the queries are still tightly coupled to the legacy shape (1, 2, 3)
  • in the queries, filter_path is also not correct, in both the state query and the stats query
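The mismatches above can be sketched as shape-dependent query parts. The metricbeat-side names (metricset.name:node, logstash.node.stats.logstash.uuid) come from this PR's discussion; the legacy collapse field name is an assumption for illustration:

```typescript
// Sketch only, not the code in this PR.
type MonitoringShape = "legacy" | "metricbeat";

// Condition that selects Logstash *state* documents
function stateQueryCondition(shape: MonitoringShape): object {
  return shape === "metricbeat"
    ? { term: { "metricset.name": "node" } } // metricbeat state event
    : { term: { type: "logstash_state" } };  // legacy state document
}

// Field to collapse on so each Logstash node appears once
function collapseField(shape: MonitoringShape): string {
  return shape === "metricbeat"
    ? "logstash.node.stats.logstash.uuid"
    : "logstash_stats.logstash.uuid"; // assumed legacy field name
}
```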

3) Logstash state data frequency

  • metricbeat sends the state event (_node/stats/pipeline?graph=true) data once, while legacy monitoring sends it frequently. The KB telemetry fetcher queries ES over the latest update period only, where the state data will not be available. Sending once may be a reasonable design for network efficiency; if so, we should decide on the expected behavior:
    • still collect it: we have to tune the query so it picks up the state data.
    • leave it empty, assuming the state didn't change (personally, I would not favor this option, since collecting only once risks data loss)
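The first option could be implemented by giving the state query a wider time window than the stats query, since stats documents arrive every collection cycle but the metricbeat state event may only exist further back. A hypothetical sketch (window sizes and field names are assumptions, not the code in this PR):

```typescript
// Sketch: per-document-kind time windows for the telemetry fetch queries.
const STATS_WINDOW_MS = 10 * 60 * 1000;      // stats: recent cycle only
const STATE_WINDOW_MS = 24 * 60 * 60 * 1000; // state: look much further back

function timeRangeFilter(kind: "stats" | "state", nowMs: number): object {
  const windowMs = kind === "stats" ? STATS_WINDOW_MS : STATE_WINDOW_MS;
  return { range: { timestamp: { gte: nowMs - windowMs, lte: nowMs } } };
}
```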


@mashhurs mashhurs self-assigned this May 1, 2024
@apmmachine
Contributor

🤖 GitHub comments

Expand to view the GitHub comments

Just comment with:

  • /oblt-deploy : Deploy a Kibana instance using the Observability test environments.
  • run docs-build : Re-trigger the docs validation. (use unformatted text in the comment!)

Member

@afharo afharo left a comment


Thank you for taking a look at this! We needed someone with knowledge about the new shape of the data to fix this. 🧡

Looks great to me! My only comment: can we normalize the response of get_es_stats so that it doesn't leak out to other plugins and functions?
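One way to read this suggestion: map the raw search hit to a small, stable interface before it leaves the monitoring plugin, so callers never see the legacy-vs-metricbeat document shape. Everything below is a hypothetical sketch; the field locations for the metricbeat shape are assumptions:

```typescript
// Sketch: normalized view of an ES stats document, independent of shape.
interface NormalizedEsStats {
  clusterUuid: string;
  version?: string;
}

function normalizeEsStats(hit: Record<string, any>): NormalizedEsStats {
  const src = hit._source ?? {};
  // accept either the legacy or the (assumed) metricbeat location of the UUID
  const clusterUuid = src.cluster_uuid ?? src.elasticsearch?.cluster?.id ?? "";
  const version = src.version ?? src.elasticsearch?.version;
  return { clusterUuid, version };
}
```

With this in place, other plugins depend only on NormalizedEsStats, and a future shape change is absorbed inside the normalizer.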

@mashhurs mashhurs marked this pull request as ready for review May 5, 2024 14:44
@mashhurs mashhurs requested review from a team as code owners May 5, 2024 14:44
@mashhurs mashhurs force-pushed the telemetry-fix-for-logstash-with-metricbeat-monitoring branch from 8ba410c to c20f3f9 Compare May 5, 2024 14:49
@mashhurs mashhurs requested a review from afharo May 5, 2024 14:51
@mashhurs
Contributor Author

mashhurs commented May 5, 2024

FYI: I have updated the unit test cases to align with the current changes; I will try to add cases for metricbeat.

Member

@afharo afharo left a comment


Thank you for bearing with me and explaining the reasoning behind the changes.

FWIW, my main concern is that we're leaking code outside the monitoring plugin. Other than that, all looking good aside from the global logstash request now being moved inside the loop.

@mashhurs mashhurs force-pushed the telemetry-fix-for-logstash-with-metricbeat-monitoring branch 2 times, most recently from 152d6a8 to f86444c Compare May 7, 2024 23:55
@afharo
Member

afharo commented May 8, 2024

Hmm, the failing tests indicate that we're somehow not returning the monitoring data... Maybe the new query is failing?

UPDATE: Found it in the logs:

[00:00:06]           │ proc [kibana] [2024-05-08T00:55:24.693+00:00][WARN ][plugins.usageCollection.usage-collection.collector-set] ResponseError: search_phase_execution_exception
[00:00:06]           │ proc [kibana] 	Caused by:
[00:00:06]           │ proc [kibana] 		illegal_argument_exception: no mapping found for `logstash.node.stats.logstash.uuid` in order to collapse on
[00:00:06]           │ proc [kibana] 	Root causes:
[00:00:06]           │ proc [kibana] 		illegal_argument_exception: no mapping found for `logstash.node.stats.logstash.uuid` in order to collapse on
[00:00:06]           │ proc [kibana]     at KibanaTransport.request (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a9d9a28162911021/elastic/kibana-pull-request/kibana-build-xpack/node_modules/@elastic/transport/lib/Transport.js:492:27)
[00:00:06]           │ proc [kibana]     at processTicksAndRejections (node:internal/process/task_queues:95:5)
[00:00:06]           │ proc [kibana]     at KibanaTransport.request (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a9d9a28162911021/elastic/kibana-pull-request/kibana-build-xpack/node_modules/@kbn/core-elasticsearch-client-server-internal/src/create_transport.js:51:16)
[00:00:06]           │ proc [kibana]     at ClientTraced.SearchApi [as search] (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a9d9a28162911021/elastic/kibana-pull-request/kibana-build-xpack/node_modules/@elastic/elasticsearch/lib/api/api/search.js:66:12)
[00:00:06]           │ proc [kibana]     at fetchLogstashStats (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a9d9a28162911021/elastic/kibana-pull-request/kibana-build-xpack/node_modules/@kbn/monitoring-plugin/server/telemetry_collection/get_logstash_stats.js:225:19)
[00:00:06]           │ proc [kibana]     at getLogstashStats (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a9d9a28162911021/elastic/kibana-pull-request/kibana-build-xpack/node_modules/@kbn/monitoring-plugin/server/telemetry_collection/get_logstash_stats.js:312:5)
[00:00:06]           │ proc [kibana]     at async Promise.all (index 2)
[00:00:06]           │ proc [kibana]     at getAllStats (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a9d9a28162911021/elastic/kibana-pull-request/kibana-build-xpack/node_modules/@kbn/monitoring-plugin/server/telemetry_collection/get_all_stats.js:34:49)
[00:00:06]           │ proc [kibana]     at async Promise.all (index 1)
[00:00:06]           │ proc [kibana]     at Collector.fetch (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a9d9a28162911021/elastic/kibana-pull-request/kibana-build-xpack/node_modules/@kbn/monitoring-plugin/server/telemetry_collection/register_monitoring_telemetry_collection.js:227:33)
[00:00:06]           │ proc [kibana]     at CollectorSet.fetchCollector (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a9d9a28162911021/elastic/kibana-pull-request/kibana-build-xpack/node_modules/@kbn/usage-collection-plugin/server/collector/collector_set.js:141:24)
[00:00:06]           │ proc [kibana]     at fetch_monitoringTelemetry (/var/lib/buildkite-agent/builds/kb-n2-4-spot-a9d9a28162911021/elastic/kibana-pull-request/kibana-build-xpack/node_modules/@kbn/usage-collection-plugin/server/collector/collector_set.js:175:103) {"service":{"node":{"roles":["background_tasks","ui"]}}}
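The failure shows a collapse on logstash.node.stats.logstash.uuid against an index whose mapping lacks that field. A defensive pattern is to choose the index pattern and the collapse field together, so the collapse field is always mapped in the indices being searched. All names below are illustrative assumptions, not the fix actually committed:

```typescript
// Sketch: keep the index pattern and collapse field consistent per shape.
function logstashSearchParams(metricbeat: boolean) {
  return metricbeat
    ? {
        index: ".monitoring-logstash-8-mb*", // assumed metricbeat pattern
        collapse: { field: "logstash.node.stats.logstash.uuid" },
      }
    : {
        index: ".monitoring-logstash-*",
        collapse: { field: "logstash_stats.logstash.uuid" }, // assumed legacy field
      };
}
```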

@mashhurs mashhurs force-pushed the telemetry-fix-for-logstash-with-metricbeat-monitoring branch from f86444c to 1701038 Compare May 8, 2024 18:21
@mashhurs
Contributor Author

mashhurs commented May 8, 2024

@elasticmachine merge upstream

@mashhurs mashhurs requested a review from afharo May 8, 2024 23:08
Member

@afharo afharo left a comment


LGTM! This is great! Thanks for such an effort!

@mashhurs
Contributor Author

mashhurs commented May 8, 2024

LGTM! This is great! Thanks for such an effort!

Thank you so much @afharo. This happened thanks to your huge help; much appreciated!

@mashhurs mashhurs enabled auto-merge (squash) May 8, 2024 23:25
@mashhurs mashhurs disabled auto-merge May 8, 2024 23:25
@kibana-ci
Collaborator

💚 Build Succeeded

Metrics [docs]

Unknown metric groups

ESLint disabled line counts

id before after diff
monitoring 18 20 +2

Total ESLint disabled count

id before after diff
monitoring 25 27 +2

History

To update your PR or re-run it, just comment with:
@elasticmachine merge upstream

cc @mashhurs

@mashhurs mashhurs enabled auto-merge (squash) May 9, 2024 00:55
@mashhurs mashhurs merged commit 26f6977 into elastic:main May 9, 2024
17 checks passed
@kibanamachine kibanamachine added v8.15.0 backport:skip This commit does not require backporting labels May 9, 2024
@mashhurs
Contributor Author

@afharo, @neptunian can we please backport this change to upcoming 8.14.x releases?

@afharo afharo added backport:prev-minor Backport to the previous minor version (i.e. one version back from main) and removed backport:skip This commit does not require backporting labels May 13, 2024
@afharo
Member

afharo commented May 13, 2024

I've added the appropriate label to back port this PR to the previous minor.

Did the same with #182857

Hopefully, our kibanamachine bot backports them for us.

kibanamachine pushed a commit to kibanamachine/kibana that referenced this pull request May 13, 2024
…lastic#182304)

---------

Co-authored-by: Alejandro Fernández Haro <afharo@gmail.com>
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
(cherry picked from commit 26f6977)
@kibanamachine
Contributor

💚 All backports created successfully

Status Branch Result
8.14

Note: Successful backport PRs will be merged automatically after passing CI.

Questions ?

Please refer to the Backport tool documentation

kibanamachine added a commit that referenced this pull request May 13, 2024
…oring. (#182304) (#183331)

# Backport

This will backport the following commits from `main` to `8.14`:
- [Fix the telemetry collection of Logstash with metricbeat monitoring.
(#182304)](#182304)

<!--- Backport version: 9.4.3 -->

### Questions ?
Please refer to the [Backport tool
documentation](https://github.com/sqren/backport)


Co-authored-by: Mashhur <99575341+mashhurs@users.noreply.github.com>
@miltonhultgren
Contributor

@afharo Thank you so much for sharing all your knowledge here and getting this to done!

Labels
8.14 candidate apm:review backport:prev-minor Backport to the previous minor version (i.e. one version back from main) release_note:fix v8.14.0 v8.15.0
Development

Successfully merging this pull request may close these issues.

Telemetry missing when Logstash is monitored exclusively by Metricbeat
8 participants