
Conversation

@graphaelli
Member

Proposed commit message

add OTLP receiver input package

Checklist

  • I have reviewed tips for building integrations and this pull request is aligned with them.
  • I have verified that all data streams collect metrics or logs.
  • I have added an entry to my package's changelog.yml file.
  • I have verified that Kibana version constraints are current according to guidelines.
  • I have verified that any added dashboard complies with Kibana's Dashboard good practices

@graphaelli added the enhancement label on Nov 18, 2025
@andrewkroh added the documentation and New Integration labels on Nov 18, 2025
@graphaelli
Member Author

Unfortunately we end up with per-signal transforms (logs, in this case) that shouldn't be there. Is there any way to eliminate those?

```yaml
service:
  pipelines:
    logs/otlp_receiver-otelcol-otelcol-otlp_receiver-otlp_receiver:
      receivers:
        - otlp/otlp_receiver-otelcol-otelcol-otlp_receiver-otlp_receiver
      processors:
        - >-
          transform/otlp_receiver-otelcol-otelcol-otlp_receiver-otlp_receiver-routing
    metrics/otlp_receiver-otelcol-otelcol-otlp_receiver-otlp_receiver:
      receivers:
        - otlp/otlp_receiver-otelcol-otelcol-otlp_receiver-otlp_receiver
      processors:
        - >-
          transform/otlp_receiver-otelcol-otelcol-otlp_receiver-otlp_receiver-routing
    traces/otlp_receiver-otelcol-otelcol-otlp_receiver-otlp_receiver:
      receivers:
        - otlp/otlp_receiver-otelcol-otelcol-otlp_receiver-otlp_receiver
      processors:
        - >-
          transform/otlp_receiver-otelcol-otelcol-otlp_receiver-otlp_receiver-routing
processors:
  transform/otlp_receiver-otelcol-otelcol-otlp_receiver-otlp_receiver-routing:
    log_statements:
      - context: log
        statements:
          - set(attributes["data_stream.type"], "logs")
          - set(attributes["data_stream.dataset"], "otlp_receiver")
          - set(attributes["data_stream.namespace"], "default")
```
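
For context, a minimal sketch (assumed wiring, not from this PR) of how these routed attributes are consumed downstream: with the Elasticsearch exporter's dynamic document routing, each record is indexed into a data stream composed from its data_stream.* attributes.

```yaml
# Sketch only: with dynamic document routing, the exporter indexes each record
# into {data_stream.type}-{data_stream.dataset}-{data_stream.namespace}, so the
# statements above would route logs to logs-otlp_receiver-default.
exporters:
  elasticsearch:
    endpoint: https://localhost:9200   # illustrative endpoint
```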

btrieger and others added 14 commits November 18, 2025 16:38
…ic#15967)

* Add web proxy event support

* Bump Version

* Add link to changelog

* Add new fields to readme

* Add type to changelog

* Add newlines and modify version to 1.7.0
…ic#16004)

Made with ❤️ by updatecli

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
… using GuardDuty API (elastic#15858)

Updated wording regarding the data duplication issue with the Amazon GuardDuty API.

---------

Co-authored-by: Dan Kortschak <dan.kortschak@elastic.co>
… during the split operation and returned as the root object

The Google Workspace Reports API sometimes does not return the `items[]` array, resulting
in the absence of the target field in the `response.split` operation. This leads to the
root-level object being returned, which causes failures in the ingest pipeline.

An issue[1] has been created to resolve the problem with the `split[].ignore_empty_value` operation.

To address this for now, a `drop` processor has been added at the start of the pipeline to
discard events that are not required (a sketch of its shape follows the list below).

Here is the list of affected data streams:

- access_transparency
- admin
- context_aware_access
- device
- drive
- gcp
- group_enterprise
- groups
- login
- rules
- saml
- token
- user_accounts

[1] elastic/beats#47699
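
A minimal sketch of what such a guard can look like at the start of an ingest pipeline; the field and condition here are assumptions for illustration, not the integration's exact ones:

```yaml
# Illustrative drop guard: discard documents where the expected items[] array
# was absent and the root response object leaked through the split operation.
# The field name and condition are assumptions.
processors:
  - drop:
      if: ctx.json?.items == null
```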
)

* Add health_status field to status change logs data stream
* Add processor for health_status field in status_change_logs data stream
* Add agent status alert rules
* Use more specific index for system metrics, remove RLIKE clauses, and fix field used for CPU usage in alerting rules
Adds whitespace normalization for the SidList field in Windows
Security event 4908 (Special Groups Logon table modified). The
ingest pipeline now uses a gsub processor to normalize separators
before parsing, and the Painless script handles the normalized
format correctly.

Test data originates from
elastic/beats@dd7a1b3
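
For illustration, the normalization step described above could take roughly this shape in an ingest pipeline; the field name and pattern are assumptions, not the PR's exact values:

```yaml
# Illustrative gsub normalization: collapse runs of whitespace in SidList into
# a single separator before the Painless script parses the table.
processors:
  - gsub:
      field: winlog.event_data.SidList
      pattern: '\s+'
      replacement: ' '
```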
@elasticmachine

💚 Build Succeeded


@graphaelli
Member Author

A few other issues, which we'll need to get over to Fleet (quoted in the replies below):

@jsoriano
Member

> Is there any way to eliminate those?

It is not possible to eliminate these processors with the current implementation. They are used to route the data to a data stream matching the index template managed by Fleet, as configured by users.

As input packages work now, they are expected to write to a specific data stream. The user can configure the dataset and namespace, and Fleet configures the template for them, allowing customizations through the @custom component templates and so on.
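
To make that concrete, here is an illustrative view of what Fleet manages for a single input package policy, following the usual `<type>-<dataset>-<namespace>` conventions (all names assumed):

```yaml
# Illustrative naming for a policy with dataset "otlp_receiver" and namespace
# "default" (assumed, for orientation only):
#   target data stream:  logs-otlp_receiver-default
#   index template:      logs-otlp_receiver          # managed by Fleet
#   user overrides:      logs-otlp_receiver@custom   # component template
```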

> • we end up only with permissions to write to the data type this input requests and there doesn't appear to be a way to write to all of logs + metrics + traces + profiles

Yes, this is the expected behavior for the current implementation: each input package policy is only expected to collect one type of data.

> • can't configure elasticapm as the exporter even though it's a connector - https://github.com/elastic/integrations/pull/16003/files#diff-480324e221eb30e92a88eeeb9a01340a857d6e29d757fc7d9dabf91ae369da6fR36-R38

For Fleet-managed inputs and integrations, the exporters (that is, the outputs) are expected to be managed by Fleet and not included in configuration templates.

Connectors could work, though; it would be interesting to complete the support for them. Even if it's not possible to define exporters, it should be possible to define connectors and include them in pipelines, but I don't think we have tested this. It would create one connector per policy in any case.
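
For reference, the general shape of a connector in collector pipelines, hand-written rather than Fleet-generated (component names illustrative): the connector is listed as an exporter of one pipeline and a receiver of another.

```yaml
connectors:
  elasticapm: {}   # illustrative; any connector follows the same pattern
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [elasticapm]    # the connector consumes this pipeline's traces...
    metrics/aggregated:
      receivers: [elasticapm]    # ...and emits derived metrics into this one
      exporters: [elasticsearch]
```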

Btw, out of curiosity: this elasticapm connector is used to enrich data, so why wasn't it implemented as a processor? To ensure that multiple pipelines can reuse the same instance?


This package looks pretty particular. I guess the idea is to enable the OTLP endpoint and allow ingestion of any kind of data, which would then get routed on ingestion?

I guess that for this case we could add some setting in packages that disables all the logic for index and permission management and the related UI components. It is difficult to estimate the scope of this, because many things assume that each policy collects a specific kind of data in a specific data stream; we don't have anything like that yet.

@kpollich
Member

kpollich commented Nov 20, 2025

> Yes, this is the expected behavior for the current implementation: each input package policy is only expected to collect one type of data.

We will need to address this as a gap, then. IIUC, OTel receivers can collect multiple signal types, and we'll want to support document routing in these input packages: https://www.elastic.co/docs/reference/edot-collector/components/elasticsearchexporter#document-routing

We could either generate `logs_index` + `traces_index` + `metrics_index` properties for the Elasticsearch exporter in Fleet based on the input package configuration, or rely on dynamic routing here. Not sure which is preferred.
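
The first option could render roughly as static per-signal indices on the exporter; the Elasticsearch exporter does expose per-signal index settings, though how Fleet would fill them in is hypothetical:

```yaml
# Hypothetical Fleet-rendered exporter config for the first option: static
# per-signal indices derived from the package policy's dataset and namespace.
exporters:
  elasticsearch:
    endpoint: https://localhost:9200   # illustrative
    logs_index: logs-otlp_receiver-default
    metrics_index: metrics-otlp_receiver-default
    traces_index: traces-otlp_receiver-default
```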

@jsoriano
Member

Issue created: elastic/package-spec#1023

@graphaelli
Member Author

Thank you both.

> Btw, out of curiosity: this elasticapm connector is used to enrich data, so why wasn't it implemented as a processor? To ensure that multiple pipelines can reuse the same instance?

The elasticapm connector is used to calculate metrics from the various signals. There is a separate elasticapm processor that is used to enrich spans.
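
In collector terms the two sit in different places; extending the earlier connector sketch (wiring assumed, not taken from this package):

```yaml
processors:
  elasticapm: {}   # processor: enriches spans in-pipeline
connectors:
  elasticapm: {}   # connector: calculates metrics from the signals it receives
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [elasticapm]   # span enrichment
      exporters: [elasticapm]    # feeds the connector
    metrics/apm:
      receivers: [elasticapm]    # calculated APM metrics
      exporters: [elasticsearch]
```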

Created #16069 to capture requirements for this input, since it's going to be more involved than just this pull request. I'll close this; let's move sub-issues and further discussion there.

@graphaelli closed this on Nov 20, 2025