
[receiver/prometheus] Add config option to allow dropping of untyped metrics #16768

Closed
sky333999 opened this issue Dec 7, 2022 · 20 comments · Fixed by #32605
Assignees
Labels
enhancement New feature or request receiver/prometheus Prometheus receiver

Comments

@sky333999
Contributor

sky333999 commented Dec 7, 2022

Component(s)

receiver/prometheus

Is your feature request related to a problem? Please describe.

As a result of open-telemetry/opentelemetry-collector#1103, all untyped metrics are currently converted to gauges, and there is no way to determine later in the pipeline whether a metric was originally untyped.

While converting to gauge might be ok in most cases, there could be scenarios where customers would instead choose to drop all untyped metrics (for example, to filter out any Go internal scrape_ metrics).

Describe the solution you'd like

Add a config option that allows customers to specify whether untyped metrics should be dropped.
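
For illustration, a minimal sketch of what such an option might look like (the drop_untyped_metrics key is hypothetical and not an existing receiver setting):

receivers:
  prometheus:
    # hypothetical knob, for illustration only: drop metrics whose
    # original Prometheus type is unknown/untyped
    drop_untyped_metrics: true
    config:
      scrape_configs:
        - job_name: 'example'
          static_configs:
            - targets: ['localhost:8888']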

Describe alternatives you've considered

An alternative could be to introduce a label on the metric to capture the "original" metric type, thus allowing customers to then use something like a filter processor to drop those metrics if needed.
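
As a rough sketch of this alternative, assuming the receiver attached a hypothetical datapoint attribute such as prometheus.type to originally untyped metrics, a filter processor with OTTL conditions could then drop them further down the pipeline:

processors:
  filter/drop_untyped:
    error_mode: ignore
    metrics:
      datapoint:
        # "prometheus.type" is a hypothetical attribute the receiver would have to set
        - 'attributes["prometheus.type"] == "unknown"'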

Additional context

No response

@sky333999 sky333999 added enhancement New feature or request needs triage New item requiring triage labels Dec 7, 2022
@github-actions
Contributor

github-actions bot commented Dec 7, 2022

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@fatsheep9146 fatsheep9146 added the receiver/prometheus Prometheus receiver label Dec 7, 2022
@github-actions
Contributor

github-actions bot commented Dec 7, 2022

Pinging code owners for receiver/prometheus: @Aneurysm9 @dashpole. See Adding Labels via Comments if you do not have permissions to add labels yourself.

@dashpole
Contributor

dashpole commented Dec 7, 2022

This sounds reasonable to me. Also related is open-telemetry/wg-prometheus#69, which would try to preserve the notion that the metric type was unknown.

@fatsheep9146
Contributor

Do you have enough bandwidth to add this config? @dashpole

@kovrus
Member

kovrus commented Dec 8, 2022

I am trying to get more familiar with the code base of the Prometheus components. I'd be happy to work on this issue.

Do we want to address this issue with the config option or preserve the notion of unknown type in metric attributes?

@dashpole
Contributor

dashpole commented Dec 8, 2022

Do we want to address this issue with the config option or preserve the notion of unknown type in metric attributes?

I think preserving the notion of unknown would be better, but is definitely less straightforward.

If we choose to try and preserve the notion of unknown, we should probably make changes to the conversion spec here before moving forward with the implementation.

Do you have enough bandwidth to add this config?

Not at the moment.

@sky333999
Contributor Author

Would memory overhead also be a consideration when adding an additional attribute to every metric, for all consumers of the receiver?

Perhaps that should also be opt-in via a config option? Something along the lines of:

config:
    includeMetaAttributes:
        - originMetricType

Also, one more option is to have the ability to both drop untyped metrics and add this additional attribute - not sure if that's overkill.
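
Purely for illustration, the combined idea might extend the sketch above like so (both keys are hypothetical, not existing settings):

config:
    dropUntypedMetrics: true
    includeMetaAttributes:
        - originMetricType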

@dashpole
Contributor

dashpole commented Dec 8, 2022

I would expect only unknown-typed metrics to get an additional attribute. Metrics with a type should be left unmodified.

@khanhntd
Contributor

khanhntd commented Dec 8, 2022

Would it be better to not convert to Gauge but instead convert to MetricTypeEmpty? There are several reasons for that:

  • Firstly, there are no processors/exporters that support unknown-typed metrics. If we convert to Gauge, would it corrupt the data? It's better to let the processors/exporters set appropriate actions for the unknown type.
  • Secondly, we will always have a resource attribute identifying the receiver (e.g. awsemfexporter knows the metrics are coming from prometheus).
  • Thirdly, it's a waste to have additional datapoint attributes, and they scale with the number of metrics (not a good thing for resource usage).

@dashpole
Contributor

dashpole commented Dec 8, 2022

We discussed using MetricTypeEmpty around when this issue was first opened. From what I recall, MetricTypeEmpty isn't a "real" OTel metric type. It is the default value of the pdata metric type to indicate that something isn't right, and processors/exporters that are given such a metric are expected to drop it with an error.

Firstly, there are no processors/exporters that support unknown-typed metrics. If we convert to Gauge, would it corrupt the data? It's better to let the processors/exporters set appropriate actions for the unknown type.

That's because there isn't such a type to reference today :). The prometheus, prometheusremotewrite, and googlemanagedprometheus exporters would make use of such a type if it existed. I'm not sure what you mean about corrupting the data. At the time, Gauge seemed like the least bad option, but I can understand that being confusing in some cases.

Secondly, we will always have a resource attribute identifying the receiver (e.g. awsemfexporter knows the metrics are coming from prometheus).

Where does the prometheus receiver set that resource attribute, though?

Thirdly, it's a waste to have additional datapoint attributes, and they scale with the number of metrics (not a good thing for resource usage).

A single, non-identifying attribute on a (hopefully) small number of metrics shouldn't make much of a difference in terms of increased resource usage. My bigger hesitation is that looking for a particular attribute seems like a poor contract for exporters. Something like a data point flag feels like a better fit...

@Aneurysm9
Member

A single, non-identifying attribute on a (hopefully) small number of metrics shouldn't make much of a difference in terms of increased resource usage. My bigger hesitation is that looking for a particular attribute seems like a poor contract for exporters. Something like a data point flag feels like a better fit...

+1

I think a data point flag would be the ideal representation of that information. I'm not sure whether it would have any applicability outside of this use case, though, which may lead to resistance to including a new flag in the spec.

@khanhntd
Contributor

Hi @kovrus,
Thanks for contributing. Can I ask whether there has been any progress on this issue? If not, we can pick it up and deliver the solution, since it's time-sensitive for us.

@khanhntd
Contributor

I am working on it. Please assign this issue to me, though! Will have the PR by the end of the day.

@fatsheep9146
Contributor

If we choose to try and preserve the notion of unknown, we should probably make changes to the conversion spec here before moving forward with the implementation.

@khanhntd thanks for your contribution!
But I think before the implementation, you should make changes to the conversion spec as @dashpole suggested.

@khanhntd
Contributor

khanhntd commented Dec 21, 2022

Thanks @fatsheep9146 for the recommendation. Yes, that is my next TODO (as mentioned in the PR). I'm fine with updating the spec first too.

@github-actions
Contributor

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@github-actions github-actions bot added the Stale label Feb 20, 2023
@atoulme atoulme removed the needs triage New item requiring triage label Mar 8, 2023

@github-actions github-actions bot added the Stale label May 8, 2023
@dashpole dashpole removed the Stale label Jul 5, 2023


@github-actions github-actions bot added the Stale label Nov 6, 2023
@github-actions
Contributor

github-actions bot commented Jan 5, 2024

This issue has been closed as inactive because it has been stale for 120 days with no activity.
