
Invalid metric type error for metric types containing capitals #4602

Closed
benclapp opened this issue Sep 13, 2018 · 7 comments

Comments

@benclapp

Bug Report

What did you do?
Upgraded from 2.3.2 to 2.4.0

What did you expect to see?
New features, and existing targets still being scraped successfully.

What did you see instead? Under which circumstances?
It appears all targets that have any capitals in the metric type cause the scrape for this target to fail. For example:

| Metric Type | Scrape Success |
|-------------|----------------|
| counter     | Yes            |
| Counter     | No             |
| COUNTER     | No             |

I've scraped the same targets from a 2.3.2 Prometheus and they're being appended just fine. Unfortunately 2.4.0 cannot be rolled back. For reference, we're using this .NET Library.
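
For illustration, here is roughly what the exposition from one of these targets looks like (metric name made up); the only difference from a target that scrapes successfully is the casing of the type keyword in the `# TYPE` line:

```
# HELP http_requests_received_total Total number of HTTP requests received.
# TYPE http_requests_received_total COUNTER
http_requests_received_total 1027
```

Lowercasing the keyword to `counter` in otherwise identical output scrapes fine.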

Environment

  • System information:

    Running in docker, using this image: quay.io/prometheus/prometheus:v2.4.0

  • Prometheus version:

    prometheus, version 2.4.0 (branch: HEAD, revision: 068eaa5)
    build user: root@d84c15ea5e93
    build date: 20180911-10:46:37
    go version: go1.10.3

  • Logs:

level=warn ts=2018-09-13T06:10:37.93652152Z caller=scrape.go:804 component="scrape manager" scrape_pool=kubernetes-pods target=http://10.244.5.181:9095/metrics msg="append failed" err="invalid metric type \"COUNTER\""
level=warn ts=2018-09-13T06:10:39.071841437Z caller=manager.go:430 component="rule manager" group=node.rules msg="Error on ingesting results from rule evaluation with different value but same timestamp" numDropped=4
level=warn ts=2018-09-13T06:10:40.851112422Z caller=scrape.go:804 component="scrape manager" scrape_pool=kubernetes-pods target=http://10.244.4.115:9101/metrics msg="append failed" err="invalid metric type \"COUNTER\""
level=warn ts=2018-09-13T06:10:41.875560035Z caller=scrape.go:804 component="scrape manager" scrape_pool=custom_tls_scrape_targets target=https://<removed>/api/metrics msg="append failed" err="invalid metric type \"Gauge\""
level=warn ts=2018-09-13T06:10:42.917636496Z caller=scrape.go:804 component="scrape manager" scrape_pool=kubernetes-pods target=http://10.244.5.181:9095/metrics msg="append failed" err="invalid metric type \"COUNTER\""
level=warn ts=2018-09-13T06:10:44.068456756Z caller=manager.go:430 component="rule manager" group=node.rules msg="Error on ingesting results from rule evaluation with different value but same timestamp" numDropped=4
level=warn ts=2018-09-13T06:10:45.85053895Z caller=scrape.go:804 component="scrape manager" scrape_pool=kubernetes-pods target=http://10.244.4.115:9101/metrics msg="append failed" err="invalid metric type \"COUNTER\""
level=warn ts=2018-09-13T06:10:46.873520059Z caller=scrape.go:804 component="scrape manager" scrape_pool=custom_tls_scrape_targets target=https://<removed>/api/metrics msg="append failed" err="invalid metric type \"Gauge\""
level=warn ts=2018-09-13T06:10:47.919258931Z caller=scrape.go:804 component="scrape manager" scrape_pool=kubernetes-pods target=http://10.244.5.181:9095/metrics msg="append failed" err="invalid metric type \"COUNTER\""
level=warn ts=2018-09-13T06:10:49.077866213Z caller=manager.go:430 component="rule manager" group=node.rules msg="Error on ingesting results from rule evaluation with different value but same timestamp" numDropped=4
level=warn ts=2018-09-13T06:10:50.851515285Z caller=scrape.go:804 component="scrape manager" scrape_pool=kubernetes-pods target=http://10.244.4.115:9101/metrics msg="append failed" err="invalid metric type \"COUNTER\""
level=warn ts=2018-09-13T06:10:51.884877924Z caller=scrape.go:804 component="scrape manager" scrape_pool=custom_tls_scrape_targets target=https://<removed>/api/metrics msg="append failed" err="invalid metric type \"Gauge\""
level=warn ts=2018-09-13T06:10:52.916790359Z caller=scrape.go:804 component="scrape manager" scrape_pool=kubernetes-pods target=http://10.244.5.181:9095/metrics msg="append failed" err="invalid metric type \"COUNTER\""
level=warn ts=2018-09-13T06:10:54.072958435Z caller=manager.go:430 component="rule manager" group=node.rules msg="Error on ingesting results from rule evaluation with different value but same timestamp" numDropped=4
level=warn ts=2018-09-13T06:10:55.851243421Z caller=scrape.go:804 component="scrape manager" scrape_pool=kubernetes-pods target=http://10.244.4.115:9101/metrics msg="append failed" err="invalid metric type \"COUNTER\""
level=warn ts=2018-09-13T06:10:56.875002334Z caller=scrape.go:804 component="scrape manager" scrape_pool=custom_tls_scrape_targets target=https://<removed>/api/metrics msg="append failed" err="invalid metric type \"Gauge\""
level=warn ts=2018-09-13T06:10:57.933876143Z caller=scrape.go:804 component="scrape manager" scrape_pool=kubernetes-pods target=http://10.244.5.181:9095/metrics msg="append failed" err="invalid metric type \"COUNTER\""
level=warn ts=2018-09-13T06:10:59.087553114Z caller=manager.go:430 component="rule manager" group=node.rules msg="Error on ingesting results from rule evaluation with different value but same timestamp" numDropped=4
level=warn ts=2018-09-13T06:11:00.850462158Z caller=scrape.go:804 component="scrape manager" scrape_pool=kubernetes-pods target=http://10.244.4.115:9101/metrics msg="append failed" err="invalid metric type \"COUNTER\""
level=warn ts=2018-09-13T06:11:01.944988067Z caller=scrape.go:804 component="scrape manager" scrape_pool=custom_tls_scrape_targets target=https://<removed>/api/metrics msg="append failed" err="invalid metric type \"Gauge\""
level=warn ts=2018-09-13T06:11:02.916852138Z caller=scrape.go:804 component="scrape manager" scrape_pool=kubernetes-pods target=http://10.244.5.181:9095/metrics msg="append failed" err="invalid metric type \"COUNTER\""
@benclapp
Author

Also worth mentioning: checking the metrics with the promtool binary packaged with 2.4.0 says all is well.
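
For reference, a check along these lines (target address taken from the logs above) reports no problems and does not flag the capitalized type keyword:

```
$ curl -s http://10.244.5.181:9095/metrics | promtool check metrics
```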

@brian-brazil
Contributor

That's invalid output; only `counter` is valid. The client library will need to be fixed.
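
The text format only defines the lowercase type keywords `counter`, `gauge`, `histogram`, `summary`, and `untyped`, so the `# TYPE` line should look like this (metric name made up):

```
# TYPE http_requests_received_total counter
```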

@danielfm

Shouldn't the promtool check detect this then?

@brian-brazil
Contributor

It should; that's a bug in promtool.

@gouthamve
Member

See: prometheus/common#143

@benclapp
Author

Sounds fair. I'll close this issue and let the issues raised against the client library and promtool be fixed instead.

@lock

lock bot commented Mar 22, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

@lock lock bot locked and limited conversation to collaborators Mar 22, 2019