
query: ExemplarQueryResult error #5189

Closed
Reamer opened this issue Feb 28, 2022 · 3 comments · Fixed by #5202

Comments

@Reamer

Reamer commented Feb 28, 2022

Thanos, Prometheus and Golang version used:

Thanos: quay.io/thanos/thanos:v0.24.0 - go1.16.12
Prometheus: 2.29.2 - go1.16.6
Grafana: grafana/grafana:8.4.2

What happened:
I see the following errors in the Grafana log:

{"err":"[]v1.ExemplarQueryResult: decode slice: expect [ or n, but found \u0000, error found in #0 byte of ...||..., bigger context ...||...","logger":"tsdb.prometheus","lvl":"eror","msg":"Exemplar query failed","query":"process_cpu_usage{namespace=\"spark-test\"}","t":"2022-02-28T09:07:46.63+0000"}
{"err":"[]v1.ExemplarQueryResult: decode slice: expect [ or n, but found \u0000, error found in #0 byte of ...||..., bigger context ...||...","logger":"tsdb.prometheus","lvl":"eror","msg":"Exemplar query failed","query":"sum(zeppelin_note_cache_hit_total{namespace=\"spark-test\"}) / (sum(zeppelin_note_cache_hit_total{namespace=\"spark-test\"}) + sum(zeppelin_note_cache_miss_total{namespace=\"spark-test\"}))","t":"2022-02-28T09:07:46.73+0000"}
{"err":"[]v1.ExemplarQueryResult: decode slice: expect [ or n, but found \u0000, error found in #0 byte of ...||..., bigger context ...||...","logger":"tsdb.prometheus","lvl":"eror","msg":"Exemplar query failed","query":"process_files_open_files{namespace=\"spark-test\"}","t":"2022-02-28T09:07:46.73+0000"}

What you expected to happen:

No errors.

How to reproduce it (as minimally and precisely as possible):

It just happens when I open a Grafana dashboard.
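A hypothetical way to narrow the reproduction down without Grafana (not part of the original report): query the Prometheus-compatible exemplars endpoint that Thanos Query exposes and inspect the raw body Grafana would have to decode. The address, query, and time window below are assumptions based on the logs in this issue; adjust them to your setup.

```go
// Hypothetical reproduction sketch; adjust the address, query, and time range.
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	params := url.Values{}
	params.Set("query", `process_cpu_usage{namespace="spark-test"}`)
	params.Set("start", "2022-02-28T09:00:00Z")
	params.Set("end", "2022-02-28T09:10:00Z")

	// Thanos Query serves the Prometheus-compatible exemplars API.
	resp, err := http.Get("http://127.0.0.1:9090/api/v1/query_exemplars?" + params.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// With the bug present, the "data" field is not a JSON array, which is
	// what Grafana's []v1.ExemplarQueryResult decoder chokes on.
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```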

Full logs to relevant components:

Logs

level=info ts=2022-02-28T09:04:26.161364912Z caller=client.go:55 msg="enabling client to server TLS"
level=info ts=2022-02-28T09:04:26.161683494Z caller=options.go:115 msg="TLS client using provided certificate pool"
level=info ts=2022-02-28T09:04:26.161707098Z caller=options.go:148 msg="TLS client authentication enabled"
level=info ts=2022-02-28T09:04:26.166055848Z caller=options.go:27 protocol=gRPC msg="disabled TLS, key and cert must be set to enable"
level=info ts=2022-02-28T09:04:26.166787409Z caller=query.go:695 msg="starting query node"
level=info ts=2022-02-28T09:04:26.166967193Z caller=intrumentation.go:48 msg="changing probe status" status=ready
level=info ts=2022-02-28T09:04:26.167250961Z caller=intrumentation.go:60 msg="changing probe status" status=healthy
level=info ts=2022-02-28T09:04:26.167328044Z caller=grpc.go:131 service=gRPC/server component=query msg="listening for serving gRPC" address=127.0.0.1:10901
level=info ts=2022-02-28T09:04:26.167341139Z caller=http.go:63 service=http/server component=query msg="listening for requests and metrics" address=127.0.0.1:9090
level=info ts=2022-02-28T09:04:26.16746526Z caller=tls_config.go:195 service=http/server component=query msg="TLS is disabled." http2=false
level=info ts=2022-02-28T09:04:31.189964822Z caller=endpointset.go:349 component=endpointset msg="adding new sidecar with [storeAPI rulesAPI exemplarsAPI targetsAPI MetricMetadataAPI]" address=10.131.8.12:10901 extLset="{prometheus=\"openshift-user-workload-monitoring/user-workload\", prometheus_replica=\"prometheus-user-workload-0\"}"
level=info ts=2022-02-28T09:04:31.190044591Z caller=endpointset.go:349 component=endpointset msg="adding new sidecar with [storeAPI rulesAPI exemplarsAPI targetsAPI MetricMetadataAPI]" address=10.128.10.59:10901 extLset="{prometheus=\"openshift-user-workload-monitoring/user-workload\", prometheus_replica=\"prometheus-user-workload-1\"}"
level=info ts=2022-02-28T09:04:31.190073174Z caller=endpointset.go:349 component=endpointset msg="adding new sidecar with [storeAPI rulesAPI exemplarsAPI targetsAPI MetricMetadataAPI]" address=10.128.10.62:10901 extLset="{prometheus=\"openshift-monitoring/k8s\", prometheus_replica=\"prometheus-k8s-1\"}"
level=info ts=2022-02-28T09:04:31.190095015Z caller=endpointset.go:349 component=endpointset msg="adding new rule with [storeAPI rulesAPI]" address=10.128.10.61:10901 extLset="{thanos_ruler_replica=\"thanos-ruler-user-workload-1\"}"
level=info ts=2022-02-28T09:04:31.190121093Z caller=endpointset.go:349 component=endpointset msg="adding new sidecar with [storeAPI rulesAPI exemplarsAPI targetsAPI MetricMetadataAPI]" address=10.131.8.9:10901 extLset="{prometheus=\"openshift-monitoring/k8s\", prometheus_replica=\"prometheus-k8s-0\"}"
level=info ts=2022-02-28T09:04:31.190148554Z caller=endpointset.go:349 component=endpointset msg="adding new rule with [storeAPI rulesAPI]" address=10.131.8.11:10901 extLset="{thanos_ruler_replica=\"thanos-ruler-user-workload-0\"}"

Anything else we need to know:

I found VictoriaMetrics/VictoriaMetrics#2000; maybe it can help with fixing the problem.
If you think this is a Grafana issue, let me know.

@GiedriusS
Member

Thank you for the detailed report! It should probably be enough to create an empty slice here if it is nil:
https://github.com/thanos-io/thanos/blob/main/pkg/exemplars/exemplars.go#L92

Help wanted!
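For readers unfamiliar with the Go detail behind that suggestion, here is a minimal, self-contained sketch (hypothetical type and function names, not the actual Thanos code): encoding/json marshals a nil slice as null, while an initialized empty slice marshals as [], which is what a strict client-side decoder such as Grafana's []v1.ExemplarQueryResult decoder expects.

```go
// Hypothetical, minimal illustration of the suggested guard; the type and
// function names are made up and this is not the actual Thanos code.
package main

import (
	"encoding/json"
	"fmt"
)

// exemplarData stands in for one entry of the exemplars API response.
type exemplarData struct {
	SeriesLabels map[string]string `json:"seriesLabels"`
}

// ensureNonNil is the suggested fix in spirit: if the result slice is nil,
// return an empty (but non-nil) slice so it marshals to "[]" instead of "null".
func ensureNonNil(results []exemplarData) []exemplarData {
	if results == nil {
		return []exemplarData{}
	}
	return results
}

func main() {
	var results []exemplarData // nil: no exemplars matched the query

	withoutGuard, _ := json.Marshal(results)
	withGuard, _ := json.Marshal(ensureNonNil(results))

	fmt.Println(string(withoutGuard)) // prints: null
	fmt.Println(string(withGuard))    // prints: []
}
```

If a nil slice (or no payload at all) reaches the response writer unguarded, the client sees something other than a JSON array, which matches the decode error Grafana reports above.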

@GiedriusS
Member

Or maybe this https://github.com/thanos-io/thanos/blob/main/pkg/exemplars/exemplars.go#L92 would be an even better place.

@Reamer
Author

Reamer commented Feb 28, 2022

I found grafana/grafana#42749, which seems to trigger this error.

> Help wanted!

I'm not a Go developer, unfortunately, so I can't help fix this.
