Error: http: superfluous response.WriteHeader call #6139
Comments
This is strange. Are you able to pinpoint a particular type of HTTP request that would trigger it?
I've noticed the same error on many of my own servers. I'll see if I can narrow down the trigger as well.
This could be caused by a bug in prometheus/client_golang addressed here: prometheus/client_golang#634
This is a plausible course of action: If an endpoint is instrumented with the
This issue should be closed now as it's fixed in v1.2.0.
We weren't quite sure if client_golang 1.2.0 was actually addressing this particular issue.
Hi,
Thanks for the stack trace. This might now be in a different code path, but the issue title still makes sense. Let's reuse it.
This is still an issue with v1.2.1. See also:
@cameronkerrnz, is the version number you've given (1.2.1) for Prometheus?
I'm pretty sure he is referring to client_golang v1.2.1.
For context: https://github.com/prometheus/client_golang/releases/tag/v1.2.0 contained a fix for one possible cause of superfluous response.WriteHeader calls. However, what is reported here now must happen in a different way. prometheus/client_golang might or might not be involved.
I noticed the same error, and Prometheus exited because of it.
I had the same problem.
Prometheus version info
Do you know how to reproduce it?
I am seeing these errors when running a combination of Prometheus v2.13.1 and Thanos v0.8.1. Using a query of max(node_cpu_seconds_total) I can recreate it pretty quickly, and no data is returned:
{"caller":"http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19","component":"web","level":"error","msg":")","ts":"2020-01-15T20:56:06.795Z"}
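For readers wondering why the log blames opentracing-contrib's statusCodeTracker: tracing middleware of that kind wraps the ResponseWriter, so its WriteHeader shows up as the caller in net/http's warning even when the duplicate status write originates in another layer. Here is a minimal sketch of such a wrapper (my own simplification for illustration, not the actual go-stdlib source; the handler and port are made up):

```go
package main

import (
	"log"
	"net/http"
)

// statusCodeTracker records the status code passed to WriteHeader so that
// tracing middleware can later tag the span with it.
type statusCodeTracker struct {
	http.ResponseWriter
	status int
}

func (w *statusCodeTracker) WriteHeader(code int) {
	w.status = code
	// Because this forwards the call, a duplicate WriteHeader anywhere in the
	// chain is reported by net/http as coming "from" this wrapper.
	w.ResponseWriter.WriteHeader(code)
}

func withStatusTracking(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		tracker := &statusCodeTracker{ResponseWriter: w, status: http.StatusOK}
		next.ServeHTTP(tracker, r)
		_ = tracker.status // normally attached to the trace span here
	})
}

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":8080", withStatusTracking(handler)))
}
```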
@chjohnst, I'm curious: when this happens, are you seeing high I/O wait, CPU, or other performance issues on the instance running Prometheus and the Thanos sidecar?
I can reproduce it by interrupting a /metrics call in the middle with a TCP reset.
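In case it helps anyone else, here is a rough sketch of that reproduction idea: a raw TCP client that requests /metrics, reads only part of the response, and then closes with SO_LINGER=0 so the kernel sends an RST instead of a graceful shutdown. The target address is an assumption (a local Prometheus on :9090), and this is only an illustration of the interruption described above, not a guaranteed reproducer:

```go
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Assumes a Prometheus server listening on localhost:9090.
	conn, err := net.Dial("tcp", "localhost:9090")
	if err != nil {
		log.Fatal(err)
	}
	tcp := conn.(*net.TCPConn)

	// Send a plain HTTP/1.1 request for the (usually large) /metrics page.
	fmt.Fprintf(tcp, "GET /metrics HTTP/1.1\r\nHost: localhost\r\n\r\n")

	// Read only a small chunk of the response, leaving the rest in flight.
	buf := make([]byte, 1024)
	if _, err := tcp.Read(buf); err != nil {
		log.Fatal(err)
	}

	// SetLinger(0) makes Close send a TCP RST instead of a normal FIN,
	// interrupting the response mid-stream.
	tcp.SetLinger(0)
	tcp.Close()
}
```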
I may have found the issue, still need to confirm: when doing a really large query, my Thanos backend instance gets an OOM condition, which I think is causing the error to happen.
prometheus/client_golang#724 might help with this, even if this is a third-party bug and not in any Prometheus code.
On the other hand, masking a third-party bug might not be desired. See the discussion on prometheus/client_golang#724, and feel free to chime in.
We even upgraded client_golang to the latest version, but no luck: the "http: superfluous response.WriteHeader call" errors have not stopped. They recur every time, and the stack traces also reference a couple of OpenTracing libraries when we encounter this superfluous error.
I'm also seeing this log. In my case it's when Prometheus is under heavy load; it begins to fail liveness checks shortly after.
Had this on a mux HandlerFunc when I was calling json.NewEncoder(w).Encode(...) and then w.WriteHeader. Fixed by changing the order: first WriteHeader, then the JSON encoding.
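For anyone hitting the same ordering mistake, here is a minimal sketch of the pattern described above (handler names and payload are made up): in net/http the first body write implicitly sends a 200 status, so a later WriteHeader call is logged as superfluous; writing the header before encoding the JSON avoids it.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type payload struct {
	Status string `json:"status"`
}

// Buggy: Encode writes the body first, which implicitly sends "200 OK";
// the later WriteHeader call is then superfluous and gets logged.
func buggyHandler(w http.ResponseWriter, r *http.Request) {
	json.NewEncoder(w).Encode(payload{Status: "ok"})
	w.WriteHeader(http.StatusAccepted) // too late: headers were already sent
}

// Fixed: set the status code before writing any of the body.
func fixedHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusAccepted)
	json.NewEncoder(w).Encode(payload{Status: "ok"})
}

func main() {
	http.HandleFunc("/buggy", buggyHandler)
	http.HandleFunc("/fixed", fixedHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```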
Just curious, has the superfluous-WriteHeader issue been fixed in any of the latest versions of Prometheus?
We tracked down one cause of this today. Not long after bringing up the Prometheus process, it would hang and sometimes flood the log with the error message in this thread. Running the Thanos sidecar seemed to accelerate the time to hanging. However, when we looked at the targets (before it hung), we noticed several jobs where thousands of targets had timed out. We commented those jobs out, and that Prometheus cluster is now running fine.
I'll tag this for Hacktoberfest, to get more eyes on it. But please note that this won't be easy to figure out.
I am also facing the same issue.
Prometheus version:
Is there any mitigation for this issue? I am also getting this error in v2.22.1.
We are still looking for a proper way to reproduce this; if you have one, that would be welcome.
I get the same error.
I'm using
I resolved this by going to Docker resources and bumping the memory from the default 1 GB to 5 GB.
Did you change from 1 to 5 directly, or were you increasing it step by step until it worked?
@dubcl I tried 6, then moved to 5, but it could possibly work with less as well; Docker crashes while compiling gRPC on 1 GB.
I got the same error after upgrading Thanos (and the sidecar) from 0.14.0 to 0.17.2. So this is considered harmless, right? Is there any workaround to avoid this log spew, though?
I got the same error and
The same issue here after upgrading Prometheus to v2.27.0 and Thanos (sidecar) to v0.20.2, with a memory spike and a restart.
I got the same error just now with prometheus:v2.27.1.
Metricbeat showed high CPU usage on thanos-querier.
I tried splitting the Prometheus instance, or increasing its CPU allocation, and that solved the problem.
Please use English (ASCII) as the common language in this global project, thanks. Also, how do you split Prometheus when we need to run PromQL such as avg or a / b across the data?
Thanks a lot for your reply. This issue seems to come down to Go runtime behaviour that needs to be mitigated at the operating-system level by giving the Prometheus service more I/O time when load is too high. That means something is hitting a bottleneck on the Go side that has not yet been corrected.
I'm seeing this error also with Prometheus (v2.32.0) and Thanos (v0.22.0). It always coincides with a large dropoff in traffic from my Prometheus server (presumably scrapes stop happening? There are gaps in my graphs from this server during those times), and its HA pair fails to scrape it as well. Thanos Sidecar has heartbeat timeouts during this period too.

I can consistently replicate the issue in my Thanos + Prometheus setup by setting the

Unfortunately, I think this indicates that this is more of a symptom than a cause. In other words, my issues described above almost certainly aren't caused by whatever's causing this error to print. Instead, some other issue is happening which is causing requests to be interrupted (probably by timeouts), and then this log line gets printed.
Hello from the bug scrub. Apparently, nobody with sufficient debugging skills has been able to reproduce this issue.
Bug Report
What did you see?
Multiple errors of
http: superfluous response.WriteHeader call
in the logs, during normal operation.

Environment
System information:
busybox Docker image running the Prometheus binary.
Prometheus version:
level=info ts=2019-10-09T12:24:15.591Z caller=main.go:332 msg="Starting Prometheus" version="(version=2.13.0, branch=HEAD, revision=6ea4252299f542669aca11860abc2192bdc7bede)"