
Error: http: superfluous response.WriteHeader call #6139

Closed
bandesz opened this issue Oct 15, 2019 · 60 comments

Comments

@bandesz commented Oct 15, 2019

Bug Report

What did you see?

Multiple http: superfluous response.WriteHeader call errors in the logs during normal operation.

level=info ts=2019-10-14T11:01:57.535Z caller=head.go:666 component=tsdb msg="WAL checkpoint complete" first=90900 last=90995 duration=58.045866315s
level=error ts=2019-10-14T11:02:03.505Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2019-10-14T11:02:06.599Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2019-10-14T11:02:07.848Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
[...]

Environment

  • System information:

    busybox Docker image running the Prometheus binary.

  • Prometheus version:

    level=info ts=2019-10-09T12:24:15.591Z caller=main.go:332 msg="Starting Prometheus" version="(version=2.13.0, branch=HEAD, revision=6ea4252299f542669aca11860abc2192bdc7bede)"

@simonpasquier (Member)

This is strange. Are you able to pinpoint a particular type of HTTP requests that would trigger it?

@kfdm commented Oct 16, 2019

I've noticed the same error on many of my own servers. I'll see if I can narrow down the trigger as well.

@beorn7 (Member) commented Oct 16, 2019

This could be caused by a bug in prometheus/client_golang addressed here: prometheus/client_golang#634
It is fixed in v1.2.0 (released yesterday), but Prometheus v2.13.0 only uses v1.1.0.

@beorn7 (Member) commented Oct 16, 2019

Here is a plausible chain of events:

If an endpoint is instrumented with the promhttp middleware, the wrapping response writer implements Flusher, and Flush is actually called, then the underlying response writer calls WriteHeader, but the delegator implemented by promhttp does not track that. On a later call that requires a header, the delegator calls WriteHeader on its own, which is now superfluous. (The above-mentioned fix adds that tracking to the delegated Flush call. The reason the delegator calls WriteHeader itself is that it is needed for the instrumentation to work.)
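To make the sequence concrete, here is a minimal, self-contained Go sketch of that failure mode. It is not the actual promhttp or go-stdlib code; the trackingWriter type and the handler are invented for illustration, but they produce the same double WriteHeader and make net/http print the superfluous-call warning.

```go
package main

import (
	"net/http"
	"net/http/httptest"
)

// trackingWriter mimics a delegator that remembers whether the header was
// written, so it can default to 200 on the first Write.
type trackingWriter struct {
	http.ResponseWriter
	wroteHeader bool
}

func (w *trackingWriter) WriteHeader(code int) {
	w.wroteHeader = true
	w.ResponseWriter.WriteHeader(code)
}

func (w *trackingWriter) Write(b []byte) (int, error) {
	if !w.wroteHeader {
		// The wrapper believes no header has been sent yet.
		w.WriteHeader(http.StatusOK)
	}
	return w.ResponseWriter.Write(b)
}

// Flush forwards to the underlying writer, which sends the header implicitly,
// but wroteHeader is not updated. That is the missing tracking the fix added.
func (w *trackingWriter) Flush() {
	if f, ok := w.ResponseWriter.(http.Flusher); ok {
		f.Flush()
	}
}

func handler(w http.ResponseWriter, r *http.Request) {
	tw := &trackingWriter{ResponseWriter: w}
	tw.Flush()                  // the 200 header goes out here
	tw.Write([]byte("hello\n")) // second WriteHeader, logged as superfluous
}

func main() {
	srv := httptest.NewServer(http.HandlerFunc(handler))
	defer srv.Close()
	// The server's error log prints the superfluous WriteHeader warning.
	if resp, err := http.Get(srv.URL); err == nil {
		resp.Body.Close()
	}
}
```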

@Sanjary commented Oct 29, 2019

This issue should be closed now, as it's fixed in v1.2.0.

@beorn7 (Member) commented Oct 29, 2019

We weren't quite sure if client_golang 1.2.0 was actually addressing this particular issue.
Let's optimistically close this and re-open if people report it to be still ongoing. 😁

beorn7 closed this as completed Oct 29, 2019
@motyla commented Nov 20, 2019

Hi,
I'm getting these messages in version 2.13.1

level=error ts=2019-11-20T09:35:55.341Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2019-11-20T09:35:55.341Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2019-11-20T09:35:55.341Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
Nov 20 09:39:49 level=error ts=2019-11-20T09:39:49.328Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)

prometheus, version 2.13.1 (branch: HEAD, revision: 6f92ce56053866194ae5937012c1bec40f1dd1d9)
  build user:       root@88e419aa1676
  build date:       20191017-13:15:01
  go version:       go1.13.1

beorn7 reopened this Nov 20, 2019
@beorn7 (Member) commented Nov 20, 2019

Thanks for the stack trace. This might now be in a different codepath, but the issue title still makes sense. Let's re-use it.

@cameronkerrnz

This is still an issue with v1.2.1.

See also:
prometheus/snmp_exporter#471
prometheus/client_golang#672

@cstyan (Member) commented Dec 31, 2019

@cameronkerrnz is the version number you've given for Prometheus? 1.2.1?

@beorn7 (Member) commented Jan 1, 2020

I'm pretty sure he is referring to client_golang v1.2.1.

@beorn7 (Member) commented Jan 1, 2020

For context: https://github.com/prometheus/client_golang/releases/tag/v1.2.0 contained a fix for one possible cause of superfluous response.WriteHeader calls.

However, what is reported here now must happen in a different way. prometheus/client_golang might or might not be involved.

@zhangzheyu2simple

I noticed the same error, and Prometheus exited because of it.

@erdong commented Jan 8, 2020

I had the same problem.

level=error ts=2020-01-08T09:35:48.028Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2020-01-08T09:37:19.845Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2020-01-08T09:54:55.364Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2020-01-08T10:05:59.829Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2020-01-08T10:05:59.831Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2020-01-08T10:19:29.069Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2020-01-08T10:22:25.500Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)

Prometheus version info

level=info ts=2020-01-08T12:08:54.625Z caller=main.go:332 msg="Starting Prometheus" version="(version=2.14.0, branch=HEAD, revision=edeb7a44cbf745f1d8be4ea6f215e79e651bfe19)"
level=info ts=2020-01-08T12:08:54.625Z caller=main.go:333 build_context="(go=go1.13.4, user=root@df2327081015, date=20191111-14:27:12)"

level=info ts=2020-01-08T12:08:54.625Z caller=main.go:334 host_details="(Linux 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 BXPL-K8S-80-18 (none))"

@roidelapluie (Member)

Do you know how to reproduce it?

@chjohnst

I am seeing these errors when running a combination of Prometheus v2.13.1 and Thanos v0.8.1. Using a query of max(node_cpu_seconds_total), I can recreate it pretty quickly, and no data is returned.

{"caller":"http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19","component":"web","level":"error","msg":")","ts":"2020-01-15T20:56:06.795Z"}
{"caller":"http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19","component":"web","level":"error","msg":")","ts":"2020-01-15T20:56:06.880Z"}
{"caller":"http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19","component":"web","level":"error","msg":")","ts":"2020-01-15T20:56:09.170Z"}
{"caller":"http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19","component":"web","level":"error","msg":")","ts":"2020-01-15T20:56:09.184Z"}

@ServerNinja

I am seeing these errors when running a combination of Prometheus v2.13.1 and Thanos v0.8.1. Using a query of max(node_cpu_seconds_total), I can recreate it pretty quickly, and no data is returned.

{"caller":"http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19","component":"web","level":"error","msg":")","ts":"2020-01-15T20:56:06.795Z"}
{"caller":"http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19","component":"web","level":"error","msg":")","ts":"2020-01-15T20:56:06.880Z"}
{"caller":"http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19","component":"web","level":"error","msg":")","ts":"2020-01-15T20:56:09.170Z"}
{"caller":"http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19","component":"web","level":"error","msg":")","ts":"2020-01-15T20:56:09.184Z"}

@chjohnst, I'm curious: when this happens, are you seeing high I/O wait, CPU, or other performance issues on the instance running Prometheus and the Thanos sidecar?

@roidelapluie (Member)

level=error ts=2020-02-05T21:25:14.544Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:24" msg=)
goroutine 89434 [running]:
runtime/debug.Stack(0xc000566000, 0x4d, 0x0)
        /home/roidelapluie/godist/go/src/runtime/debug/stack.go:24 +0x9d
runtime/debug.PrintStack()
        /home/roidelapluie/godist/go/src/runtime/debug/stack.go:16 +0x22
github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader(0xc00e63ee70, 0xc8)
        /home/roidelapluie/go/src/github.com/prometheus/prometheus/vendor/github.com/opentracing-contrib/go-stdlib/nethttp/status-code-tracker.go:25 +0x95
github.com/prometheus/client_golang/prometheus/promhttp.(*responseWriterDelegator).WriteHeader(0xc00e63ef30, 0xc8)
        /home/roidelapluie/go/src/github.com/prometheus/prometheus/vendor/github.com/prometheus/client_golang/prometheus/promhttp/delegator.go:58 +0x45
github.com/prometheus/client_golang/prometheus/promhttp.(*responseWriterDelegator).WriteHeader(0xc00e63ef60, 0xc8)
        /home/roidelapluie/go/src/github.com/prometheus/prometheus/vendor/github.com/prometheus/client_golang/prometheus/promhttp/delegator.go:58 +0x45
github.com/prometheus/client_golang/prometheus/promhttp.(*responseWriterDelegator).WriteHeader(0xc00e63f110, 0xc8)
        /home/roidelapluie/go/src/github.com/prometheus/prometheus/vendor/github.com/prometheus/client_golang/prometheus/promhttp/delegator.go:58 +0x45
github.com/prometheus/client_golang/prometheus/promhttp.(*responseWriterDelegator).Write(0xc00e63f110, 0xc01190d000, 0x1ad, 0x1000, 0x0, 0x0, 0x0)
        /home/roidelapluie/go/src/github.com/prometheus/prometheus/vendor/github.com/prometheus/client_golang/prometheus/promhttp/delegator.go:68 +0x9e
bufio.(*Writer).Flush(0xc00dace200, 0xc00903afd8, 0xc00903af28)
        /home/roidelapluie/godist/go/src/bufio/bufio.go:593 +0x75
github.com/prometheus/common/expfmt.MetricFamilyToText.func1(0xc00dace200, 0xc00903b0b0)
        /home/roidelapluie/go/src/github.com/prometheus/prometheus/vendor/github.com/prometheus/common/expfmt/text_create.go:85 +0x2f
github.com/prometheus/common/expfmt.MetricFamilyToText(0x7f2e7d96d970, 0xc00ffcfe00, 0xc00c4a19f0, 0x1ad, 0x0, 0x0)
        /home/roidelapluie/go/src/github.com/prometheus/prometheus/vendor/github.com/prometheus/common/expfmt/text_create.go:269 +0x11c0
github.com/prometheus/common/expfmt.NewEncoder.func4(0xc00c4a19f0, 0x1, 0xc005619e20)
        /home/roidelapluie/go/src/github.com/prometheus/prometheus/vendor/github.com/prometheus/common/expfmt/encode.go:83 +0x3d
github.com/prometheus/common/expfmt.encoder.Encode(0xc00b3992a0, 0xc00c4a19f0, 0x25722c8, 0x28)
        /home/roidelapluie/go/src/github.com/prometheus/prometheus/vendor/github.com/prometheus/common/expfmt/encode.go:36 +0x30

@roidelapluie (Member)

I can reproduce it by interrupting a /metrics call in the middle with a TCP reset.
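For anyone who wants to try that reproduction, here is a rough Go sketch of the idea. The localhost:9090 address is an assumption (any Prometheus instance with a reasonably large /metrics page should do), and it is not guaranteed to trigger the log line on every run.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Assumed local Prometheus listening on the default port.
	conn, err := net.Dial("tcp", "localhost:9090")
	if err != nil {
		panic(err)
	}
	tcp := conn.(*net.TCPConn)

	// Plain HTTP/1.1 request for the (usually large) /metrics page.
	fmt.Fprintf(tcp, "GET /metrics HTTP/1.1\r\nHost: localhost:9090\r\n\r\n")

	// Read only a little, so the server is still writing the response.
	buf := make([]byte, 512)
	tcp.Read(buf)

	// SO_LINGER with timeout 0 makes Close send a TCP RST instead of a FIN,
	// aborting the response in the middle.
	tcp.SetLinger(0)
	tcp.Close()

	// Give the server a moment to notice and (possibly) log the error.
	time.Sleep(200 * time.Millisecond)
}
```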

@chjohnst commented Feb 8, 2020

I may have found the issue, but I still need to confirm: when doing a really large query, my Thanos backend instance hits an OOM condition, which I think is causing the error.

@beorn7 (Member) commented Mar 11, 2020

prometheus/client_golang#724 might help with this, even if this is a 3rd party bug and not in any Prometheus code.

@beorn7 (Member) commented Mar 11, 2020

On the other hand, masking a 3rd party bug might not be desired. See discussion on prometheus/client_golang#724 , and feel free to chime in.

@dinesh4747

How do we fix client_golang? Is there any version we can refer to, or one marked as a fix for this issue?

We even changed client_golang to the latest version, but no luck: the "http: superfluous response.WriteHeader call" errors have not stopped. They keep recurring, and they also reference a couple of OpenTracing libraries whenever the superfluous error occurs.

@tiagomeireles

I'm also seeing this log. In my case it's when Prometheus is under heavy load; it begins to fail liveness checks shortly after.


@BON4 commented Aug 19, 2020

I had this on a mux HandlerFunc when I was calling json.NewEncoder(w).Encode(...) and then w.WriteHeader. Fixed it by changing the order: first WriteHeader, then the JSON encoding.
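For context, a small sketch of that ordering mistake using only the standard library's net/http (plain handlers instead of mux; the handler names and payload are made up). The first handler logs the superfluous call and loses the intended status code; the second does not.

```go
package main

import (
	"encoding/json"
	"net/http"
)

type payload struct {
	OK bool `json:"ok"`
}

// wrongOrder: Encode writes the body, which implicitly sends a 200 header,
// so the explicit WriteHeader afterwards is logged as superfluous and the
// intended 201 status is lost.
func wrongOrder(w http.ResponseWriter, r *http.Request) {
	json.NewEncoder(w).Encode(payload{OK: true})
	w.WriteHeader(http.StatusCreated)
}

// rightOrder: set headers and status first, then write the body.
func rightOrder(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusCreated)
	json.NewEncoder(w).Encode(payload{OK: true})
}

func main() {
	http.HandleFunc("/wrong", wrongOrder)
	http.HandleFunc("/right", rightOrder)
	http.ListenAndServe(":8080", nil)
}
```

Note this is an application-level handler mistake, separate from the Prometheus-internal code paths discussed elsewhere in this thread.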

@dinesh4747

Just curious, has the superfluous WriteHeader issue been fixed in any of the latest versions of Prometheus?

@kogent commented Sep 25, 2020

We tracked down one cause of this today.

Not long after we brought up the Prometheus process, it would hang and sometimes flood the logs with the error message in this thread.

Running the Thanos sidecar seemed to shorten the time until it hung.

However, when we looked at the targets (before it hung), we noticed several jobs in which thousands of targets had timed out.

We commented those jobs out, and that Prometheus cluster is now running fine.

@beorn7 (Member) commented Sep 28, 2020

I'll tag this for Hacktoberfest, to get more eyes on it. But please note that this won't be easy to figure out.

@surajnarwade

I am also facing the same issue,

level=error ts=2020-11-11T08:22:00.001Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2020-11-11T08:22:00.006Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2020-11-11T08:22:00.010Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)

prometheus version: v2.19.1

@shay-berman

Is there any mitigation for this issue? I am also getting this error in v2.22.1.

@roidelapluie (Member)

We are still looking for a proper way to reproduce this; if you have one, that would be welcome.

@therise3107

I get the same error

Client: Docker Engine - Community
 Cloud integration: 1.0.2
 Version:           19.03.13
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        4484c46d9d
 Built:             Wed Sep 16 16:58:31 2020
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.13
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       4484c46d9d
  Built:            Wed Sep 16 17:07:04 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.3.7
  GitCommit:        8fba4e9a7d01810a393d5d25a3621dc101981175
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

I'm using skaffold

@therise3107

I resolved this by going to Docker's resource settings and bumping the memory from the default 1 GB to 5 GB.

@dubcl commented Dec 1, 2020

Did you change from 1 GB to 5 GB directly, or did you increase it step by step until it worked?

@therise3107

Did you change from 1 GB to 5 GB directly, or did you increase it step by step until it worked?

@dubcl I tried 6 GB, then moved to 5 GB, but it could possibly work with less as well; Docker crashes while compiling gRPC on 1 GB.

@shenwan commented Feb 16, 2021

I got the same error after upgrading Thanos (and sidecar) from 0.14.0 to 0.17.2.
level=error ts=2021-02-16T00:08:35.637Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)

So this is considered harmless, right? Is there any workaround to avoid this log error spew though?

@u-kyou commented May 26, 2021

level=error ts=2021-05-26T02:39:44.633Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2021-05-26T02:39:44.634Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2021-05-26T02:39:44.634Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2021-05-26T02:40:06.548Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2021-05-26T02:40:06.548Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2021-05-26T02:40:07.409Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2021-05-26T02:40:07.412Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=warn ts=2021-05-26T02:47:53.040Z caller=main.go:524 msg="Received SIGTERM, exiting gracefully..."
level=info ts=2021-05-26T02:47:53.040Z caller=main.go:547 msg="Stopping scrape discovery manager..."
level=info ts=2021-05-26T02:47:53.044Z caller=main.go:561 msg="Stopping notify discovery manager..."
level=info ts=2021-05-26T02:47:53.044Z caller=main.go:583 msg="Stopping scrape manager..."
level=info ts=2021-05-26T02:47:53.045Z caller=main.go:543 msg="Scrape discovery manager stopped"
level=info ts=2021-05-26T02:47:53.045Z caller=main.go:557 msg="Notify discovery manager stopped"
level=info ts=2021-05-26T02:48:05.646Z caller=manager.go:882 component="rule manager" msg="Stopping rule manager..."
level=info ts=2021-05-26T02:48:05.646Z caller=main.go:577 msg="Scrape manager stopped"
level=info ts=2021-05-26T02:48:05.646Z caller=manager.go:892 component="rule manager" msg="Rule manager stopped"
level=error ts=2021-05-26T02:51:30.017Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2021-05-26T02:51:30.017Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2021-05-26T02:51:30.019Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2021-05-26T02:51:30.019Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
...
level=error ts=2021-05-26T02:57:14.403Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2021-05-26T02:57:43.046Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
level=error ts=2021-05-26T02:57:43.459Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:19" msg=)
ts=2021-05-26T02:57:43.687Z caller=dedupe.go:112 component=remote level=info remote_name=cd490a url=http://xxxremote_write addressxxxx/ msg="Stopping remote storage..."
ts=2021-05-26T02:57:43.688Z caller=dedupe.go:112 component=remote level=info remote_name=cd490a url=http://xxxremote_write addressxxxx/ msg="WAL watcher stopped" queue=cd490a

I got the same error, and then Prometheus received SIGTERM.
Prometheus version: v2.19.2

@hbo88 commented Jun 3, 2021

The same issue here after upgrading to Prometheus v2.27.0 and Thanos Sidecar v0.20.2, along with a memory spike and a restart.
Is there any known issue with these releases?

@ChaoHsupin

When I encounter the same error log, the first query is slow:
[screenshot]

And I saw an increase in this metric during the same period:
[screenshot]

@zakiharis

I got the same error just now

prometheus:v2.27.1
thanos:v0.20.2

level=error ts=2021-06-10T03:04:22.256Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:17" msg=)
level=error ts=2021-06-10T03:04:22.259Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:17" msg=)
level=warn ts=2021-06-10T03:04:22.199Z caller=main.go:653 msg="Received SIGTERM, exiting gracefully..."
level=info ts=2021-06-10T03:04:22.278Z caller=main.go:676 msg="Stopping scrape discovery manager..."
level=info ts=2021-06-10T03:04:22.278Z caller=main.go:690 msg="Stopping notify discovery manager..."
level=info ts=2021-06-10T03:04:22.278Z caller=main.go:712 msg="Stopping scrape manager..."
level=info ts=2021-06-10T03:04:22.279Z caller=main.go:672 msg="Scrape discovery manager stopped"
level=info ts=2021-06-10T03:04:22.279Z caller=main.go:686 msg="Notify discovery manager stopped"
level=error ts=2021-06-10T03:04:22.393Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:17" msg=)
level=error ts=2021-06-10T03:04:22.765Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:17" msg=)
level=info ts=2021-06-10T03:04:23.284Z caller=main.go:957 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
level=error ts=2021-06-10T03:04:24.828Z caller=stdlib.go:89 component=web caller="http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:17" msg=)
level=info ts=2021-06-10T03:04:25.284Z caller=main.go:706 msg="Scrape manager stopped"
level=info ts=2021-06-10T03:04:25.288Z caller=main.go:988 msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=2.003510045s remote_storage=2.496µs web_handler=5.067µs query_engine=1.253µs scrape=1.951652936s scrape_sd=1.64614ms notify=19.449µs notify_sd=13.61µs rules=10.101µs
level=info ts=2021-06-10T03:04:25.288Z caller=manager.go:934 component="rule manager" msg="Stopping rule manager..."
level=info ts=2021-06-10T03:04:25.288Z caller=manager.go:944 component="rule manager" msg="Rule manager stopped"
level=error ts=2021-06-10T03:04:28.262Z caller=db.go:780 component=tsdb msg="compaction failed" err="compact head: persist head block: 2 errors: populate block: add symbol: context canceled; context canceled"
level=info ts=2021-06-10T03:04:28.369Z caller=notifier.go:601 component=notifier msg="Stopping notification manager..."
level=info ts=2021-06-10T03:04:28.369Z caller=main.go:885 msg="Notifier manager stopped"
level=info ts=2021-06-10T03:04:28.370Z caller=main.go:897 msg="See you next time!"

Metricbeat showed high CPU usage on thanos-querier

[screenshot]

@ChaoHsupin

Splitting the Prometheus instance, or increasing the CPU allocation, solved the problem for me.

@renchili

Splitting the Prometheus instance, or increasing the CPU allocation, solved the problem for me.

Please use English in this global project, thanks.

1. How do we split Prometheus when we need to run PromQL with avg() or a / b across the data?
2. Increase CPU how? I am already running on a very expensive 64-core Silver-class server. Does it need a higher clock frequency, more cores, or more sockets? The answer is too fuzzy and too broad; it does not help with this topic.

@renchili

How do we fix client_golang? Is there any version we can refer to, or one marked as a fix for this issue?

We even changed client_golang to the latest version, but no luck: the "http: superfluous response.WriteHeader call" errors have not stopped. They keep recurring, and they also reference a couple of OpenTracing libraries whenever the superfluous error occurs.

Thanks a lot for your reply. This issue seems to come down to the behaviour of the Go runtime itself and would need to be mitigated at the operating-system level, by giving the Prometheus service more I/O time when the load is too high.

This means something is hitting a bottleneck on the Go side that has not yet been corrected.

@wbh1 (Contributor) commented Jan 3, 2022

I'm seeing this error also with Prometheus (v2.32.0) and Thanos (v0.22.0). It always coincides with a large dropoff in traffic from my Prometheus server (presumably scrapes stop happening? There are gaps in my graphs from this server during those times), and its HA pair fails to scrape it as well.

Thanos Sidecar has heartbeat timeouts during this period to http://localhost:9090/api/v1/status/config so it seems that Prometheus just stops responding to any HTTP requests (even the metric handler, as seen above) when these log entries are generated.

I can consistently replicate the issue in my Thanos + Prometheus set up by setting the --store.response-timeout flag of Thanos Query to something absurdly low (e.g. 1s) and then running a query that will take longer than that (e.g. max(node_cpu_seconds_total)). The cancelled context results in the log line we're seeing.

Unfortunately, I think this indicates that this is more of a symptom than the cause of an issue. In other words, my issues described above almost certainly aren't caused by whatever's causing this error to print. Instead, some other issue is happening which is causing requests to be interrupted (probably by timeouts) and then this log line gets printed.
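A hedged analogue of that reproduction without Thanos in the loop: cancel an expensive query on the client side with a short timeout, similar in spirit to the --store.response-timeout=1s trick described above, and watch the Prometheus log. The address and query here are assumptions, and whether the line appears depends on timing.

```go
package main

import (
	"net/http"
	"net/url"
	"time"
)

func main() {
	// An intentionally heavy query, mirroring the example in the thread.
	q := url.Values{"query": {"max(node_cpu_seconds_total)"}}

	// An absurdly low timeout so the request may be cancelled mid-response.
	client := &http.Client{Timeout: 1 * time.Second}

	resp, err := client.Get("http://localhost:9090/api/v1/query?" + q.Encode())
	if err == nil {
		resp.Body.Close()
	}
	// If the cancellation hits while Prometheus is still writing the response,
	// the "superfluous response.WriteHeader call" line may appear in its log.
}
```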

@beorn7 (Member) commented Mar 26, 2024

Hello from the bug scrub.

Apparently, nobody with sufficient debugging skills has been able to reproduce this issue.
It remains mysterious. Also, there have been no updates in the last two years. We'll close this now, but please follow up with more reports if you still see this.

beorn7 closed this as completed Mar 26, 2024