Remote read is failing with 'http: multiple response.WriteHeader calls' in remote node #4307
Comments
Thanks for the report! Hmm, I have tried to reproduce this by pointing a 2.2.1 Prometheus at the remote read endpoint of another 2.2.1 Prometheus, but it is working for me. Proxy config:

Do you still have the problem with Prometheus 2.3.1, by the way?
Thanks for your reply.
Ah sorry, I didn't fully understand earlier. At first I thought the proxy Prometheus ran into a query timeout, but the error message indicates that we write HTTP headers twice. That can happen if you first set some headers, then start writing the body (which automatically writes out the headers first), then try to set some more headers. The only place where I currently see that could happen is in https://github.com/prometheus/prometheus/blob/master/web/api/v1/api.go#L663-L666. Your remote timeout of 5s is pretty low, by the way. Have you tried setting it to something higher to see whether the problem goes away?
Thank you very much for helping me analyze the cause, but I still don't quite understand the mechanism of writing headers twice. I also think a timeout is fairly likely. My current approach is to optimize the query statement to reduce the response time and payload size. I have also raised the timeout on the proxy to 30s and will observe for a while.
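For reference, raising that timeout is a one-line change in the proxy's remote_read block. A minimal sketch (the shard URL below is a placeholder, not from this issue):

```yaml
remote_read:
  - url: http://shard-1.example.com:9090/api/v1/read  # placeholder shard address
    remote_timeout: 30s  # raised from the previous 5s
```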
Imagine the following: the query against the remote node takes longer than the proxy's 5s remote timeout. By the time the error surfaces, the handler on the remote node has already started writing the response body (which sends the headers), so the error path's attempt to write an error status triggers the second WriteHeader call.
So if you set your timeout higher, this problem will hopefully go away. The only thing remaining here would then be to log this error better.
Oh, yeah.
xudawei closed this on Jul 3, 2018
lock bot commented Mar 22, 2019
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
xudawei commented Jun 24, 2018 (edited)
Proposal
Use case. Why is this important?
I shard data across three Prometheus servers to scrape more than 2000 nodes, and use another Prometheus server with remote_read as a proxy to serve PromQL queries from Grafana. The architecture looks like the following:
Grafana -> Prom (with remote_read; I will call it "proxy" on this page) -> Prom (three nodes with shards) -> node-exporter
Bug Report
What did you do?
I curl a URL on the proxy:
http://proxy.domain.com/api/v1/series?match[]=up
What did you expect to see?
Return all active node with their labels.
What did you see instead? Under which circumstances?
Sometimes (only sometimes!) I get an error back:
context deadline exceeded
Environment
System information:
X86_64 RHEL 7.4
Prometheus version:
Alertmanager version:
Not using Alertmanager
Prometheus configuration file:
Proxy (Prom with remote_read)
Prom (one of the shard nodes)