Limit chunked requests sent to OpenTSDB #865

swsnider commented Jun 30, 2015

I've figured out that in order to make Prometheus work with OpenTSDB, I have to set opentsdb-url, and I have to set tsd.http.request.enable_chunked=true on the OpenTSDB end. However, now I have a problem: I see a bunch of log messages from OpenTSDB saying that Prometheus tried to shove a larger chunk than the default (4096) down the pipe. Before I just set this limit arbitrarily high: shouldn't Prometheus provide a lever to limit the amount of data it sends per request, to match the OpenTSDB setting?
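To illustrate the kind of lever I mean, here is a minimal sketch of a byte-bounded batcher. Everything here is hypothetical (the names and the constant are made up, and this is not actual Prometheus code), it just shows the shape of the feature:

```go
package main

import "fmt"

// maxRequestBytes would be configured to mirror OpenTSDB's per-chunk
// limit (4096 by default). Hypothetical name, not a real Prometheus flag.
const maxRequestBytes = 4096

// splitBatches greedily packs already-encoded samples into requests that
// stay under maxRequestBytes each. A single sample bigger than the limit
// still gets its own request rather than being dropped.
func splitBatches(samples [][]byte) [][][]byte {
	var batches [][][]byte
	var cur [][]byte
	size := 0
	for _, s := range samples {
		if size+len(s) > maxRequestBytes && len(cur) > 0 {
			batches = append(batches, cur)
			cur, size = nil, 0
		}
		cur = append(cur, s)
		size += len(s)
	}
	if len(cur) > 0 {
		batches = append(batches, cur)
	}
	return batches
}

func main() {
	samples := [][]byte{make([]byte, 3000), make([]byte, 2000), make([]byte, 1500)}
	for i, b := range splitBatches(samples) {
		fmt.Printf("request %d: %d samples\n", i, len(b))
	}
}
```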
We never got to test this on a real OpenTSDB (only a test instance on a single node without Hadoop/HDFS etc.), so any contributions toward making it work better are welcome. I don't think we had to set that option in our tests.

The only thing we have right now is a hardcoded setting for how many samples we send at most in a single request to a remote storage: https://github.com/prometheus/prometheus/blob/master/storage/remote/queue_manager.go#L28-L29. That doesn't give you any guarantee about the number of bytes, though. Theoretically you could even have an arbitrarily large time series name on a single sample in Prometheus, but that's a pathological edge case.

Anyway, I'm wondering: if this is simply about HTTP chunking, shouldn't that be handled automatically by the HTTP layer? Actually, I found this in http://golang.org/pkg/net/http/httputil: "The http package adds chunking automatically if handlers don't set a Content-Length header." Maybe we just need to set the Content-Length header explicitly.
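The Go client side of this is easy to check: when the request body has a known length, net/http sets Content-Length itself and skips chunked encoding. A minimal sketch (the URL and payload are made up for illustration):

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// A made-up OpenTSDB /api/put payload, just to have some bytes.
	payload := []byte(`[{"metric":"test.metric","timestamp":1435708800,"value":1,"tags":{"host":"a"}}]`)

	// With a *bytes.Reader body, net/http knows the size up front, so it
	// sets ContentLength and the request goes out un-chunked.
	req, err := http.NewRequest("POST", "http://localhost:4242/api/put", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	fmt.Println(req.ContentLength == int64(len(payload))) // true

	// With a plain io.Reader of unknown length, ContentLength stays unset
	// and the transport falls back to Transfer-Encoding: chunked.
}
```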
The way I understand it from my quick glance around, OpenTSDB requires chunking for the HTTP transport (there's apparently no such limit on the telnet transport, though I know nothing about it except that it exists), and it will not accept chunks bigger than 4K by default. I 'fixed' this by setting that limit arbitrarily high on the OpenTSDB side.
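Concretely, the OpenTSDB side of that workaround would be something like the following in opentsdb.conf. The enable_chunked property comes from the report above; I'm not certain of the exact name of the max-chunk property, and the value is arbitrary, so verify both against your OpenTSDB version:

```
# opentsdb.conf -- sketch; property names/values should be verified
tsd.http.request.enable_chunked = true
# raise the per-chunk limit above the 4096-byte default
tsd.http.request.max_chunk = 65536
```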
Hmm, if it requires chunking, why is it a boolean on/off flag then? Chunking is an HTTP term, so it wouldn't be related to the telnet protocol. We should definitely try setting Content-Length headers and turning off chunking in OpenTSDB again to see if that combination works well.
I'm getting more confused now. I just had a Prometheus instance send OpenTSDB samples to a simple netcat endpoint for testing, and it seems it's not actually chunking the request at all, and it is sending the right Content-Length header.
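For anyone who wants to reproduce that check without netcat, a rough Go equivalent: listen where OpenTSDB would, point Prometheus's opentsdb-url at it, and look for Content-Length vs. Transfer-Encoding: chunked in the dumped request:

```go
package main

import (
	"io"
	"net"
	"os"
)

func main() {
	// Listen on OpenTSDB's default HTTP port (4242) and point
	// Prometheus's opentsdb-url at this address.
	ln, err := net.Listen("tcp", ":4242")
	if err != nil {
		panic(err)
	}
	conn, err := ln.Accept()
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Dump the raw HTTP request, headers included, to stdout.
	io.Copy(os.Stdout, conn)
}
```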
Ah ha! http://stackoverflow.com/questions/27841071/what-is-a-chunked-request-in-opentsdb Because Netty, apparently.
So, this is not a Prometheus thing at all, just an OpenTSDB + Netty thing. Sorry for wasting your time.
swsnider closed this Jun 30, 2015
Oh. Not wasting time at all - good to know about this issue in case someone else runs into it in the future!
tsuna referenced this issue Jul 22, 2015

Large API requests with Content-Length header are mistakenly treated as chunked requests #539 (Open)
tsuna commented Jul 22, 2015

This sounds like a bug in OpenTSDB; I filed the bug above so we can track and fix this.
lock bot commented Mar 24, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.