uncoveredIntervals can overflow query response header #2108
Comments
That is worrying.
We can also look into a way of returning the metadata in the query result itself instead of the header.
Is there any update on this?
@himanshug I think we should make this feature an optional query flag, and disable it by default until there is a more robust implementation.
@xvrl SGTM ... let me do a PR to make that configurable.
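One possible shape for such an opt-in flag is a query context parameter (the parameter name and datasource below are illustrative; Druid later added an `uncoveredIntervalsLimit` context parameter along these lines):

```json
{
  "queryType": "segmentMetadata",
  "dataSource": "my_datasource",
  "intervals": ["2016-01-01/2016-01-02"],
  "context": {
    "uncoveredIntervalsLimit": 10
  }
}
```

With the flag unset, no uncovered-interval metadata would be emitted, so the header stays small by default.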
So, there are a couple of fixes that need to be done I think.
There should also be a configuration in Jetty to increase the allowed header size. Currently there's not a great way to configure Jetty options within Druid, though.
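In embedded Jetty the relevant knobs are `HttpConfiguration.setRequestHeaderSize` and `HttpConfiguration.setResponseHeaderSize`. If Druid exposed these as runtime properties, the configuration might look something like the fragment below (the property name is hypothetical; Druid did not expose it at the time of this issue):

```properties
# Hypothetical property mapping onto Jetty's header buffer sizes.
# Default Jetty buffers are 8KiB; raising to 16KiB as an example.
druid.server.http.maxRequestHeaderSize=16384
```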
So (2) is still needed, and (1) needs to be done in parallel. Created #2331 to track (1) separately.
@drcrallen while there is theoretically no limit to http header sizes, I am worried increasing the header size could cause interoperability issues with browsers, proxies, or other http clients querying Druid. Most of those set limits on the size of http headers, which the user may not have much control over.
Fwiw, I think that the "solution" Druid implements for (1) can be as simple as: "if the content of the query context map is greater than 2kb, log it and replace it with a UUID that can be used to find the log".
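A minimal sketch of that idea, assuming the serialized context arrives as a JSON string (class and method names here are illustrative, not Druid's actual API):

```java
import java.util.UUID;

public class ResponseContextTruncator {
    // Threshold from the suggestion above: contexts over 2KB get logged
    // and replaced with a UUID lookup key in the response header.
    static final int MAX_HEADER_BYTES = 2048;

    static String toHeaderValue(String serializedContext) {
        if (serializedContext.getBytes().length <= MAX_HEADER_BYTES) {
            return serializedContext;
        }
        String id = UUID.randomUUID().toString();
        // In a real implementation this would go to the service log, so the
        // full context can be recovered by searching for the UUID.
        System.err.println("response context " + id + ": " + serializedContext);
        return "{\"truncatedContextId\":\"" + id + "\"}";
    }

    public static void main(String[] args) {
        System.out.println(toHeaderValue("{\"uncoveredIntervals\":[]}"));
    }
}
```

Clients that need the full metadata would then fetch it out of band (or via a follow-up endpoint) using the UUID, keeping the header bounded regardless of query shape.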
@xvrl @cheddar @drcrallen pls put comments relevant to (1) in #2331 |
CachingClusteredClient builds up a list of "uncovered intervals", intervals within the query interval that had no data in the underlying segments.
This list is returned as part of the response header in the "X-Druid-Response-Context" field.
If there are a large number of uncovered intervals, the response header may not be large enough to hold this list. On my local system I saw the header buffer had an 8KB size limit.
I noticed this while running a SegmentMetadataQuery on a set of minute-granularity segments that only contained data for even minutes.
e.g.
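As a hypothetical illustration of that scenario: with minute-granularity segments covering only even minutes, every odd minute in the query interval becomes an uncovered interval, and the serialized list quickly exceeds an 8KB header buffer (the date and interval format below are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class UncoveredIntervalsExample {
    // Segments exist only for even minutes, so every odd minute is uncovered.
    static List<String> uncoveredIntervals(int totalMinutes) {
        List<String> uncovered = new ArrayList<>();
        for (int m = 0; m < totalMinutes; m++) {
            if (m % 2 != 0) {
                uncovered.add(String.format(
                        "2016-01-01T%02d:%02d/2016-01-01T%02d:%02d",
                        m / 60, m % 60, (m + 1) / 60, (m + 1) % 60));
            }
        }
        return uncovered;
    }

    public static void main(String[] args) {
        // One day of minute granularity: 720 uncovered intervals,
        // well over 8192 bytes once serialized into a single header value.
        List<String> uncovered = uncoveredIntervals(24 * 60);
        int headerBytes = String.join(",", uncovered).getBytes().length;
        System.out.println(uncovered.size() + " uncovered intervals, ~" + headerBytes + " bytes");
    }
}
```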