GroupBy array based result rows #8118
This seems reasonable to me 🤘. How does …

The idea is the row order would be determined solely by the granularity, dimensions, aggregators, and post-aggregators, in the following way:

- First the timestamp, but only if the granularity is not "all" (otherwise the timestamp is omitted entirely).
- Then each dimension, in the order listed in the query.
- Then each aggregator, in the order listed in the query.
- Then each post-aggregator, in the order listed in the query.

There wouldn't be headers; callers would be expected to know which element is which based on the above rules.
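The positional rules above can be sketched as a small helper. This is purely illustrative (the class and method names are hypothetical, not Druid APIs): it computes the array index of a dimension or aggregator from the query's granularity and its field lists.

```java
// Hypothetical illustration of the positional layout described above:
// [timestamp (only if granularity != "all")] [dimensions...] [aggregators...] [post-aggregators...]
public class ResultRowLayout {
    // Index of the Nth dimension; the timestamp, when present, occupies slot 0.
    static int dimensionIndex(boolean granularityIsAll, int dimensionNumber) {
        int timestampSlots = granularityIsAll ? 0 : 1;
        return timestampSlots + dimensionNumber;
    }

    // Index of the Nth aggregator: it comes after the timestamp slot (if any)
    // and all dimensions.
    static int aggregatorIndex(boolean granularityIsAll, int numDimensions, int aggregatorNumber) {
        return dimensionIndex(granularityIsAll, numDimensions) + aggregatorNumber;
    }

    public static void main(String[] args) {
        // Example row for a non-"all" granularity: timestamp, one dimension, one aggregator.
        Object[] row = {1564617600000L, "United States", 1234L};
        System.out.println(row[dimensionIndex(false, 0)]);      // prints "United States"
        System.out.println(row[aggregatorIndex(false, 1, 0)]);  // prints "1234"
    }
}
```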
Array-based result row sounds good to me. I'm wondering whether this could also apply to other query types.
It could certainly apply to them, although in the case of topN it would be potentially wasteful: we'd need to write the timestamp once per result value instead of once per granular time bucket. (Right now, the topN result format is an array of time buckets, each one containing a timestamp plus a list of result values, and the nesting means the timestamp only needs to be written once.)

Would potentially be a nice option for every query type though. I bet for most topNs granularity is "all", and so the timestamp could be omitted.
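To make the trade-off concrete, here is the current nested topN result shape next to a hypothetical flat array form (the dimension and metric names are made up for illustration). In the nested form the timestamp appears once per time bucket; in the flat form it would repeat on every row unless granularity is "all":

```json
[
  {
    "timestamp": "2019-08-01T00:00:00.000Z",
    "result": [
      {"page": "Foo", "edits": 10},
      {"page": "Bar", "edits": 7}
    ]
  }
]
```

A flat, array-based equivalent would repeat the timestamp per row:

```json
[
  ["2019-08-01T00:00:00.000Z", "Foo", 10],
  ["2019-08-01T00:00:00.000Z", "Bar", 7]
]
```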
GroupBy array-based result rows. Fixes apache#8118; see that proposal for details. Other than the GroupBy changes, the main other "interesting" classes are:

- ResultRow: The array-based result type.
- BaseQuery: T is no longer required to be Comparable.
- QueryToolChest: Adds "decorateObjectMapper" to enable query-aware serialization and deserialization of result rows (necessary due to their positional nature).
- QueryResource: Uses the new decoration functionality.
- DirectDruidClient: Also uses the new decoration functionality.
- QueryMaker (in Druid SQL): Modifications to read ResultRows.

These classes weren't changed, but got some new javadocs:

- BySegmentQueryRunner
- FinalizeResultsQueryRunner
- Query
Motivation
GroupBy queries internally represent result rows as MapBasedRow objects, which have two fields: a timestamp and an event map (Map<String, Object>). As a result, we need to do relatively expensive Map put and get operations (typically these are HashMaps or LinkedHashMaps) at many points: when rows are first generated after each segment scan, when they are merged on historicals, when they are serialized and deserialized, and when they are merged again on the broker.

The overhead is especially noticeable when the result set of the groupBy query is large.
See also #6389.
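The cost difference being described can be sketched as follows. This is illustrative only, not Druid code (the class and method names are hypothetical): a map-based row hashes the column name on every access, while an array-based row resolves each column's position once per query and then does plain index loads per row.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative comparison of name-based access (MapBasedRow style)
// versus positional access (ResultRow style).
public class RowAccess {
    // Map-based: every access hashes the column name and walks map entries.
    static Object fromMapRow(Map<String, Object> row, String column) {
        return row.get(column);
    }

    // Array-based: the column's position is computed once per query from the
    // dimension/aggregator lists; each row access is then a simple index load.
    static Object fromArrayRow(Object[] row, int position) {
        return row[position];
    }

    public static void main(String[] args) {
        Map<String, Object> mapRow = new HashMap<>();
        mapRow.put("country", "US");
        mapRow.put("edits", 10L);
        Object[] arrayRow = {"US", 10L};  // the same row in positional form

        System.out.println(fromMapRow(mapRow, "edits"));  // prints "10"
        System.out.println(fromArrayRow(arrayRow, 1));    // prints "10"
    }
}
```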
Proposed changes

- Add a ResultRow class that simply wraps an Object[] and allows position-based access.
- Add ObjectMapper decorateObjectMapper(ObjectMapper, QueryType) to QueryToolChest, to aid in implementing the compatibility plan described in "Operational impact" below. QueryResource would use it so it could serialize results into either arrays or maps depending on the value of resultAsArray. DirectDruidClient would use it so it could deserialize results into ResultRow regardless of whether they originated as ResultRows or MapBasedRows. (By the way, the serialized form of a ResultRow would be a simple JSON array.)

Rationale
Some other potential approaches that I considered, and did not go with, include:

- Having ResultRow implement org.apache.druid.data.input.Row (just like MapBasedRow does). The reason for avoiding this is that the interface is all about retrieving fields by name -- getRaw(String dimension), etc. -- and I wanted to do positional access instead.
- Using a bare Object[] instead of a wrapper ResultRow around the Object[]. It would have saved a little memory, but I thought the benefits of type safety (it's clear what ResultRow means when it appears in method signatures) and a nicer API would be worth it.

Operational impact
The format of data in the query cache would not change.
The wire format of groupBy results would change (this is part of the point of the change), but I plan to do this with no compatibility impact, by adding a new query context flag resultAsArray that defaults to false. If false, Druid would use array-based result rows for in-memory operations, but then convert them to MapBasedRows for serialization purposes, keeping the wire format compatible. If true, Druid would use array-based result rows for serialization too.

I'd have brokers always set resultAsArray to true on queries they send down to historicals. Since we tell cluster operators to update historicals first, by the time the broker is updated we can assume the historicals will know how to interpret the option. Once brokers are updated, users would also be able to set resultAsArray themselves if they want, and receive array-based results.

So, due to the above design, there should be no operational impact.
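As a sketch of how a user would opt in, here is a groupBy query with the proposed context flag set (the datasource, dimension, and aggregator names are hypothetical; only the resultAsArray flag comes from this proposal):

```json
{
  "queryType": "groupBy",
  "dataSource": "wikipedia",
  "granularity": "all",
  "dimensions": ["countryName"],
  "aggregations": [{"type": "longSum", "name": "edits", "fieldName": "count"}],
  "intervals": ["2019-08-01/2019-08-02"],
  "context": {"resultAsArray": true}
}
```

Under the positional rules above, and since granularity is "all" (so no timestamp slot), each result row would be a plain JSON array such as ["United States", 1234]: the dimension value followed by the aggregator value.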
Test plan
Existing unit tests will cover a lot of this. In addition, I plan to test on live clusters, especially the compatibility stuff.