Initial join support #8728
Comments
I am planning to do work towards implementing this proposal over the next few months. Feedback is very welcome.
I know everyone will be very excited about JOINs, but the proposal to have lookups queryable via SQL will enable amazing stuff in the web console!
Yes, that is an exciting direction, but wouldn't it be more efficient to develop a Druid connector for Presto?
Hi, @kstrempel. IMO, I don't think this is a problem on the same level. Presto does provide richer SQL queries, but it does not store data itself, so there would have to be a process to ingest data from Druid, which would inevitably incur some performance loss. And if Presto pushes most of the computation down to Druid, then Druid still needs richer SQL query capabilities, which is what this proposal is about. I think this may be one of the reasons why Spark did not directly borrow the SQL features of Impala or Presto, but implemented Spark SQL on its own.
I think a Druid connector for Presto would help get Druid data into Presto and help support 100% of SQL use cases, so it's interesting. But as @asdf2014 mentioned, doing what we can in Druid directly should mean better performance.
@gianm that's an exciting announcement, and I'm looking forward to it.
Great to know this news! 👍 Can't wait to see this in a release!
@kstrempel we looked into join support through Presto a couple of years ago. It's possible, but the Presto connector interface did not have an API to push down predicates. So Druid would have to forward raw, non-aggregated data to Presto, and Presto would do all the query processing, which compromises the performance benefits of Druid. Great to see this initiative.
An initial step towards apache#8728. This patch adds enough functionality to implement a joining cursor on top of a normal datasource. It does not include enough to actually do a query. For that, future patches will need to wire this low-level functionality into the query language.
FYI — I started a design document for implementation at https://gist.github.com/gianm/39548daef74f0373b3c87056e3db4627. I didn't put it into this issue directly, since I think a proposal and design doc have different goals. (The proposal is meant to describe a program for implementing the feature, motivations, considerations, etc.; but the design doc is meant to describe what's already implemented.)
This is a good idea! I look forward to it.
|
I agree. I haven't thought a ton about how it should work. But I think, ideally, it'd use the new InputFormat and InputSource stuff so users don't have to learn new concepts. It would be great if it used tasks too for even more conceptual-weight-reduction.
I meant the datasource that the user provided in the query. I'll edit to make it clearer.
That is a great idea. I will plan to add that when the SQL part starts getting fleshed out more.
Builds on apache#9111 and implements the datasource analysis mentioned in apache#8728. Still can't handle join datasources, but we're a step closer.

Join-related DataSource types:
1) Add "join", "lookup", and "inline" datasources.
2) Add "getChildren" and "withChildren" methods to DataSource, which will be used in the future for query rewriting (e.g. inlining of subqueries).

DataSource analysis functionality:
1) Add DataSourceAnalysis class, which breaks down datasources into three components: outer queries, a base datasource (left-most of the highest-level left-leaning join tree), and other joined-in leaf datasources (the right-hand branches of the left-leaning join tree).
2) Add "isConcrete", "isGlobal", and "isCacheable" methods to DataSource in order to support analysis.
3) Use the DataSourceAnalysis methods throughout the query handling stack, replacing various ad-hoc approaches. Most of the interesting changes are in ClientQuerySegmentWalker (brokers), ServerManager (historicals), and SinkQuerySegmentWalker (indexing tasks).

Other notes:
1) Changed TimelineServerView to return an Optional timeline, which I thought made the analysis changes cleaner to implement.
2) Renamed DataSource#getNames to DataSource#getTableNames, which I think is clearer. Also, made it a Set, so implementations don't need to worry about duplicates.
3) Added QueryToolChest#canPerformSubquery, which is now used by query entry points to determine whether it is safe to pass a subquery dataSource to the query toolchest. Fixes an issue introduced in apache#5471 where subqueries under non-groupBy-typed queries were silently ignored, since neither the query entry point nor the toolchest did anything special with them.
4) The addition of "isCacheable" should work around apache#8713, since UnionDataSource now returns false for cacheability.
@gianm
@gianm When can we expect a release with Join Support and ANSI SQL compliance?
Very cool stuff. Curious if this is still actively being worked on. I also wonder if the Imply implementation has been looked at and considered. They may have some findings that would inform the implementation: |
Motivation
Druid aims to be a more powerful analytical database, and implementing joins is a very common ask from the user community. Druid does support some related features today: WHERE x IN (SELECT ...) syntax and traditional JOIN syntax. But it only supports one subquery per SQL query, and does not support negation ("NOT IN"). Real JOIN support would be more powerful, enabling even more star-schema and subquery-based use cases:
Proposed changes
The idea is to add a "join" datasource, expose it through SQL, and add machinery to brokers and data servers to allow hash-based equijoins of zero or one "table" datasource and any number of "lookup", "inline", or "query" datasources. As a side effect, these proposed changes will add the ability to query lookups directly, using a datasource of type "lookup".
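As a sketch of that side effect, a native query could reference a lookup the same way it references a table today. The spec below is illustrative only: the datasource type "lookup" comes from this proposal, but the query shape, the interval, and the column names are assumptions, not a finalized design.

```json
{
  "queryType": "scan",
  "dataSource": { "type": "lookup", "lookup": "products" },
  "intervals": ["1000-01-01/3000-01-01"],
  "columns": ["k", "v"]
}
```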
I think this subset of functionality is a good start, because it adds meaningful new capabilities and helps unify existing ones. Things that are not in scope for this initial proposal include: joining two "table" datasources to each other, non-equijoins, non-hash-based joins. I think these should all be implemented at some point, as future work.
There are four main areas to this proposal: SQL, native queries, broker behavior, and data server behavior. The next few sections expand on these areas.
See also https://gist.github.com/gianm/39548daef74f0373b3c87056e3db4627 for an (incomplete) implementation-oriented doc.
SQL
An example SQL query might be:
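As a sketch (only the names sales and lookup.products appear in the proposal; the column names such as revenue, product_id, k, and v are assumptions), such a query might look like:

```sql
SELECT
  lookup.products.v AS product_name,
  SUM(sales.revenue) AS total_revenue
FROM sales
JOIN lookup.products ON sales.product_id = lookup.products.k
GROUP BY lookup.products.v
```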
This query takes advantage of the fact that unqualified tables like sales are assumed to be normal datasources. Lookups are referenced as part of the lookup schema, like lookup.products.
Multiple joins can be specified in a single SQL query. We will need to guide Calcite’s cost-based optimizer towards reordering them optimally. This may require more statistics than we currently collect, so gathering those statistics and improving join plans may end up being an area of future work.
Native queries
The rows coming out of a join datasource would be the result of the join. Any query type could use a join datasource without being aware of the fact that joins exist.
Join datasources can be nested within each other. Unlike SQL, native query evaluation will not reorder joins. It will execute them in the order that the join tree is provided.
We will probably not allow joining on "table" datasources, except as the extreme left-hand side of a join.
In order to protect against column name ambiguity (what if the left and right side have a column of the same name?), I propose adding a "rightPrefix" parameter to the join datasource. This would be prefixed to every column name coming from the right side, and should be chosen by the caller to be something that won’t conflict with any left-side columns. Druid SQL will choose one automatically when planning a SQL join.
The join datasource used by the earlier SQL query, above, would be:
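A hedged sketch of that datasource spec follows. The type "join", the nested "lookup" datasource, and "rightPrefix" come from the text above; the remaining field names and the condition syntax are assumptions made for illustration.

```json
{
  "type": "join",
  "left": "sales",
  "right": { "type": "lookup", "lookup": "products" },
  "rightPrefix": "j0.",
  "condition": "product_id == \"j0.k\"",
  "joinType": "INNER"
}
```

The prefix "j0." is the kind of value Druid SQL could choose automatically, since callers are expected to pick a prefix that cannot collide with any left-side column.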
Broker behavior
The following technique should be implemented by CachingClusteredClient. The idea is to either evaluate the query locally, or else get it into a format that data servers can handle (see next section).
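As a rough sketch of that idea (plain Python with assumed names, not Druid's actual CachingClusteredClient API): leaf datasources that are already global can be shipped to data servers as-is, while subqueries are evaluated on the broker and inlined, so that data servers only ever see "lookup" or "inline" leaves on the right-hand side.

```python
# Hypothetical sketch of the broker-side rewrite described above.
# Datasources are modeled as plain dicts; evaluate_locally stands in for
# whatever mechanism the broker uses to run a subquery itself.

def prepare_for_data_servers(base, other_leaves, evaluate_locally):
    """Rewrite joined-in leaves into a form data servers can handle."""
    prepared = []
    for leaf in other_leaves:
        if leaf["type"] in ("lookup", "inline"):
            # Already globally available on data servers; ship as-is.
            prepared.append(leaf)
        elif leaf["type"] == "query":
            # Materialize the subquery on the broker and inline its results.
            rows = evaluate_locally(leaf)
            prepared.append({"type": "inline", "rows": rows})
        else:
            # Per the proposal, "table" datasources may appear only as the
            # extreme left-hand side (the base) of a join.
            raise ValueError("unsupported right-hand datasource: " + leaf["type"])
    return {"base": base, "joined": prepared}
```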
Data server behavior
The following technique should be implemented by ServerManager (historical) and SinkQuerySegmentWalker (indexer) when a join datasource is encountered.
Enumerate all other leaf datasources in depth-first pre-order. We will apply join clauses in this order.
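The per-segment work can then be sketched as a sequence of hash equijoins applied in that pre-order. The Python below is an illustrative sketch with assumed names, not Druid's actual cursor-based implementation; note how the right-side prefix keeps right-hand column names from colliding with left-hand ones, as described in the native query section.

```python
# Illustrative hash equijoin, applied clause-by-clause in pre-order.
# Rows are plain dicts; all names here are assumptions for the sketch.

def build_hash_table(rows, key_column):
    """Index the right-hand rows by their join key."""
    table = {}
    for row in rows:
        table.setdefault(row[key_column], []).append(row)
    return table

def hash_equijoin(left_rows, right_rows, left_key, right_key, right_prefix):
    """Inner hash equijoin; right-side columns get right_prefix to avoid clashes."""
    index = build_hash_table(right_rows, right_key)
    joined = []
    for left in left_rows:
        for right in index.get(left[left_key], []):
            row = dict(left)
            row.update({right_prefix + k: v for k, v in right.items()})
            joined.append(row)
    return joined

def apply_join_clauses(base_rows, clauses):
    """Apply each (right_rows, left_key, right_key, prefix) clause in order,
    mirroring the depth-first pre-order enumeration described above."""
    rows = base_rows
    for right_rows, left_key, right_key, prefix in clauses:
        rows = hash_equijoin(rows, right_rows, left_key, right_key, prefix)
    return rows
```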
Rationale
Some other alternatives considered were:
Operational impact
None expected.
Test plan
Add lots of unit tests in the druid-sql and druid-processing modules.
Future work
Out of scope for this proposal, but would be nice in the future: