SQL: Remove "useFallback" feature. #7567

Merged
merged 1 commit on Apr 29, 2019
1 change: 0 additions & 1 deletion docs/content/configuration/index.md
@@ -1424,7 +1424,6 @@ The Druid SQL server is configured through the following properties on the Broker.
|`druid.sql.planner.selectThreshold`|Page size threshold for [Select queries](../querying/select-query.html). Select queries for larger result sets will be issued back-to-back using pagination.|1000|
|`druid.sql.planner.useApproximateCountDistinct`|Whether to use an approximate cardinality algorithm for `COUNT(DISTINCT foo)`.|true|
|`druid.sql.planner.useApproximateTopN`|Whether to use approximate [TopN queries](../querying/topnquery.html) when a SQL query could be expressed as such. If false, exact [GroupBy queries](../querying/groupbyquery.html) will be used instead.|true|
|`druid.sql.planner.useFallback`|Whether to evaluate operations on the Broker when they cannot be expressed as Druid queries. This option is not recommended for production since it can generate unscalable query plans. If false, SQL queries that cannot be translated to Druid queries will fail.|false|
|`druid.sql.planner.requireTimeCondition`|Whether to require SQL queries to have filter conditions on the __time column, so that all generated native queries will have user-specified intervals. If true, all queries without a filter condition on the __time column will fail.|false|
|`druid.sql.planner.sqlTimeZone`|Sets the default time zone for the server, which will affect how time functions and timestamp literals behave. Should be a time zone name like "America/Los_Angeles" or offset like "-08:00".|UTC|
|`druid.sql.planner.serializeComplexValues`|Whether to serialize "complex" output values. If false, the class name will be returned instead of the serialized value.|true|
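
For orientation, here is how these planner flags are typically set in a Broker's runtime.properties (a sketch; values are illustrative, and after this patch `druid.sql.planner.useFallback` is gone entirely):

```properties
# Illustrative Broker runtime.properties fragment; values are examples only.
druid.sql.enable=true
druid.sql.planner.useApproximateCountDistinct=true
druid.sql.planner.useApproximateTopN=true
druid.sql.planner.requireTimeCondition=false
druid.sql.planner.sqlTimeZone=UTC
# druid.sql.planner.useFallback no longer exists after this patch;
# setting it has no effect.
```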
2 changes: 0 additions & 2 deletions docs/content/querying/sql.md
@@ -524,7 +524,6 @@ Connection context can be specified as JDBC connection properties or as a "context" object.
|`sqlTimeZone`|Sets the time zone for this connection, which will affect how time functions and timestamp literals behave. Should be a time zone name like "America/Los_Angeles" or offset like "-08:00".|druid.sql.planner.sqlTimeZone on the Broker (default: UTC)|
|`useApproximateCountDistinct`|Whether to use an approximate cardinality algorithm for `COUNT(DISTINCT foo)`.|druid.sql.planner.useApproximateCountDistinct on the Broker (default: true)|
|`useApproximateTopN`|Whether to use approximate [TopN queries](topnquery.html) when a SQL query could be expressed as such. If false, exact [GroupBy queries](groupbyquery.html) will be used instead.|druid.sql.planner.useApproximateTopN on the Broker (default: true)|
|`useFallback`|Whether to evaluate operations on the Broker when they cannot be expressed as Druid queries. This option is not recommended for production since it can generate unscalable query plans. If false, SQL queries that cannot be translated to Druid queries will fail.|druid.sql.planner.useFallback on the Broker (default: false)|
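
These context keys ride along as JDBC connection properties. A minimal sketch (assuming the Avatica JDBC driver is on the classpath; the Broker address is a placeholder):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class DruidJdbcContextExample
{
  public static void main(String[] args) throws Exception
  {
    // Placeholder Broker address; adjust host/port for your cluster.
    final String url = "jdbc:avatica:remote:url=http://localhost:8082/druid/v2/sql/avatica/";

    // Connection context from the table above, passed as JDBC properties.
    final Properties context = new Properties();
    context.setProperty("sqlTimeZone", "America/Los_Angeles");
    context.setProperty("useApproximateTopN", "false");

    try (Connection connection = DriverManager.getConnection(url, context);
         Statement statement = connection.createStatement();
         ResultSet resultSet = statement.executeQuery("SELECT CURRENT_TIMESTAMP")) {
      while (resultSet.next()) {
        System.out.println(resultSet.getString(1));
      }
    }
  }
}
```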

### Retrieving metadata

@@ -725,7 +724,6 @@ The Druid SQL server is configured through the following properties on the Broker.
|`druid.sql.planner.metadataRefreshPeriod`|Throttle for metadata refreshes.|PT1M|
|`druid.sql.planner.useApproximateCountDistinct`|Whether to use an approximate cardinality algorithm for `COUNT(DISTINCT foo)`.|true|
|`druid.sql.planner.useApproximateTopN`|Whether to use approximate [TopN queries](../querying/topnquery.html) when a SQL query could be expressed as such. If false, exact [GroupBy queries](../querying/groupbyquery.html) will be used instead.|true|
|`druid.sql.planner.useFallback`|Whether to evaluate operations on the Broker when they cannot be expressed as Druid queries. This option is not recommended for production since it can generate unscalable query plans. If false, SQL queries that cannot be translated to Druid queries will fail.|false|
|`druid.sql.planner.requireTimeCondition`|Whether to require SQL queries to have filter conditions on the __time column, so that all generated native queries will have user-specified intervals. If true, all queries without a filter condition on the __time column will fail.|false|
|`druid.sql.planner.sqlTimeZone`|Sets the default time zone for the server, which will affect how time functions and timestamp literals behave. Should be a time zone name like "America/Los_Angeles" or offset like "-08:00".|UTC|
|`druid.sql.planner.metadataSegmentCacheEnable`|Whether to keep a cache of published segments in the Broker. If true, the Broker polls the Coordinator in the background to get segments from the metadata store and maintains a local cache. If false, the Coordinator's REST API will be invoked when the Broker needs published segment info.|false|
sql/src/main/java/org/apache/druid/sql/calcite/planner/DruidPlanner.java
@@ -32,11 +32,9 @@
import org.apache.calcite.linq4j.Enumerable;
import org.apache.calcite.linq4j.Enumerator;
import org.apache.calcite.plan.RelOptPlanner;
import org.apache.calcite.plan.RelOptTable;
import org.apache.calcite.plan.RelOptUtil;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.RelRoot;
import org.apache.calcite.rel.RelVisitor;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.rex.RexBuilder;
import org.apache.calcite.rex.RexNode;
@@ -57,7 +55,6 @@

import java.io.Closeable;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Set;
@@ -92,7 +89,7 @@ public PlannerResult plan(final String sql)
return planWithDruidConvention(explain, root);
}
catch (RelOptPlanner.CannotPlanException e) {
// Try again with BINDABLE convention. Used for querying Values, metadata tables, and fallback.
// Try again with BINDABLE convention. Used for querying Values and metadata tables.
try {
return planWithBindableConvention(explain, root);
}
@@ -193,29 +190,8 @@ private PlannerResult planWithBindableConvention(
);
}

final Set<String> datasourceNames = new HashSet<>();
bindableRel.childrenAccept(
new RelVisitor()
{
@Override
public void visit(RelNode node, int ordinal, RelNode parent)
{
if (node instanceof DruidRel) {
datasourceNames.addAll(((DruidRel) node).getDataSourceNames());
}
if (node instanceof Bindables.BindableTableScan) {
Bindables.BindableTableScan bts = (Bindables.BindableTableScan) node;
RelOptTable table = bts.getTable();
String tableName = table.getQualifiedName().get(0);
datasourceNames.add(tableName);
}
node.childrenAccept(this);
}
}
);

if (explain != null) {
return planExplanation(bindableRel, explain, datasourceNames);
return planExplanation(bindableRel, explain, ImmutableSet.of());
} else {
final BindableRel theRel = bindableRel;
final DataContext dataContext = plannerContext.createDataContext((JavaTypeFactory) planner.getTypeFactory());
@@ -252,7 +228,7 @@ public void cleanup(EnumeratorIterator iterFromMake)
}
), () -> enumerator.close());
};
return new PlannerResult(resultsSupplier, root.validatedRowType, datasourceNames);
return new PlannerResult(resultsSupplier, root.validatedRowType, ImmutableSet.of());
}
}
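
For context on the hunks above: plan() tries the DRUID convention first and retries with BINDABLE only on CannotPlanException, which after this patch covers just Values and metadata tables. A condensed sketch of that control flow (helper names are from the diff; parseAndValidate and the exact signatures are simplifications, not the literal source):

```java
// Condensed sketch of DruidPlanner.plan(); signatures are simplified.
public PlannerResult plan(final String sql) throws Exception
{
  final RelRoot root = parseAndValidate(sql); // stand-in for the Calcite parse/validate steps

  try {
    // Preferred path: the entire tree is translated to native Druid queries.
    return planWithDruidConvention(root);
  }
  catch (RelOptPlanner.CannotPlanException e) {
    // Only Values and metadata tables can plan this way now; queries that
    // previously "fell back" to Broker-side evaluation fail instead.
    return planWithBindableConvention(root);
  }
}
```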

sql/src/main/java/org/apache/druid/sql/calcite/planner/PlannerConfig.java
@@ -31,7 +31,6 @@ public class PlannerConfig
{
public static final String CTX_KEY_USE_APPROXIMATE_COUNT_DISTINCT = "useApproximateCountDistinct";
public static final String CTX_KEY_USE_APPROXIMATE_TOPN = "useApproximateTopN";
public static final String CTX_KEY_USE_FALLBACK = "useFallback";

@JsonProperty
private Period metadataRefreshPeriod = new Period("PT1M");
@@ -51,9 +50,6 @@ public class PlannerConfig
@JsonProperty
private boolean useApproximateTopN = true;

@JsonProperty
private boolean useFallback = false;

@JsonProperty
private boolean requireTimeCondition = false;

@@ -111,11 +107,6 @@ public boolean isUseApproximateTopN()
return useApproximateTopN;
}

public boolean isUseFallback()
{
return useFallback;
}

public boolean isRequireTimeCondition()
{
return requireTimeCondition;
@@ -157,11 +148,6 @@ public PlannerConfig withOverrides(final Map<String, Object> context)
CTX_KEY_USE_APPROXIMATE_TOPN,
isUseApproximateTopN()
);
newConfig.useFallback = getContextBoolean(
context,
CTX_KEY_USE_FALLBACK,
isUseFallback()
);
newConfig.requireTimeCondition = isRequireTimeCondition();
newConfig.sqlTimeZone = getSqlTimeZone();
newConfig.awaitInitializationOnStart = isAwaitInitializationOnStart();
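
withOverrides() copies each field, letting a per-query context key replace the server default. A self-contained sketch of the boolean-override helper it relies on (a hypothetical stand-in mirroring the behavior implied above, where context values may arrive as Strings or Booleans):

```java
import java.util.Map;

final class ContextOverrides
{
  // Hypothetical stand-in for the helper used by PlannerConfig.withOverrides():
  // the server default wins when the key is absent, and a context value may
  // arrive either as a Boolean or as a String such as "true".
  static boolean getContextBoolean(
      final Map<String, Object> context,
      final String key,
      final boolean defaultValue
  )
  {
    final Object value = context.get(key);
    if (value == null) {
      return defaultValue;
    } else if (value instanceof Boolean) {
      return (Boolean) value;
    } else if (value instanceof String) {
      return Boolean.parseBoolean((String) value);
    } else {
      throw new IllegalArgumentException("Expected a boolean value for key: " + key);
    }
  }
}
```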
@@ -204,7 +190,6 @@ public boolean equals(final Object o)
maxQueryCount == that.maxQueryCount &&
useApproximateCountDistinct == that.useApproximateCountDistinct &&
useApproximateTopN == that.useApproximateTopN &&
useFallback == that.useFallback &&
requireTimeCondition == that.requireTimeCondition &&
awaitInitializationOnStart == that.awaitInitializationOnStart &&
metadataSegmentCacheEnable == that.metadataSegmentCacheEnable &&
@@ -225,7 +210,6 @@ public int hashCode()
maxQueryCount,
useApproximateCountDistinct,
useApproximateTopN,
useFallback,
requireTimeCondition,
awaitInitializationOnStart,
sqlTimeZone,
@@ -245,7 +229,6 @@ public String toString()
", maxQueryCount=" + maxQueryCount +
", useApproximateCountDistinct=" + useApproximateCountDistinct +
", useApproximateTopN=" + useApproximateTopN +
", useFallback=" + useFallback +
", requireTimeCondition=" + requireTimeCondition +
", awaitInitializationOnStart=" + awaitInitializationOnStart +
", metadataSegmentCacheEnable=" + metadataSegmentCacheEnable +
40 changes: 14 additions & 26 deletions sql/src/main/java/org/apache/druid/sql/calcite/planner/Rules.java
@@ -72,7 +72,6 @@
import org.apache.calcite.tools.RelBuilder;
import org.apache.druid.sql.calcite.rel.QueryMaker;
import org.apache.druid.sql.calcite.rule.CaseFilteredAggregatorRule;
import org.apache.druid.sql.calcite.rule.DruidRelToBindableRule;
import org.apache.druid.sql.calcite.rule.DruidRelToDruidRule;
import org.apache.druid.sql.calcite.rule.DruidRules;
import org.apache.druid.sql.calcite.rule.DruidSemiJoinRule;
@@ -187,7 +186,7 @@ public static List<Program> programs(final PlannerContext plannerContext, final QueryMaker queryMaker)
);
return ImmutableList.of(
Programs.sequence(hepProgram, Programs.ofRules(druidConventionRuleSet(plannerContext, queryMaker))),
Programs.sequence(hepProgram, Programs.ofRules(bindableConventionRuleSet(plannerContext, queryMaker)))
Programs.sequence(hepProgram, Programs.ofRules(bindableConventionRuleSet(plannerContext)))
);
}

@@ -196,28 +195,29 @@ private static List<RelOptRule> druidConventionRuleSet(
final QueryMaker queryMaker
)
{
return ImmutableList.<RelOptRule>builder()
.addAll(baseRuleSet(plannerContext, queryMaker))
final ImmutableList.Builder<RelOptRule> retVal = ImmutableList.<RelOptRule>builder()
.addAll(baseRuleSet(plannerContext))
.add(DruidRelToDruidRule.instance())
.build();
.add(new DruidTableScanRule(queryMaker))
.addAll(DruidRules.rules());

if (plannerContext.getPlannerConfig().getMaxSemiJoinRowsInMemory() > 0) {
retVal.add(DruidSemiJoinRule.instance());
}

return retVal.build();
}

private static List<RelOptRule> bindableConventionRuleSet(
final PlannerContext plannerContext,
final QueryMaker queryMaker
)
private static List<RelOptRule> bindableConventionRuleSet(final PlannerContext plannerContext)
{
return ImmutableList.<RelOptRule>builder()
.addAll(baseRuleSet(plannerContext, queryMaker))
.addAll(baseRuleSet(plannerContext))
.addAll(Bindables.RULES)
.add(AggregateReduceFunctionsRule.INSTANCE)
.build();
}

private static List<RelOptRule> baseRuleSet(
final PlannerContext plannerContext,
final QueryMaker queryMaker
)
private static List<RelOptRule> baseRuleSet(final PlannerContext plannerContext)
{
final PlannerConfig plannerConfig = plannerContext.getPlannerConfig();
final ImmutableList.Builder<RelOptRule> rules = ImmutableList.builder();
Expand All @@ -236,22 +236,10 @@ private static List<RelOptRule> baseRuleSet(
rules.add(AggregateExpandDistinctAggregatesRule.JOIN);
}

if (plannerConfig.isUseFallback()) {
rules.add(DruidRelToBindableRule.instance());
}

rules.add(SortCollapseRule.instance());
rules.add(CaseFilteredAggregatorRule.instance());
rules.add(ProjectAggregatePruneUnusedCallRule.instance());

// Druid-specific rules.
rules.add(new DruidTableScanRule(queryMaker));
rules.addAll(DruidRules.rules());

if (plannerConfig.getMaxSemiJoinRowsInMemory() > 0) {
rules.add(DruidSemiJoinRule.instance());
}

return rules.build();
}
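
The refactor above also moves the Druid-specific rules out of baseRuleSet() and gates the semi-join rule on configuration inside druidConventionRuleSet(). A toy, self-contained sketch of that builder pattern (rule names as plain strings for illustration only):

```java
import com.google.common.collect.ImmutableList;
import java.util.List;

final class ConditionalRuleSet
{
  // Mirrors the pattern in druidConventionRuleSet(): unconditional rules go
  // into the builder first, optional rules are gated on configuration, and
  // the immutable list is built once at the end.
  static List<String> druidConventionRuleSet(final int maxSemiJoinRowsInMemory)
  {
    final ImmutableList.Builder<String> retVal = ImmutableList.<String>builder()
        .add("DruidRelToDruidRule")
        .add("DruidTableScanRule");

    if (maxSemiJoinRowsInMemory > 0) {
      retVal.add("DruidSemiJoinRule");
    }

    return retVal.build();
  }
}
```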

sql/src/main/java/org/apache/druid/sql/calcite/rel/DruidOuterQueryRel.java
@@ -22,7 +22,6 @@
import com.fasterxml.jackson.core.JsonProcessingException;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.Iterables;
import org.apache.calcite.interpreter.BindableConvention;
import org.apache.calcite.plan.RelOptCluster;
import org.apache.calcite.plan.RelOptCost;
import org.apache.calcite.plan.RelOptPlanner;
@@ -154,18 +153,6 @@ public DruidQuery toDruidQueryForExplaining()
);
}

@Override
public DruidOuterQueryRel asBindable()
{
return new DruidOuterQueryRel(
getCluster(),
getTraitSet().plus(BindableConvention.INSTANCE),
sourceRel,
partialQuery,
getQueryMaker()
);
}

@Override
public DruidOuterQueryRel asDruidConvention()
{
sql/src/main/java/org/apache/druid/sql/calcite/rel/DruidQueryRel.java
@@ -21,7 +21,6 @@

import com.fasterxml.jackson.core.JsonProcessingException;
import com.google.common.base.Preconditions;
import org.apache.calcite.interpreter.BindableConvention;
import org.apache.calcite.plan.Convention;
import org.apache.calcite.plan.RelOptCluster;
import org.apache.calcite.plan.RelOptCost;
@@ -110,19 +109,6 @@ public DruidQuery toDruidQueryForExplaining()
return toDruidQuery(false);
}

@Override
public DruidQueryRel asBindable()
{
return new DruidQueryRel(
getCluster(),
getTraitSet().plus(BindableConvention.INSTANCE),
table,
druidTable,
getQueryMaker(),
partialQuery
);
}

@Override
public DruidQueryRel asDruidConvention()
{
40 changes: 1 addition & 39 deletions sql/src/main/java/org/apache/druid/sql/calcite/rel/DruidRel.java
@@ -19,12 +19,6 @@

package org.apache.druid.sql.calcite.rel;

import org.apache.calcite.DataContext;
import org.apache.calcite.interpreter.BindableRel;
import org.apache.calcite.interpreter.Node;
import org.apache.calcite.interpreter.Row;
import org.apache.calcite.interpreter.Sink;
import org.apache.calcite.linq4j.Enumerable;
import org.apache.calcite.plan.RelOptCluster;
import org.apache.calcite.plan.RelTraitSet;
import org.apache.calcite.rel.AbstractRelNode;
@@ -34,7 +28,7 @@
import javax.annotation.Nullable;
import java.util.List;

public abstract class DruidRel<T extends DruidRel> extends AbstractRelNode implements BindableRel
public abstract class DruidRel<T extends DruidRel> extends AbstractRelNode
{
private final QueryMaker queryMaker;

@@ -103,8 +97,6 @@ public boolean isValidDruidQuery()
*/
public abstract DruidQuery toDruidQueryForExplaining();

public abstract T asBindable();

public QueryMaker getQueryMaker()
{
return queryMaker;
@@ -121,34 +113,4 @@ public PlannerContext getPlannerContext()
* Get a list of names of datasources read by this DruidRel
*/
public abstract List<String> getDataSourceNames();

@Override
public Class<Object[]> getElementType()
{
return Object[].class;
}

@Override
public Node implement(InterpreterImplementor implementor)
{
final Sink sink = implementor.compiler.sink(this);
return () -> runQuery().accumulate(
sink,
(Sink theSink, Object[] in) -> {
try {
theSink.send(Row.of(in));
}
catch (InterruptedException e) {
throw new RuntimeException(e);
}
return theSink;
}
);
}

@Override
public Enumerable<Object[]> bind(final DataContext dataContext)
{
throw new UnsupportedOperationException();
}
}
sql/src/main/java/org/apache/druid/sql/calcite/rel/DruidSemiJoin.java
@@ -22,7 +22,6 @@
import com.fasterxml.jackson.core.JsonProcessingException;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.Iterables;
import org.apache.calcite.interpreter.BindableConvention;
import org.apache.calcite.plan.RelOptCluster;
import org.apache.calcite.plan.RelOptCost;
import org.apache.calcite.plan.RelOptPlanner;
@@ -156,21 +155,6 @@ public DruidQuery toDruidQueryForExplaining()
return left.toDruidQueryForExplaining();
}

@Override
public DruidSemiJoin asBindable()
{
return new DruidSemiJoin(
getCluster(),
getTraitSet().replace(BindableConvention.INSTANCE),
left,
RelOptRule.convert(right, BindableConvention.INSTANCE),
leftExpressions,
rightKeys,
maxSemiJoinRowsInMemory,
getQueryMaker()
);
}

@Override
public DruidSemiJoin asDruidConvention()
{