[SPARK-39819][SQL] DS V2 aggregate push down can work with Top N or Paging (Sort with expressions) (#525)

* [SPARK-39784][SQL] Put Literal values on the right side of the data source filter after translating Catalyst Expression to data source filter

Even though a literal value can appear on either side of a filter (both `a > 1` and `1 < a` are valid), after translating a Catalyst Expression to a data source filter we want the literal on the right side so it is easier for the data source to handle these filters. We already do this kind of normalization for V1 Filter; we should have the same behavior for V2 Filter.

Before this PR, filters with the literal value on the left side, e.g. `1 > a`, were kept as is. After this PR, we normalize them to `a < 1` so the data source doesn't need to check each filter (and do the flip).

We should follow V1 Filter's behavior and normalize the filters at Catalyst-Expression-to-DS-Filter translation time so the literal value ends up on the right side; then the data source doesn't need to check every single filter to figure out whether it needs to flip the sides.
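
A minimal sketch of the normalization idea (the `normalize` helper and the pattern cases are illustrative, not the actual Spark translation code):

```scala
import org.apache.spark.sql.catalyst.expressions._

// Flip a comparison whose literal is on the left before translating it,
// so `1 > a` becomes `a < 1`, `1 >= a` becomes `a <= 1`, and so on.
def normalize(expr: Expression): Expression = expr match {
  case GreaterThan(l: Literal, r: Attribute)        => LessThan(r, l)
  case GreaterThanOrEqual(l: Literal, r: Attribute) => LessThanOrEqual(r, l)
  case LessThan(l: Literal, r: Attribute)           => GreaterThan(r, l)
  case LessThanOrEqual(l: Literal, r: Attribute)    => GreaterThanOrEqual(r, l)
  case other => other // literal already on the right, or not a flippable comparison
}
```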

No.

New test.

Closes apache#37197 from huaxingao/flip.

Authored-by: huaxingao <huaxin_gao@apple.com>
Signed-off-by: huaxingao <huaxin_gao@apple.com>

* [SPARK-39836][SQL] Simplify V2ExpressionBuilder by extract common method

Currently, `V2ExpressionBuilder` has a lot of similar code; we can extract it into one common method.

With the common method we can simplify the implementation.

Simplify `V2ExpressionBuilder` by extracting a common method.
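
A rough sketch of the kind of helper such a refactoring introduces (the name `translateChildren` and its signature are assumptions, not the actual method added to `V2ExpressionBuilder`):

```scala
import org.apache.spark.sql.catalyst.expressions.Expression
import org.apache.spark.sql.connector.expressions.{Expression => V2Expression}

// Translate every child with the given translator; only if all children translate
// successfully, assemble the resulting V2 expression with the supplied builder.
def translateChildren(
    translate: Expression => Option[V2Expression],
    children: Expression*)(
    assemble: Seq[V2Expression] => V2Expression): Option[V2Expression] = {
  val translated = children.flatMap(c => translate(c))
  if (translated.length == children.length) Some(assemble(translated)) else None
}
```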

'No'.
Just update inner implementation.

N/A

Closes apache#37249 from beliefer/SPARK-39836.

Authored-by: Jiaan Geng <beliefer@163.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>

* [SPARK-39858][SQL] Remove unnecessary `AliasHelper` or `PredicateHelper` for some rules

While using `AliasHelper`, I found that some rules inherit it without actually using it.

This PR removes the unnecessary `AliasHelper` or `PredicateHelper` in the following cases:
- The rule inherits `AliasHelper` without using it. In this case, we can remove `AliasHelper` directly.
- The rule inherits `PredicateHelper` without using it. In this case, we can remove `PredicateHelper` directly.
- The rule inherits both `AliasHelper` and `PredicateHelper`. Since `PredicateHelper` already extends `AliasHelper`, we can remove `AliasHelper`.
- The rule inherits both `OperationHelper` and `PredicateHelper`. Since `OperationHelper` already extends `PredicateHelper`, we can remove `PredicateHelper`.
- The suite inherits both `PlanTest` and `PredicateHelper`. Since `PlanTest` already extends `PredicateHelper`, we can remove `PredicateHelper`.
- The suite inherits both `QueryTest` and `PredicateHelper`. Since `QueryTest` already extends `PredicateHelper`, we can remove `PredicateHelper`.

Remove unnecessary `AliasHelper` or `PredicateHelper` for some rules

'No'.
Just improve the inner implementation.

N/A

Closes apache#37272 from beliefer/SPARK-39858.

Authored-by: Jiaan Geng <beliefer@163.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>

* [SPARK-39784][SQL][FOLLOW-UP] Use BinaryComparison instead of Predicate (if) for type check

Follow-up to this [comment](apache#37197 (comment)).

code simplification

No

Existing test

Closes apache#37278 from huaxingao/followup.

Authored-by: huaxingao <huaxin_gao@apple.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>

* [SPARK-39909] Organize the check of push down information for JDBCV2Suite

This PR changes the check method from `check(one_large_string)` to `check(small_string1, small_string2, ...)`.

It helps us check the results individually and makes the code clearer.
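
For example (the DataFrame and the expected strings here are illustrative, not copied from the suite):

```scala
// Before: one long expected string; a failure produces a hard-to-read diff.
checkPushedInfo(df,
  "PushedAggregates: [MAX(SALARY)], PushedFilters: [], PushedGroupByExpressions: [DEPT]")

// After: each fragment is checked individually, so a failure points at the exact mismatch.
checkPushedInfo(df,
  "PushedAggregates: [MAX(SALARY)]",
  "PushedFilters: []",
  "PushedGroupByExpressions: [DEPT]")
```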

No.

Existing tests.

Closes apache#37342 from yabola/fix.

Authored-by: chenliang.lu <marssss2929@gmail.com>
Signed-off-by: huaxingao <huaxin_gao@apple.com>

* [SPARK-39961][SQL] DS V2 push-down translate Cast if the cast is safe

Currently, DS V2 push-down translates `Cast` only if ANSI mode is enabled.
In fact, if the cast is safe (e.g. casting a number to string, or an int to a long), we can translate it too.

This PR calls `Cast.canUpCast` so that we can safely translate `Cast` to the V2 `Cast`.

Note: the `SimplifyCasts` rule optimizes away some safe casts, e.g. int to long, so we may not always see the `Cast`.
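
A minimal sketch of the guard (`translateToV2` is a hypothetical helper; the real logic lives in `V2ExpressionBuilder`):

```scala
import org.apache.spark.sql.catalyst.expressions.{Cast, Expression}
import org.apache.spark.sql.connector.expressions.{Cast => V2Cast, Expression => V2Expression}
import org.apache.spark.sql.internal.SQLConf

// Translate a Catalyst Cast to a V2 Cast only when it cannot fail at runtime:
// either ANSI mode is on, or Cast.canUpCast guarantees the conversion is safe.
def translateCast(
    c: Cast,
    translateToV2: Expression => Option[V2Expression]): Option[V2Cast] = {
  if (SQLConf.get.ansiEnabled || Cast.canUpCast(c.child.dataType, c.dataType)) {
    translateToV2(c.child).map(child => new V2Cast(child, c.dataType))
  } else {
    None
  }
}
```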

Broaden the range of cases in which DS V2 pushes down `Cast`.

'Yes'.
`Cast` could be pushed down to data source in more cases.

Test cases updated.

Closes apache#37388 from beliefer/SPARK-39961.

Authored-by: Jiaan Geng <beliefer@163.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>

* [SPARK-38901][SQL] DS V2 supports push down misc functions

Currently, Spark has some misc functions. Please refer to
https://github.com/apache/spark/blob/2f8613f22c0750c00cf1dcfb2f31c431d8dc1be7/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/FunctionRegistry.scala#L688

These functions are shown below:
`AES_ENCRYPT`,
`AES_DECRYPT`,
`SHA1`,
`SHA2`,
`MD5`,
`CRC32`

Function|PostgreSQL|ClickHouse|H2|MySQL|Oracle|Redshift|Snowflake|DB2|Vertica|Exasol|SqlServer|Yellowbrick|Mariadb|Singlestore|
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
`AesEncrypt`|Yes|Yes|Yes|Yes|Yes|No|Yes|Yes|No|No|No|Yes|Yes|Yes|
`AesDecrypt`|Yes|Yes|Yes|Yes|Yes|No|Yes|Yes|No|No|No|Yes|Yes|Yes|
`Sha1`|Yes|Yes|Yes|Yes|Yes|Yes|Yes|Yes|Yes|Yes|Yes|Yes|Yes|Yes|
`Sha2`|Yes|Yes|Yes|Yes|Yes|Yes|Yes|Yes|Yes|Yes|Yes|Yes|Yes|Yes|
`Md5`|Yes|Yes|Yes|Yes|Yes|Yes|Yes|Yes|Yes|Yes|Yes|Yes|Yes|Yes|
`Crc32`|No|Yes|No|Yes|No|Yes|No|Yes|No|No|No|No|No|Yes|

DS V2 should support pushing down these misc functions.
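
An illustrative query against the H2 test catalog (the table and column names, and the digest literal, are assumptions); with this change the hash function can be compiled into the pushed-down filter instead of being evaluated by Spark.

```scala
// SHA1 over a binary column is translated to the V2 SHA1 function and pushed to the source.
val df = sql("SELECT id FROM h2.test.binary_tab WHERE sha1(b) = 'placeholder-digest'")
// Expected pushed info (illustrative):
//   PushedFilters: [B IS NOT NULL, SHA1(B) = 'placeholder-digest']
```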

DS V2 supports pushing down misc functions.

'No'.
New feature.

New tests.

Closes apache#37169 from chenzhx/misc.

Authored-by: chenzhx <chen@apache.org>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>

* [SPARK-39964][SQL] DS V2 pushdown should unify the translate path

Currently, DS V2 pushdown has two translation paths: `DataSourceStrategy.translateAggregate`, used to translate aggregate functions, and `V2ExpressionBuilder`, used to translate other functions and expressions. We can unify them.

After this PR, translation has a single code path, which makes the code easier for developers to write and read.
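
A hedged sketch of the unified path (a simplification, not the exact code: routing the aggregate through `V2ExpressionBuilder.build()` and collecting an `AggregateFunc` is assumed here):

```scala
import org.apache.spark.sql.catalyst.expressions.aggregate.AggregateExpression
import org.apache.spark.sql.catalyst.util.V2ExpressionBuilder
import org.apache.spark.sql.connector.expressions.aggregate.AggregateFunc

// Single code path: aggregates go through the same builder that already
// translates filters, sort orders, and other expressions.
def translateAggregate(agg: AggregateExpression): Option[AggregateFunc] = {
  new V2ExpressionBuilder(agg).build().collect { case af: AggregateFunc => af }
}
```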

Unify the translate path for DS V2 pushdown.

'No'.
Just update the inner implementation.

N/A

Closes apache#37391 from beliefer/SPARK-39964.

Authored-by: Jiaan Geng <beliefer@163.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>

* [SPARK-39819][SQL] DS V2 aggregate push down can work with Top N or Paging (Sort with expressions)

Currently, DS V2 aggregate push-down cannot work with DS V2 Top N push-down (`ORDER BY col LIMIT m`) or DS V2 Paging push-down (`ORDER BY col LIMIT m OFFSET n`).
If we can push down the aggregate together with Top N or Paging, performance will be better.

This PR only lets the aggregate be pushed down when the ORDER BY expressions are also GROUP BY expressions.

The ideas of this PR are:
1. When setting the expected outputs of `ScanBuilderHolder`, also hold the map from the expected outputs to the original expressions (which contain the original columns).
2. When we try to push down Top N or Paging, restore the original expressions for the `SortOrder`.

Let DS V2 aggregate push-down work with Top N or Paging (sort with group expressions), so that users get better performance.
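
An illustrative query in the style of the JDBCV2Suite tests (table and column names are assumptions): both the aggregate and the Top N are pushed because the sort key is also a GROUP BY expression.

```scala
// Aggregate push-down now combines with Top N push-down when the ORDER BY
// expressions are GROUP BY expressions.
val df = sql(
  "SELECT dept, max(salary) FROM h2.test.employee " +
  "GROUP BY dept ORDER BY dept LIMIT 2")
// Expected pushed info (illustrative):
//   PushedAggregates: [MAX(SALARY)], PushedGroupByExpressions: [DEPT],
//   PushedTopN: ORDER BY [DEPT ASC NULLS FIRST] LIMIT 2
```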

'No'.
New feature.

New test cases.

Closes apache#37320 from beliefer/SPARK-39819_new.

Authored-by: Jiaan Geng <beliefer@163.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>

* [SPARK-39929][SQL] DS V2 supports push down string functions (non ANSI)

**What changes were proposed in this pull request?**

Support more commonly used string functions:

BIT_LENGTH
CHAR_LENGTH
CONCAT

The mainstream databases' support for these functions is shown below.

Function | PostgreSQL | ClickHouse | H2 | MySQL | Oracle | Redshift | Presto | Teradata | Snowflake | DB2 | Vertica | Exasol | SqlServer | Yellowbrick | Impala | Mariadb | Druid | Pig | SQLite | Influxdata | Singlestore | ElasticSearch
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
BIT_LENGTH | Yes | Yes | Yes | Yes | Yes | No | No | No | No | Yes | Yes | Yes | No | Yes | No | Yes | No | No | No | No | No | Yes
CHAR_LENGTH | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | Yes
CONCAT | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes
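
An illustrative query (table and column names are assumptions) showing the new translation:

```scala
// CHAR_LENGTH is translated to a V2 string function and appears in the pushed filters.
val df = sql("SELECT name FROM h2.test.employee WHERE char_length(name) > 5")
// Expected pushed info (illustrative):
//   PushedFilters: [NAME IS NOT NULL, CHAR_LENGTH(NAME) > 5]
```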

**Why are the changes needed?**
DS V2 supports pushing down string functions.

**Does this PR introduce any user-facing change?**
'No'.
New feature.

**How was this patch tested?**
New tests.

Closes apache#37427 from zheniantoushipashi/SPARK-39929.

Authored-by: biaobiao.sun <1319027852@qq.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>

* [SPARK-38899][SQL][FOLLOWUP] Fix bug extract datetime in DS V2 pushdown

[SPARK-38899](apache#36663) added support for the extract function in the JDBC data source.
But the implementation is incorrect.
This PR adds a test case that fails without the fix.
The test case is shown below.
```
test("scan with filter push-down with date time functions")  {
    val df9 = sql("SELECT name FROM h2.test.datetime WHERE " +
      "dayofyear(date1) > 100 order by dayofyear(date1) limit 1")
    checkFiltersRemoved(df9)
    val expectedPlanFragment9 =
      "PushedFilters: [DATE1 IS NOT NULL, EXTRACT(DAY_OF_YEAR FROM DATE1) > 100], " +
      "PushedTopN: ORDER BY [EXTRACT(DAY_OF_YEAR FROM DATE1) ASC NULLS FIRST] LIMIT 1,"
    checkPushedInfo(df9, expectedPlanFragment9)
    checkAnswer(df9, Seq(Row("alex")))
  }
```

The failing test output is shown below.
```
"== Parsed Logical Plan ==
'GlobalLimit 1
+- 'LocalLimit 1
   +- 'Sort ['dayofyear('date1) ASC NULLS FIRST], true
      +- 'Project ['name]
         +- 'Filter ('dayofyear('date1) > 100)
            +- 'UnresolvedRelation [h2, test, datetime], [], false

== Analyzed Logical Plan ==
name: string
GlobalLimit 1
+- LocalLimit 1
   +- Project [name#x]
      +- Sort [dayofyear(date1#x) ASC NULLS FIRST], true
         +- Project [name#x, date1#x]
            +- Filter (dayofyear(date1#x) > 100)
               +- SubqueryAlias h2.test.datetime
                  +- RelationV2[NAME#x, DATE1#x, TIME1#x] h2.test.datetime test.datetime

== Optimized Logical Plan ==
Project [name#x]
+- RelationV2[NAME#x] test.datetime

== Physical Plan ==
*(1) Scan org.apache.spark.sql.execution.datasources.v2.jdbc.JDBCScan$$anon$145f6181a [NAME#x] PushedFilters: [DATE1 IS NOT NULL, EXTRACT(DAY_OF_YEAR FROM DATE1) > 100], PushedTopN: ORDER BY [org.apache.spark.sql.connector.expressions.Extract3b95fce9 ASC NULLS FIRST] LIMIT 1, ReadSchema: struct<NAME:string>

" did not contain "PushedFilters: [DATE1 IS NOT NULL, EXTRACT(DAY_OF_YEAR FROM DATE1) > 100], PushedTopN: ORDER BY [EXTRACT(DAY_OF_YEAR FROM DATE1) ASC NULLS FIRST] LIMIT 1,"
```

Fix an implementation bug.
The cause of the bug is that the `Extract` function does not implement the `toString` method used when pushing down to the JDBC data source.

'No'.
New feature.

New test case.

Closes apache#37469 from chenzhx/spark-master.

Authored-by: chenzhx <chen@apache.org>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>

* code update

Signed-off-by: huaxingao <huaxin_gao@apple.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
Co-authored-by: huaxingao <huaxin_gao@apple.com>
Co-authored-by: Jiaan Geng <beliefer@163.com>
Co-authored-by: chenliang.lu <marssss2929@gmail.com>
Co-authored-by: biaobiao.sun <1319027852@qq.com>
5 people authored and leejaywei committed Oct 18, 2022
1 parent e4197b6 commit 04f5c40
Showing 31 changed files with 847 additions and 476 deletions.
@@ -42,4 +42,9 @@ public Cast(Expression expression, DataType dataType) {

@Override
public Expression[] children() { return new Expression[]{ expression() }; }

@Override
public String toString() {
return "CAST(" + expression.describe() + " AS " + dataType.typeName() + ")";
}
}
@@ -18,6 +18,7 @@
package org.apache.spark.sql.connector.expressions;

import org.apache.spark.annotation.Evolving;
import org.apache.spark.sql.internal.connector.ToStringSQLBuilder;

import java.io.Serializable;

@@ -59,4 +60,10 @@ public Extract(String field, Expression source) {

@Override
public Expression[] children() { return new Expression[]{ source() }; }

@Override
public String toString() {
ToStringSQLBuilder builder = new ToStringSQLBuilder();
return builder.build(this);
}
}
@@ -340,6 +340,24 @@
* <li>Since version: 3.4.0</li>
* </ul>
* </li>
* <li>Name: <code>BIT_LENGTH</code>
* <ul>
* <li>SQL semantic: <code>BIT_LENGTH(src)</code></li>
* <li>Since version: 3.4.0</li>
* </ul>
* </li>
* <li>Name: <code>CHAR_LENGTH</code>
* <ul>
* <li>SQL semantic: <code>CHAR_LENGTH(src)</code></li>
* <li>Since version: 3.4.0</li>
* </ul>
* </li>
* <li>Name: <code>CONCAT</code>
* <ul>
* <li>SQL semantic: <code>CONCAT(col1, col2, ..., colN)</code></li>
* <li>Since version: 3.4.0</li>
* </ul>
* </li>
* <li>Name: <code>OVERLAY</code>
* <ul>
* <li>SQL semantic: <code>OVERLAY(string, replace, position[, length])</code></li>
@@ -364,6 +382,42 @@
* <li>Since version: 3.4.0</li>
* </ul>
* </li>
* <li>Name: <code>AES_ENCRYPT</code>
* <ul>
* <li>SQL semantic: <code>AES_ENCRYPT(expr, key[, mode[, padding]])</code></li>
* <li>Since version: 3.4.0</li>
* </ul>
* </li>
* <li>Name: <code>AES_DECRYPT</code>
* <ul>
* <li>SQL semantic: <code>AES_DECRYPT(expr, key[, mode[, padding]])</code></li>
* <li>Since version: 3.4.0</li>
* </ul>
* </li>
* <li>Name: <code>SHA1</code>
* <ul>
* <li>SQL semantic: <code>SHA1(expr)</code></li>
* <li>Since version: 3.4.0</li>
* </ul>
* </li>
* <li>Name: <code>SHA2</code>
* <ul>
* <li>SQL semantic: <code>SHA2(expr, bitLength)</code></li>
* <li>Since version: 3.4.0</li>
* </ul>
* </li>
* <li>Name: <code>MD5</code>
* <ul>
* <li>SQL semantic: <code>MD5(expr)</code></li>
* <li>Since version: 3.4.0</li>
* </ul>
* </li>
* <li>Name: <code>CRC32</code>
* <ul>
* <li>SQL semantic: <code>CRC32(expr)</code></li>
* <li>Since version: 3.4.0</li>
* </ul>
* </li>
* </ol>
* Note: SQL semantic conforms ANSI standard, so some expressions are not supported when ANSI off,
* including: add, subtract, multiply, divide, remainder, pmod.
@@ -149,6 +149,15 @@ public String build(Expression expr) {
case "DATE_ADD":
case "DATE_DIFF":
case "TRUNC":
case "AES_ENCRYPT":
case "AES_DECRYPT":
case "SHA1":
case "SHA2":
case "MD5":
case "CRC32":
case "BIT_LENGTH":
case "CHAR_LENGTH":
case "CONCAT":
return visitSQLFunction(name,
Arrays.stream(e.children()).map(c -> build(c)).toArray(String[]::new));
case "CASE_WHEN": {
@@ -2394,7 +2394,7 @@ class Analyzer(override val catalogManager: CatalogManager)
*
* Note: CTEs are handled in CTESubstitution.
*/
object ResolveSubquery extends Rule[LogicalPlan] with PredicateHelper {
object ResolveSubquery extends Rule[LogicalPlan] {
/**
* Resolve the correlated expressions in a subquery, as if the expressions live in the outer
* plan. All resolved outer references are wrapped in an [[OuterReference]]
@@ -2563,7 +2563,7 @@ class Analyzer(override val catalogManager: CatalogManager)
* those in a HAVING clause or ORDER BY clause. These expressions are pushed down to the
* underlying aggregate operator and then projected away after the original operator.
*/
object ResolveAggregateFunctions extends Rule[LogicalPlan] with AliasHelper {
object ResolveAggregateFunctions extends Rule[LogicalPlan] {
def apply(plan: LogicalPlan): LogicalPlan = plan.resolveOperatorsUpWithPruning(
_.containsPattern(AGGREGATE), ruleId) {
// Resolve aggregate with having clause to Filter(..., Aggregate()). Note, to avoid wrongly
@@ -401,7 +401,7 @@ case class Cost(card: BigInt, size: BigInt) {
*
* Filters (2) and (3) are not implemented.
*/
object JoinReorderDPFilters extends PredicateHelper {
object JoinReorderDPFilters {
/**
* Builds join graph information to be used by the filtering strategies.
* Currently, it builds the sets of star/non-star joins.
@@ -751,7 +751,7 @@ object LimitPushDown extends Rule[LogicalPlan] {
* safe to pushdown Filters and Projections through it. Filter pushdown is handled by another
* rule PushDownPredicates. Once we add UNION DISTINCT, we will not be able to pushdown Projections.
*/
object PushProjectionThroughUnion extends Rule[LogicalPlan] with PredicateHelper {
object PushProjectionThroughUnion extends Rule[LogicalPlan] {

/**
* Maps Attributes from the left side to the corresponding Attribute on the right side.
@@ -1525,7 +1525,7 @@ object PruneFilters extends Rule[LogicalPlan] with PredicateHelper {
* This rule improves performance of predicate pushdown for cascading joins such as:
* Filter-Join-Join-Join. Most predicates can be pushed down in a single pass.
*/
object PushDownPredicates extends Rule[LogicalPlan] with PredicateHelper {
object PushDownPredicates extends Rule[LogicalPlan] {
def apply(plan: LogicalPlan): LogicalPlan = plan.transformWithPruning(
_.containsAnyPattern(FILTER, JOIN)) {
CombineFilters.applyLocally
@@ -109,7 +109,7 @@ object ConstantFolding extends Rule[LogicalPlan] {
* - Using this mapping, replace occurrence of the attributes with the corresponding constant values
* in the AND node.
*/
object ConstantPropagation extends Rule[LogicalPlan] with PredicateHelper {
object ConstantPropagation extends Rule[LogicalPlan] {
def apply(plan: LogicalPlan): LogicalPlan = plan.transformUpWithPruning(
_.containsAllPatterns(LITERAL, FILTER), ruleId) {
case f: Filter =>
@@ -532,7 +532,7 @@ object SimplifyBinaryComparison
/**
* Simplifies conditional expressions (if / case).
*/
object SimplifyConditionals extends Rule[LogicalPlan] with PredicateHelper {
object SimplifyConditionals extends Rule[LogicalPlan] {
private def falseOrNullLiteral(e: Expression): Boolean = e match {
case FalseLiteral => true
case Literal(null, _) => true
@@ -617,7 +617,7 @@ object SimplifyConditionals extends Rule[LogicalPlan] with PredicateHelper {
/**
* Push the foldable expression into (if / case) branches.
*/
object PushFoldableIntoBranches extends Rule[LogicalPlan] with PredicateHelper {
object PushFoldableIntoBranches extends Rule[LogicalPlan] {

// To be conservative here: it's only a guaranteed win if all but at most only one branch
// end up being not foldable.
@@ -29,7 +29,7 @@ import org.apache.spark.sql.errors.QueryCompilationErrors
import org.apache.spark.sql.execution.datasources.v2.{DataSourceV2Relation, DataSourceV2ScanRelation}
import org.apache.spark.sql.internal.SQLConf

trait OperationHelper extends AliasHelper with PredicateHelper {
trait OperationHelper extends PredicateHelper {
import org.apache.spark.sql.catalyst.optimizer.CollapseProject.canCollapseExpressions

type ReturnType =
@@ -119,7 +119,7 @@ trait OperationHelper extends AliasHelper with PredicateHelper {
* [[org.apache.spark.sql.catalyst.expressions.Alias Aliases]] are in-lined/substituted if
* necessary.
*/
object PhysicalOperation extends OperationHelper with PredicateHelper {
object PhysicalOperation extends OperationHelper {
override protected def legacyMode: Boolean = true
}

@@ -128,7 +128,7 @@ object PhysicalOperation extends OperationHelper with PredicateHelper {
* operations even if they are non-deterministic, as long as they satisfy the
* requirement of CollapseProject and CombineFilters.
*/
object ScanOperation extends OperationHelper with PredicateHelper {
object ScanOperation extends OperationHelper {
override protected def legacyMode: Boolean = false
}

@@ -22,10 +22,7 @@ import org.apache.spark.sql.catalyst.dsl.expressions._
import org.apache.spark.sql.catalyst.plans.PlanTest
import org.apache.spark.sql.types.BooleanType

class ExtractPredicatesWithinOutputSetSuite
extends SparkFunSuite
with PredicateHelper
with PlanTest {
class ExtractPredicatesWithinOutputSetSuite extends SparkFunSuite with PlanTest {
private val a = AttributeReference("A", BooleanType)(exprId = ExprId(1))
private val b = AttributeReference("B", BooleanType)(exprId = ExprId(2))
private val c = AttributeReference("C", BooleanType)(exprId = ExprId(3))
@@ -28,7 +28,7 @@ import org.apache.spark.sql.catalyst.rules._
import org.apache.spark.sql.internal.SQLConf
import org.apache.spark.sql.types.{IntegerType, StructField, StructType}

class BinaryComparisonSimplificationSuite extends PlanTest with PredicateHelper {
class BinaryComparisonSimplificationSuite extends PlanTest {

object Optimize extends RuleExecutor[LogicalPlan] {
val batches =
@@ -28,7 +28,7 @@ import org.apache.spark.sql.catalyst.plans.logical._
import org.apache.spark.sql.catalyst.rules._
import org.apache.spark.sql.types.BooleanType

class BooleanSimplificationSuite extends PlanTest with ExpressionEvalHelper with PredicateHelper {
class BooleanSimplificationSuite extends PlanTest with ExpressionEvalHelper {

object Optimize extends RuleExecutor[LogicalPlan] {
val batches =
@@ -27,7 +27,7 @@ import org.apache.spark.sql.catalyst.plans.logical._
import org.apache.spark.sql.catalyst.rules._


class EliminateSubqueryAliasesSuite extends PlanTest with PredicateHelper {
class EliminateSubqueryAliasesSuite extends PlanTest {

object Optimize extends RuleExecutor[LogicalPlan] {
val batches = Batch("EliminateSubqueryAliases", Once, EliminateSubqueryAliases) :: Nil
@@ -32,8 +32,7 @@ import org.apache.spark.sql.types.{BooleanType, IntegerType, StringType, Timesta
import org.apache.spark.unsafe.types.CalendarInterval


class PushFoldableIntoBranchesSuite
extends PlanTest with ExpressionEvalHelper with PredicateHelper {
class PushFoldableIntoBranchesSuite extends PlanTest with ExpressionEvalHelper {

object Optimize extends RuleExecutor[LogicalPlan] {
val batches = Batch("PushFoldableIntoBranches", FixedPoint(50),
@@ -25,7 +25,7 @@ import org.apache.spark.sql.catalyst.plans.logical._
import org.apache.spark.sql.catalyst.rules._
import org.apache.spark.sql.types.MetadataBuilder

class RemoveRedundantAliasAndProjectSuite extends PlanTest with PredicateHelper {
class RemoveRedundantAliasAndProjectSuite extends PlanTest {

object Optimize extends RuleExecutor[LogicalPlan] {
val batches = Batch(
@@ -28,7 +28,7 @@ import org.apache.spark.sql.catalyst.rules._
import org.apache.spark.sql.types.{BooleanType, IntegerType}


class SimplifyConditionalSuite extends PlanTest with ExpressionEvalHelper with PredicateHelper {
class SimplifyConditionalSuite extends PlanTest with ExpressionEvalHelper {

object Optimize extends RuleExecutor[LogicalPlan] {
val batches = Batch("SimplifyConditionals", FixedPoint(50),
