[SPARK-13306] [SQL] uncorrelated scalar subquery #11190
Changes from all commits
0665a69
236ac88
016c36c
a4bae33
d0974cf
3a8f08d
7596173
0034172
e082845
@@ -80,6 +80,7 @@ class Analyzer(
      ResolveGenerate ::
      ResolveFunctions ::
      ResolveAliases ::
      ResolveSubquery ::
      ResolveWindowOrder ::
      ResolveWindowFrame ::
      ResolveNaturalJoin ::

@@ -120,7 +121,14 @@ class Analyzer(
            withAlias.getOrElse(relation)
          }
          substituted.getOrElse(u)
        case other =>
          // This can't be done in ResolveSubquery because that does not know the CTE.
          other transformExpressions {
            case e: SubqueryExpression =>
              e.withNewPlan(substituteCTE(e.query, cteRelations))
          }
      }
    }

Review comments on `case other =>`:
- quick comment on why this isn't in ResolveSubquery
- done
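For context, a hedged example of the kind of query this hunk is about (the table names and the `sqlContext` handle are illustrative, not taken from the PR): the CTE is only known to CTESubstitution, so a reference to it from inside a scalar subquery has to be rewritten here rather than in ResolveSubquery.

```scala
// Illustrative only: a CTE referenced from inside an uncorrelated scalar subquery.
// CTESubstitution rewrites `w` inside the SubqueryExpression's plan, because
// ResolveSubquery never sees the CTE definitions.
sqlContext.sql(
  """WITH w AS (SELECT b FROM s)
    |SELECT a, (SELECT max(b) FROM w) AS max_b
    |FROM t
  """.stripMargin)
```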
@@ -693,6 +701,30 @@ class Analyzer(
    }
  }

  /**
   * This rule resolves subqueries inside expressions.
   *
   * Note: CTEs are handled in CTESubstitution.
   */
  object ResolveSubquery extends Rule[LogicalPlan] with PredicateHelper {

    private def hasSubquery(e: Expression): Boolean = {
      e.find(_.isInstanceOf[SubqueryExpression]).isDefined
    }

    private def hasSubquery(q: LogicalPlan): Boolean = {
      q.expressions.exists(hasSubquery)
    }

    def apply(plan: LogicalPlan): LogicalPlan = plan resolveOperators {
      case q: LogicalPlan if q.childrenResolved && hasSubquery(q) =>
        q transformExpressions {
          case e: SubqueryExpression if !e.query.resolved =>
            e.withNewPlan(execute(e.query))
        }
    }
  }
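A hedged sketch of what the rule achieves (`parser` and `analyzer` are illustrative names, not identifiers from this diff): after analysis, every SubqueryExpression carries a plan that has itself been resolved, because the rule re-runs the analyzer (`execute`) on the nested query.

```scala
// Illustrative only; assumes a CatalystQl parser and an Analyzer instance are in scope.
val analyzed = analyzer.execute(parser.parsePlan("select a, (select max(b) from s) ss from t"))
analyzed.flatMap(_.expressions)
  .flatMap(_.collect { case e: SubqueryExpression => e })
  .foreach(e => assert(e.query.resolved))  // nested plans were resolved recursively
```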

  /**
   * Turns projections that contain aggregate expressions into aggregations.
   */
@@ -0,0 +1,82 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.spark.sql.catalyst.expressions

import org.apache.spark.sql.catalyst.analysis.TypeCheckResult
import org.apache.spark.sql.catalyst.plans.QueryPlan
import org.apache.spark.sql.catalyst.plans.logical.{LogicalPlan, Subquery}
import org.apache.spark.sql.types.DataType

/**
 * An interface for subqueries that are used in expressions.
 */
abstract class SubqueryExpression extends LeafExpression {

  /**
   * The logical plan of the query.
   */
  def query: LogicalPlan

Review comments on `def query`:
- why is this needed?
- This is a helper function used in Analyzer and Optimizer; otherwise we need to do type conversion.
- This is the base class for both logical plan and physical plan, kind of weird. This is to make generateTreeString work in QueryPlan.
- Analyzer and Optimizer only apply to logical plans, right?
- yes
  /**
   * Either a logical plan or a physical plan. The generated tree string (explain output) uses
   * this field to explain the subquery.
   */
  def plan: QueryPlan[_]

  /**
   * Updates the query with a new logical plan.
   */
  def withNewPlan(plan: LogicalPlan): SubqueryExpression
}

Review comments on `withNewPlan`:
- scala doc
- can't this just be in the logical plan itself?
- This should be …
- I think you can just remove this and move it into the logical subquery expression, since it's only used for the logical plan anyway?
- Then should we have LogicalSubqueryExpression?
- I meant ScalarSubquery. That's already the one, isn't it?
- We will have ExistsSubquery and InSubquery shortly (or in the next release).
/**
 * A subquery that will return only one row and one column.
 *
 * This will be converted into [[execution.ScalarSubquery]] during physical planning.
 *
 * Note: `exprId` is used to have a unique name in the explain string output.
 */
case class ScalarSubquery(
    query: LogicalPlan,
    exprId: ExprId = NamedExpression.newExprId)
  extends SubqueryExpression with Unevaluable {

  override def plan: LogicalPlan = Subquery(toString, query)

  override lazy val resolved: Boolean = query.resolved

  override def dataType: DataType = query.schema.fields.head.dataType

  override def checkInputDataTypes(): TypeCheckResult = {
    if (query.schema.length != 1) {
      TypeCheckResult.TypeCheckFailure("Scalar subquery must return only one column, but got " +
        query.schema.length.toString)
    } else {
      TypeCheckResult.TypeCheckSuccess
    }
  }

  override def foldable: Boolean = false
  override def nullable: Boolean = true

  override def withNewPlan(plan: LogicalPlan): ScalarSubquery = ScalarSubquery(plan, exprId)

  override def toString: String = s"subquery#${exprId.id}"

  // TODO: support sql()
}
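A hedged illustration of the contract (the `singleColumnPlan` name is made up for this sketch and stands for any resolved single-column LogicalPlan): the expression's data type comes from the subquery's only output column, and `checkInputDataTypes` rejects plans with any other column count.

```scala
// Illustrative only; singleColumnPlan is an assumed resolved LogicalPlan with one output column.
val scalar = ScalarSubquery(singleColumnPlan)
assert(scalar.checkInputDataTypes().isSuccess)
assert(scalar.dataType == singleColumnPlan.schema.fields.head.dataType)
// A two-column plan would instead produce a TypeCheckFailure during analysis.
```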
@@ -22,6 +22,7 @@ import org.apache.spark.sql.catalyst.analysis._
import org.apache.spark.sql.catalyst.expressions._
import org.apache.spark.sql.catalyst.plans.PlanTest
import org.apache.spark.sql.catalyst.plans.logical._
import org.apache.spark.sql.types.BooleanType
import org.apache.spark.unsafe.types.CalendarInterval

class CatalystQlSuite extends PlanTest {

@@ -201,4 +202,10 @@ class CatalystQlSuite extends PlanTest {
    parser.parsePlan("select sum(product + 1) over (partition by (product + (1)) order by 2) " +
      "from windowData")
  }

  test("subquery") {
    parser.parsePlan("select (select max(b) from s) ss from t")
    parser.parsePlan("select * from t where a = (select b from s)")
    parser.parsePlan("select * from t group by g having a > (select b from s)")
  }
}

Review comments on the new test:
- The only thing we are testing here is that things don't go really, really wrong. I'd prefer it if we tested the plan as well.
- Since plan checking is too easy to break, I added tests for the plans but finally removed them.
- Ok, that makes sense.
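Since the exact plan-equality checks were dropped in review as too brittle, a middle-ground sketch (illustrative, not part of the PR) could assert only that a SubqueryExpression appears somewhere in the parsed plan:

```scala
// Hypothetical stricter check, not in this PR: the parsed plan should contain a
// SubqueryExpression somewhere in its expressions.
val plan = parser.parsePlan("select (select max(b) from s) ss from t")
assert(plan.flatMap(_.expressions).exists(_.find(_.isInstanceOf[SubqueryExpression]).isDefined))
```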
@@ -20,6 +20,8 @@ package org.apache.spark.sql.execution
import java.util.concurrent.atomic.AtomicBoolean

import scala.collection.mutable.ArrayBuffer
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

import org.apache.spark.Logging
import org.apache.spark.rdd.{RDD, RDDOperationScope}

@@ -31,6 +33,7 @@ import org.apache.spark.sql.catalyst.plans.QueryPlan
import org.apache.spark.sql.catalyst.plans.physical._
import org.apache.spark.sql.execution.metric.{LongSQLMetric, SQLMetric}
import org.apache.spark.sql.types.DataType
import org.apache.spark.util.ThreadUtils

/**
 * The base class for physical operators.
@@ -112,16 +115,58 @@ abstract class SparkPlan extends QueryPlan[SparkPlan] with Logging with Serializ
  final def execute(): RDD[InternalRow] = {
    RDDOperationScope.withScope(sparkContext, nodeName, false, true) {
      prepare()
      waitForSubqueries()
      doExecute()
    }
  }

  // All the subqueries and the Futures of their results.
  @transient private val queryResults = ArrayBuffer[(ScalarSubquery, Future[Array[InternalRow]])]()

  /**
   * Collects all the subqueries and creates a Future to take the first two rows of each of them.
   */
  protected def prepareSubqueries(): Unit = {
    val allSubqueries = expressions.flatMap(_.collect { case e: ScalarSubquery => e })
    allSubqueries.asInstanceOf[Seq[ScalarSubquery]].foreach { e =>
      val futureResult = Future {
        // We only need the first row, but try to take two rows so we can throw an exception if
        // more than one row is returned.
        e.executedPlan.executeTake(2)
      }(SparkPlan.subqueryExecutionContext)
      queryResults += e -> futureResult
    }
  }

Review comments on the `allSubqueries` line:
- We could move this into …
- It is a little bit different from that; I'd like to duplicate it here.

  /**
   * Waits for all the subqueries to finish and updates the results.
   */
  protected def waitForSubqueries(): Unit = {
    // fill in the results of the subqueries
    queryResults.foreach {
      case (e, futureResult) =>
        val rows = Await.result(futureResult, Duration.Inf)
        if (rows.length > 1) {
          sys.error(s"more than one row returned by a subquery used as an expression:\n${e.plan}")
        }
        if (rows.length == 1) {
          assert(rows(0).numFields == 1, "Analyzer should make sure this only returns one column")
          e.updateResult(rows(0).get(0, e.dataType))
        } else {
          // If no rows are returned, the result should be null.
          e.updateResult(null)
        }
    }
    queryResults.clear()
  }

Review comments on the assert:
- The analyzer checks this, right?
- Nevermind.

Review comments on `e.updateResult(...)`:
- Why don't we replace the …
- The ScalarSubquery instances could be class members of the current plan and that field could be immutable, so we could not replace them.

  /**
   * Prepare a SparkPlan for execution. It's idempotent.
   */
  final def prepare(): Unit = {
    if (prepareCalled.compareAndSet(false, true)) {
      doPrepare()
      prepareSubqueries()
      children.foreach(_.prepare())
    }
  }
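For readers unfamiliar with the pattern, here is a standalone sketch of the prepare/wait flow above. The names and the toy "subqueries" are made up for illustration; only the one-Future-per-subquery plus blocking-Await structure mirrors the code in this hunk.

```scala
import java.util.concurrent.Executors

import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration.Duration

object SubqueryPatternSketch {
  // Stand-in for SparkPlan.subqueryExecutionContext.
  private val pool = ExecutionContext.fromExecutorService(Executors.newCachedThreadPool())

  def main(args: Array[String]): Unit = {
    // Stand-ins for uncorrelated scalar subqueries: each yields at most one "row".
    val subqueries: Seq[() => Array[Any]] = Seq(() => Array[Any](42), () => Array[Any]())

    // prepareSubqueries(): start every subquery eagerly and remember its Future.
    val futures = subqueries.map(q => Future(q())(pool))

    // waitForSubqueries(): block on each Future; more than one row is an error,
    // zero rows become null, exactly one row becomes the scalar result.
    val results = futures.map { f =>
      val rows = Await.result(f, Duration.Inf)
      require(rows.length <= 1, "more than one row returned by a subquery used as an expression")
      rows.headOption.orNull
    }

    println(results)  // List(42, null)
    pool.shutdown()
  }
}
```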
@@ -231,6 +276,11 @@ abstract class SparkPlan extends QueryPlan[SparkPlan] with Logging with Serializ
    }
  }

object SparkPlan {
  private[execution] val subqueryExecutionContext = ExecutionContext.fromExecutorService(
    ThreadUtils.newDaemonCachedThreadPool("subquery", 16))
}

Review comments on `subqueryExecutionContext`:
- What threadpool are broadcasts done on? Should it be the same?
- This could be refactored later, to use the same thread pool for all of them.

private[sql] trait LeafNode extends SparkPlan {
  override def children: Seq[SparkPlan] = Nil
  override def producedAttributes: AttributeSet = outputSet
@@ -73,9 +73,10 @@ trait CodegenSupport extends SparkPlan {
  /**
   * Returns Java source code to process the rows from upstream.
   */
- def produce(ctx: CodegenContext, parent: CodegenSupport): String = {
+ final def produce(ctx: CodegenContext, parent: CodegenSupport): String = {
    this.parent = parent
    ctx.freshNamePrefix = variablePrefix
+   waitForSubqueries()
    doProduce(ctx)
  }

Review comments on the added `waitForSubqueries()` call:
- why is this needed? shouldn't SparkPlan.execute already call waitForSubqueries?
- This is needed for whole-stage codegen; those operators will not call execute().
- ok got it. this is fairly hacky ...
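A toy model of that review exchange (simplified names, not Spark's actual classes): under whole-stage codegen, only the top node goes through execute(), while fused child operators are driven through produce()/consume(), so the waitForSubqueries() call in execute() would never run for them unless produce() waits itself.

```scala
trait ToyPlan {
  def prepare(): Unit = ()            // would start subquery futures
  def waitForSubqueries(): Unit = ()  // would block on their results
  final def execute(): Unit = { prepare(); waitForSubqueries(); doExecute() }
  def doExecute(): Unit
}

trait ToyCodegen extends ToyPlan {
  // Fused path: the parent asks for generated code via produce(), so execute(),
  // and with it the waitForSubqueries() call above, is never reached for this operator.
  final def produce(): String = { waitForSubqueries(); doProduce() }
  def doProduce(): String
}

class ToyWholeStageCodegen(child: ToyCodegen) extends ToyPlan {
  // Only this top-level node goes through execute(); the child is driven via produce().
  def doExecute(): Unit = println(child.produce())
}
```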
@@ -101,7 +102,7 @@ trait CodegenSupport extends SparkPlan {
  /**
   * Consume the columns generated from the current SparkPlan, and call its parent.
   */
- def consume(ctx: CodegenContext, input: Seq[ExprCode], row: String = null): String = {
+ final def consume(ctx: CodegenContext, input: Seq[ExprCode], row: String = null): String = {
    if (input != null) {
      assert(input.length == output.length)
    }
Review comments on the pull request:
- This might sound exceedingly dumb, but I cannot find ScalarSubquery or SubqueryExpression. Are they already in the code base? Or did you create this branch on top of another branch?
- Nevermind, I just found the other PR...
- I missed a file, sorry.