Closed
Changes from all commits (76 commits)
0fecbd9 [SPARK-42746][SQL] Add the LIST_AGG() aggregate function (Hisoka-X, Aug 5, 2023)
99cf932 update (Hisoka-X, Aug 9, 2023)
db513cf update (Hisoka-X, Aug 9, 2023)
68ed739 format (Hisoka-X, Aug 9, 2023)
c56f291 Merge branch 'master' into SPARK-42746_listagg_function (Hisoka-X, Aug 9, 2023)
d8460c8 format (Hisoka-X, Aug 9, 2023)
864f658 fix review (Hisoka-X, Aug 10, 2023)
1de7ffe Merge branch 'master' into SPARK-42746_listagg_function (Hisoka-X, Aug 13, 2023)
274a96b Merge branch 'master' into SPARK-42746_listagg_function (Hisoka-X, Aug 22, 2023)
d23654c format (Hisoka-X, Aug 22, 2023)
8347939 Merge branch 'master_' into SPARK-42746_listagg_function (Hisoka-X, Aug 23, 2023)
412b8e7 Merge branch 'master' into SPARK-42746_listagg_function (Hisoka-X, Sep 3, 2023)
90b2f2a Merge branch 'master' into SPARK-42746_listagg_function (Hisoka-X, Sep 4, 2023)
20b45dc Merge branch 'master' into SPARK-42746_listagg_function (Hisoka-X, Sep 5, 2023)
7c912d5 Merge branch 'master' into SPARK-42746_listagg_function (Hisoka-X, Sep 19, 2023)
75b234a Merge branch 'master_' into SPARK-42746_listagg_function (Hisoka-X, Oct 9, 2023)
f048dc9 update (Hisoka-X, Oct 9, 2023)
77825a7 update (Hisoka-X, Oct 9, 2023)
ce093b5 update (Hisoka-X, Oct 11, 2023)
f2a1fb7 Merge branch 'master_' into SPARK-42746_listagg_function (Hisoka-X, Oct 23, 2023)
19fdafc update (Hisoka-X, Oct 23, 2023)
9ae3872 update (Hisoka-X, Oct 23, 2023)
8e3aae3 update (Hisoka-X, Oct 23, 2023)
8a3f705 Merge branch 'master_' into SPARK-42746_listagg_function (Hisoka-X, Oct 23, 2023)
c0f1496 update (Hisoka-X, Oct 23, 2023)
2fcd373 update (Hisoka-X, Oct 23, 2023)
bd088a4 Update sql/api/src/main/antlr4/org/apache/spark/sql/catalyst/parser/S… (Hisoka-X, Oct 24, 2023)
269aca3 Merge branch 'master' into SPARK-42746_listagg_function (Hisoka-X, Oct 24, 2023)
dd0bfaf update (Hisoka-X, Oct 24, 2023)
433e49b Merge remote-tracking branch 'origin/SPARK-42746_listagg_function' in… (Hisoka-X, Oct 24, 2023)
6f54ab0 update (Hisoka-X, Oct 24, 2023)
b0dc017 update (Hisoka-X, Oct 24, 2023)
87999f4 update (Hisoka-X, Oct 24, 2023)
a0f0e5d update (Hisoka-X, Oct 24, 2023)
8ba2466 update (Hisoka-X, Oct 25, 2023)
ed46b31 Merge branch 'master_' into SPARK-42746_listagg_function (Hisoka-X, Nov 27, 2023)
885e812 update (Hisoka-X, Nov 27, 2023)
8ba8566 Merge branch 'refs/heads/master' into SPARK-42746-add-listagg (mikhailnik-db, Oct 30, 2024)
9e70cc5 [SPARK-42746] upgrade the old branch after merge (mikhailnik-db, Oct 31, 2024)
3382056 [SPARK-42746] add binary type support, type validation and set defaul… (mikhailnik-db, Oct 31, 2024)
0f64921 [SPARK-42746] add more validation errors (mikhailnik-db, Nov 1, 2024)
d69ad1f [SPARK-42746] set default delimiter to null (mikhailnik-db, Nov 1, 2024)
6f74b67 [SPARK-42746] add multi expression ordering support (mikhailnik-db, Nov 1, 2024)
5050630 [SPARK-42746] add scala functions (mikhailnik-db, Nov 4, 2024)
b75855d [SPARK-42746] add string_agg alias (mikhailnik-db, Nov 4, 2024)
3f2e296 Merge branch 'refs/heads/master' into SPARK-42746-add-listagg (mikhailnik-db, Nov 4, 2024)
14ee65e [SPARK-42746] return licence to SupportsOrderingWithinGroup (mikhailnik-db, Nov 4, 2024)
e638d29 [SPARK-42746] add collation tests (mikhailnik-db, Nov 4, 2024)
dfcc112 [SPARK-42746] add listagg to excludedDataFrameFunctions (mikhailnik-db, Nov 4, 2024)
3cbe9e9 [SPARK-42746] add string_agg to expected_missing_in_py (mikhailnik-db, Nov 4, 2024)
7105c7c [SPARK-42746] add follow-up ticket (mikhailnik-db, Nov 4, 2024)
27cbd03 [SPARK-42746] fix formating (mikhailnik-db, Nov 5, 2024)
d514787 [SPARK-42746] remove functions with columnName (mikhailnik-db, Nov 5, 2024)
5fd9a30 [SPARK-42746] reformat file (mikhailnik-db, Nov 5, 2024)
516567a [SPARK-42746] listagg with columnName from tests (mikhailnik-db, Nov 5, 2024)
27f445d [SPARK-42746] fix java style (mikhailnik-db, Nov 5, 2024)
fc722df [SPARK-42746] improve doc and errors (mikhailnik-db, Nov 5, 2024)
e3b1a26 [SPARK-42746] add golden files for listagg (mikhailnik-db, Nov 14, 2024)
ad49fcf [SPARK-42746] remove InverseDistributionFunction (mikhailnik-db, Nov 14, 2024)
9745a83 Merge branch 'refs/heads/master' into SPARK-42746-add-listagg (mikhailnik-db, Nov 15, 2024)
0aca46c [SPARK-42746] fix golden file and small refactoring (mikhailnik-db, Nov 15, 2024)
ca5b13a [SPARK-42746] fix ThriftServerQueryTestSuite (mikhailnik-db, Nov 19, 2024)
5f37ae3 Merge branch 'refs/heads/master' into SPARK-42746-add-listagg (mikhailnik-db, Nov 19, 2024)
056ec61 [SPARK-42746] fix after merge (mikhailnik-db, Nov 19, 2024)
cb5ad3e [SPARK-42746] add comments to sortBuffer (mikhailnik-db, Nov 21, 2024)
07dfd82 [SPARK-42746] return SupportsOrderingWithinGroup check (mikhailnik-db, Nov 21, 2024)
be68e20 [SPARK-42746] remove test duplicates (mikhailnik-db, Nov 21, 2024)
6a9c1fe [SPARK-42746] move functionAndOrderExpressionMismatchError to CheckAn… (mikhailnik-db, Nov 21, 2024)
0efedf3 [SPARK-42746] FUNCTION_AND_ORDER_EXPRESSION_MISMATCH -> INVALID_WITHI… (mikhailnik-db, Nov 22, 2024)
811c36c [SPARK-42746] add trim collation tests (mikhailnik-db, Nov 22, 2024)
9c5bd3d [SPARK-42746] adjust error message (mikhailnik-db, Nov 22, 2024)
e6d9c70 [SPARK-42746] make SortOrder a child of listagg (mikhailnik-db, Nov 25, 2024)
0bbd8af [SPARK-42746] fix error-conditions (mikhailnik-db, Nov 25, 2024)
d96ac1e [SPARK-42746] deduplicate concat logic (mikhailnik-db, Nov 26, 2024)
aee0ac5 [SPARK-42746] add type safety in getDelimiterValue (mikhailnik-db, Nov 26, 2024)
91b759f [SPARK-42746] fix java indent (mikhailnik-db, Nov 28, 2024)
@@ -135,27 +135,57 @@ public static byte[] subStringSQL(byte[] bytes, int pos, int len) {
return Arrays.copyOfRange(bytes, start, end);
}

+ /**
+ * Concatenate multiple byte arrays into one.
+ * If one of the inputs is null then null will be returned.
+ *
+ * @param inputs byte arrays to concatenate
+ * @return the concatenated byte array or null if one of the arguments is null
+ */
public static byte[] concat(byte[]... inputs) {
+ return concatWS(EMPTY_BYTE, inputs);
+ }
+
+ /**
+ * Concatenate multiple byte arrays with a given delimiter.
+ * If the delimiter or one of the inputs is null then null will be returned.
+ *
+ * @param delimiter byte array to be placed between each input
+ * @param inputs byte arrays to concatenate
+ * @return the concatenated byte array or null if one of the arguments is null
+ */
+ public static byte[] concatWS(byte[] delimiter, byte[]... inputs) {
Reviewer (Contributor): can you please add a comment saying what this function is doing?
+ if (delimiter == null) {
+ return null;
+ }
// Compute the total length of the result
long totalLength = 0;
for (byte[] input : inputs) {
if (input != null) {
- totalLength += input.length;
+ totalLength += input.length + delimiter.length;
} else {
return null;
}
}
-
+ if (totalLength > 0) totalLength -= delimiter.length;
// Allocate a new byte array, and copy the inputs one by one into it
final byte[] result = new byte[Ints.checkedCast(totalLength)];
int offset = 0;
- for (byte[] input : inputs) {
+ for (int i = 0; i < inputs.length; i++) {
+ byte[] input = inputs[i];
int len = input.length;
Platform.copyMemory(
Reviewer (Contributor): this seems copied from L154 above, please dedup into one place?
Author: I didn't want to accidentally change existing behavior or performance, so I thought a little copy-paste was justified in this isolated code. But I was probably worrying too much. Removed.
input, Platform.BYTE_ARRAY_OFFSET,
result, Platform.BYTE_ARRAY_OFFSET + offset,
len);
offset += len;
+ if (delimiter.length > 0 && i < inputs.length - 1) {
+ Platform.copyMemory(
+ delimiter, Platform.BYTE_ARRAY_OFFSET,
+ result, Platform.BYTE_ARRAY_OFFSET + offset,
+ delimiter.length);
+ offset += delimiter.length;
+ }
}
return result;
}
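As a quick sanity check on the new semantics (a sketch for illustration only, not part of the PR; it mirrors the tests below): the delimiter is placed between inputs but never after the last one, and a null input or a null delimiter collapses the whole result to null.

import org.apache.spark.unsafe.types.ByteArray

val sep = Array[Byte](42)
ByteArray.concat(Array[Byte](1, 2), Array[Byte](3))        // Array(1, 2, 3): concat delegates with an empty delimiter
ByteArray.concatWS(sep, Array[Byte](1, 2), Array[Byte](3)) // Array(1, 2, 42, 3)
ByteArray.concatWS(sep, Array[Byte](1, 2), null)           // null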
@@ -67,4 +67,59 @@ public void testCompareBinary() {
byte[] y4 = new byte[]{(byte) 100, (byte) 200};
Assertions.assertEquals(0, ByteArray.compareBinary(x4, y4));
}

@Test
public void testConcat() {
byte[] x1 = new byte[]{(byte) 1, (byte) 2, (byte) 3};
byte[] y1 = new byte[]{(byte) 4, (byte) 5, (byte) 6};
byte[] result1 = ByteArray.concat(x1, y1);
byte[] expected1 = new byte[]{(byte) 1, (byte) 2, (byte) 3, (byte) 4, (byte) 5, (byte) 6};
Assertions.assertArrayEquals(expected1, result1);

byte[] x2 = new byte[]{(byte) 1, (byte) 2, (byte) 3};
byte[] y2 = new byte[0];
byte[] result2 = ByteArray.concat(x2, y2);
byte[] expected2 = new byte[]{(byte) 1, (byte) 2, (byte) 3};
Assertions.assertArrayEquals(expected2, result2);

byte[] x3 = new byte[0];
byte[] y3 = new byte[]{(byte) 4, (byte) 5, (byte) 6};
byte[] result3 = ByteArray.concat(x3, y3);
byte[] expected3 = new byte[]{(byte) 4, (byte) 5, (byte) 6};
Assertions.assertArrayEquals(expected3, result3);

byte[] x4 = new byte[]{(byte) 1, (byte) 2, (byte) 3};
byte[] y4 = null;
byte[] result4 = ByteArray.concat(x4, y4);
Assertions.assertArrayEquals(null, result4);
}

@Test
public void testConcatWS() {
byte[] separator = new byte[]{(byte) 42};

byte[] x1 = new byte[]{(byte) 1, (byte) 2, (byte) 3};
byte[] y1 = new byte[]{(byte) 4, (byte) 5, (byte) 6};
byte[] result1 = ByteArray.concatWS(separator, x1, y1);
byte[] expected1 = new byte[]{(byte) 1, (byte) 2, (byte) 3, (byte) 42,
(byte) 4, (byte) 5, (byte) 6};
Assertions.assertArrayEquals(expected1, result1);

byte[] x2 = new byte[]{(byte) 1, (byte) 2, (byte) 3};
byte[] y2 = new byte[0];
byte[] result2 = ByteArray.concatWS(separator, x2, y2);
byte[] expected2 = new byte[]{(byte) 1, (byte) 2, (byte) 3, (byte) 42};
Assertions.assertArrayEquals(expected2, result2);

byte[] x3 = new byte[0];
byte[] y3 = new byte[]{(byte) 4, (byte) 5, (byte) 6};
byte[] result3 = ByteArray.concatWS(separator, x3, y3);
byte[] expected3 = new byte[]{(byte) 42, (byte) 4, (byte) 5, (byte) 6};
Assertions.assertArrayEquals(expected3, result3);

byte[] x4 = new byte[]{(byte) 1, (byte) 2, (byte) 3};
byte[] y4 = null;
byte[] result4 = ByteArray.concatWS(separator, x4, y4);
Assertions.assertArrayEquals(null, result4);
}
}
51 changes: 28 additions & 23 deletions common/utils/src/main/resources/error/error-conditions.json
@@ -2627,29 +2627,6 @@
],
"sqlState" : "22006"
},
"INVALID_INVERSE_DISTRIBUTION_FUNCTION" : {
"message" : [
"Invalid inverse distribution function <funcName>."
],
"subClass" : {
"DISTINCT_UNSUPPORTED" : {
"message" : [
"Cannot use DISTINCT with WITHIN GROUP."
]
},
"WITHIN_GROUP_MISSING" : {
"message" : [
"WITHIN GROUP is required for inverse distribution function."
]
},
"WRONG_NUM_ORDERINGS" : {
"message" : [
"Requires <expectedNum> orderings in WITHIN GROUP but got <actualNum>."
]
}
},
"sqlState" : "42K0K"
},
"INVALID_JAVA_IDENTIFIER_AS_FIELD_NAME" : {
"message" : [
"<fieldName> is not a valid identifier of Java and cannot be used as field name",
@@ -3364,6 +3341,34 @@
],
"sqlState" : "42601"
},
"INVALID_WITHIN_GROUP_EXPRESSION" : {
"message" : [
"Invalid function <funcName> with WITHIN GROUP."
],
"subClass" : {
"DISTINCT_UNSUPPORTED" : {
"message" : [
"The function does not support DISTINCT with WITHIN GROUP."
]
},
"MISMATCH_WITH_DISTINCT_INPUT" : {
"message" : [
"The function is invoked with DISTINCT and WITHIN GROUP but expressions <funcArg> and <orderingExpr> do not match. The WITHIN GROUP ordering expression must be picked from the function inputs."
]
},
"WITHIN_GROUP_MISSING" : {
"message" : [
"WITHIN GROUP is required for the function."
]
},
"WRONG_NUM_ORDERINGS" : {
"message" : [
"The function requires <expectedNum> orderings in WITHIN GROUP but got <actualNum>."
]
}
},
"sqlState" : "42K0K"
},
"INVALID_WRITER_COMMIT_MESSAGE" : {
"message" : [
"The data source writer has generated an invalid number of commit messages. Expected exactly one writer commit message from each task, but received <detail>."
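For a feel of when the renamed condition fires, two illustrative queries (hypothetical table t; inferred from the checks added in this PR, not quoted from it):

// The DISTINCT ordering must be picked from the function input, so this raises
// INVALID_WITHIN_GROUP_EXPRESSION.MISMATCH_WITH_DISTINCT_INPUT.
spark.sql("SELECT listagg(DISTINCT a) WITHIN GROUP (ORDER BY b) FROM t")

// mode accepts exactly one ordering, so this raises
// INVALID_WITHIN_GROUP_EXPRESSION.WRONG_NUM_ORDERINGS.
spark.sql("SELECT mode() WITHIN GROUP (ORDER BY a, b) FROM t")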
8 changes: 7 additions & 1 deletion python/pyspark/sql/tests/test_functions.py
@@ -83,7 +83,13 @@ def test_function_parity(self):
missing_in_py = jvm_fn_set.difference(py_fn_set)

# Functions that we expect to be missing in python until they are added to pyspark
- expected_missing_in_py = set()
+ expected_missing_in_py = {
+ # TODO(SPARK-50220): listagg functions will soon be added and removed from this list
+ "listagg_distinct",
+ "listagg",
+ "string_agg",
+ "string_agg_distinct",
+ }

self.assertEqual(
expected_missing_in_py, missing_in_py, "Missing functions in pyspark not as expected"
71 changes: 71 additions & 0 deletions sql/api/src/main/scala/org/apache/spark/sql/functions.scala
@@ -1147,6 +1147,77 @@ object functions {
*/
def sum_distinct(e: Column): Column = Column.fn("sum", isDistinct = true, e)

/**
* Aggregate function: returns the concatenation of non-null input values.
*
* @group agg_funcs
* @since 4.0.0
*/
def listagg(e: Column): Column = Column.fn("listagg", e)

/**
* Aggregate function: returns the concatenation of non-null input values, separated by the
* delimiter.
*
* @group agg_funcs
* @since 4.0.0
*/
def listagg(e: Column, delimiter: Column): Column = Column.fn("listagg", e, delimiter)
Reviewer (@yaooqinn, Member, Feb 11, 2025): Declaring the delimiter as String here can improve UX a bit. Since it only allows foldable string literals, we can rely on the compiler instead of runtime errors, WDYT @cloud-fan
Reviewer (Contributor): SGTM
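A rough sketch of the suggested alternative (my reading of the review, not code from the PR): taking the delimiter as a String makes non-literal delimiters unrepresentable, turning a runtime check into a compile-time one.

// Hypothetical signature; lit wraps the literal delimiter into a Column.
def listagg(e: Column, delimiter: String): Column =
  Column.fn("listagg", e, lit(delimiter))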
/**
* Aggregate function: returns the concatenation of distinct non-null input values.
*
* @group agg_funcs
* @since 4.0.0
*/
def listagg_distinct(e: Column): Column = Column.fn("listagg", isDistinct = true, e)

/**
* Aggregate function: returns the concatenation of distinct non-null input values, separated by
* the delimiter.
*
* @group agg_funcs
* @since 4.0.0
*/
def listagg_distinct(e: Column, delimiter: Column): Column =
Column.fn("listagg", isDistinct = true, e, delimiter)

/**
* Aggregate function: returns the concatenation of non-null input values. Alias for `listagg`.
*
* @group agg_funcs
* @since 4.0.0
*/
def string_agg(e: Column): Column = Column.fn("string_agg", e)

/**
* Aggregate function: returns the concatenation of non-null input values, separated by the
* delimiter. Alias for `listagg`.
*
* @group agg_funcs
* @since 4.0.0
*/
def string_agg(e: Column, delimiter: Column): Column = Column.fn("string_agg", e, delimiter)

/**
* Aggregate function: returns the concatenation of distinct non-null input values. Alias for
* `listagg`.
*
* @group agg_funcs
* @since 4.0.0
*/
def string_agg_distinct(e: Column): Column = Column.fn("string_agg", isDistinct = true, e)

/**
* Aggregate function: returns the concatenation of distinct non-null input values, separated by
* the delimiter. Alias for `listagg`.
*
* @group agg_funcs
* @since 4.0.0
*/
def string_agg_distinct(e: Column, delimiter: Column): Column =
Column.fn("string_agg", isDistinct = true, e, delimiter)

/**
* Aggregate function: alias for `var_samp`.
*
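A hedged usage sketch for the new Scala API (the DataFrame, column names, and delimiter below are made up for illustration):

import org.apache.spark.sql.functions._

// Comma-separated concatenation of the non-null names in each department.
df.groupBy(col("dept")).agg(listagg(col("name"), lit(",")).as("names"))

// Same, but keeping only distinct values via the dedicated variant.
df.groupBy(col("dept")).agg(listagg_distinct(col("name"), lit(",")))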
@@ -2782,6 +2782,9 @@ class Analyzer(override val catalogManager: CatalogManager) extends RuleExecutor
ne
case e: Expression if e.foldable =>
e // No need to create an attribute reference if it will be evaluated as a Literal.
case e: SortOrder =>
// For SortOrder, just recursively extract from the child expression.
e.copy(child = extractExpr(e.child))
case e: NamedArgumentExpression =>
// For NamedArgumentExpression, we extract the value and replace it with
// an AttributeReference (with an internal column name, e.g. "_w0").
@@ -24,7 +24,7 @@ import org.apache.spark.sql.AnalysisException
import org.apache.spark.sql.catalyst.ExtendedAnalysisException
import org.apache.spark.sql.catalyst.expressions._
import org.apache.spark.sql.catalyst.expressions.SubExprUtils._
- import org.apache.spark.sql.catalyst.expressions.aggregate.{AggregateExpression, AggregateFunction, Median, PercentileCont, PercentileDisc}
+ import org.apache.spark.sql.catalyst.expressions.aggregate.{AggregateExpression, AggregateFunction, ListAgg, Median, PercentileCont, PercentileDisc}
import org.apache.spark.sql.catalyst.optimizer.{BooleanSimplification, DecorrelateInnerQuery, InlineCTE}
import org.apache.spark.sql.catalyst.plans._
import org.apache.spark.sql.catalyst.plans.logical._
@@ -423,10 +423,23 @@ trait CheckAnalysis extends PredicateHelper with LookupCatalog with QueryErrorsB
"funcName" -> toSQLExpr(wf),
"windowExpr" -> toSQLExpr(w)))

+ case agg @ AggregateExpression(listAgg: ListAgg, _, _, _, _)
+ if agg.isDistinct && listAgg.needSaveOrderValue =>
+ throw QueryCompilationErrors.functionAndOrderExpressionMismatchError(
+ listAgg.prettyName, listAgg.child, listAgg.orderExpressions)

case w: WindowExpression =>
// Only allow window functions with an aggregate expression or an offset window
// function or a Pandas window UDF.
w.windowFunction match {
+ case agg @ AggregateExpression(fun: ListAgg, _, _, _, _)
+ // listagg(...) WITHIN GROUP (ORDER BY ...) OVER (ORDER BY ...) is unsupported
+ if fun.orderingFilled && (w.windowSpec.orderSpec.nonEmpty ||
+ w.windowSpec.frameSpecification !=
+ SpecifiedWindowFrame(RowFrame, UnboundedPreceding, UnboundedFollowing)) =>
+ agg.failAnalysis(
+ errorClass = "INVALID_WINDOW_SPEC_FOR_AGGREGATION_FUNC",
+ messageParameters = Map("aggFunc" -> toSQLExpr(agg.aggregateFunction)))
case agg @ AggregateExpression(
_: PercentileCont | _: PercentileDisc | _: Median, _, _, _, _)
if w.windowSpec.orderSpec.nonEmpty || w.windowSpec.frameSpecification !=
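To illustrate the new window restriction (hypothetical table t; behavior inferred from the check above, not quoted from the PR):

// Rejected with INVALID_WINDOW_SPEC_FOR_AGGREGATION_FUNC: the window supplies
// its own ordering on top of the WITHIN GROUP ordering.
spark.sql("SELECT listagg(a) WITHIN GROUP (ORDER BY a) OVER (ORDER BY b) FROM t")

// Should pass this check: an unordered window with the default unbounded row frame.
spark.sql("SELECT listagg(a) WITHIN GROUP (ORDER BY a) OVER () FROM t")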
@@ -506,6 +506,8 @@ object FunctionRegistry {
expression[CollectList]("collect_list"),
expression[CollectList]("array_agg", true, Some("3.3.0")),
expression[CollectSet]("collect_set"),
expression[ListAgg]("listagg"),
expression[ListAgg]("string_agg", setAlias = true),
expressionBuilder("count_min_sketch", CountMinSketchAggExpressionBuilder),
expression[BoolAnd]("every", true),
expression[BoolAnd]("bool_and"),
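The two newly registered SQL names in action (hypothetical session and table; string_agg resolves to the same ListAgg expression, registered with setAlias = true):

spark.sql("SELECT listagg(name) WITHIN GROUP (ORDER BY name) FROM people")
spark.sql("SELECT string_agg(name, ', ') FROM people")  // alias of listagg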
@@ -128,18 +128,15 @@
numArgs: Int,
u: UnresolvedFunction): Expression = {
func match {
- case owg: SupportsOrderingWithinGroup if u.isDistinct =>
- throw QueryCompilationErrors.distinctInverseDistributionFunctionUnsupportedError(
- owg.prettyName
- )
+ case owg: SupportsOrderingWithinGroup if !owg.isDistinctSupported && u.isDistinct =>
+ throw QueryCompilationErrors.distinctWithOrderingFunctionUnsupportedError(owg.prettyName)
case owg: SupportsOrderingWithinGroup
- if !owg.orderingFilled && u.orderingWithinGroup.isEmpty =>
- throw QueryCompilationErrors.inverseDistributionFunctionMissingWithinGroupError(
- owg.prettyName
- )
+ if owg.isOrderingMandatory && !owg.orderingFilled && u.orderingWithinGroup.isEmpty =>
+ throw QueryCompilationErrors.functionMissingWithinGroupError(owg.prettyName)
case owg: SupportsOrderingWithinGroup
if owg.orderingFilled && u.orderingWithinGroup.nonEmpty =>
- throw QueryCompilationErrors.wrongNumOrderingsForInverseDistributionFunctionError(
+ // e.g mode(expr1) within group (order by expr2) is not supported
+ throw QueryCompilationErrors.wrongNumOrderingsForFunctionError(
owg.prettyName,
0,
u.orderingWithinGroup.length
@@ -198,7 +195,7 @@
case agg: AggregateFunction =>
// Note: PythonUDAF does not support these advanced clauses.
if (agg.isInstanceOf[PythonUDAF]) checkUnsupportedAggregateClause(agg, u)
- // After parse, the inverse distribution functions not set the ordering within group yet.
+ // After parse, the functions not set the ordering within group yet.
val newAgg = agg match {
case owg: SupportsOrderingWithinGroup
if !owg.orderingFilled && u.orderingWithinGroup.nonEmpty =>
@@ -183,14 +183,16 @@ case class Mode(
}

override def orderingFilled: Boolean = child != UnresolvedWithinGroup
+ override def isOrderingMandatory: Boolean = true
+ override def isDistinctSupported: Boolean = false

assert(orderingFilled || (!orderingFilled && reverseOpt.isEmpty))

override def withOrderingWithinGroup(orderingWithinGroup: Seq[SortOrder]): AggregateFunction = {
child match {
case UnresolvedWithinGroup =>
if (orderingWithinGroup.length != 1) {
- throw QueryCompilationErrors.wrongNumOrderingsForInverseDistributionFunctionError(
+ throw QueryCompilationErrors.wrongNumOrderingsForFunctionError(
nodeName, 1, orderingWithinGroup.length)
}
orderingWithinGroup.head match {
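Pulling the threads together, a compact reading of the SupportsOrderingWithinGroup contract as this PR leaves it (member names are taken from the hunks above; the trait body itself is a sketch, not the verbatim source):

trait SupportsOrderingWithinGroup { self: AggregateFunction =>
  // Whether the ordering has already been supplied (e.g. parsed WITHIN GROUP).
  def orderingFilled: Boolean
  // Mode requires WITHIN GROUP; ListAgg treats it as optional.
  def isOrderingMandatory: Boolean
  // ListAgg allows DISTINCT (with matching ordering); Mode does not.
  def isDistinctSupported: Boolean
  // Called by FunctionResolution to attach the parsed ORDER BY expressions.
  def withOrderingWithinGroup(orderingWithinGroup: Seq[SortOrder]): AggregateFunction
}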