
[SPARK-28333][SQL] NULLS FIRST for DESC and NULLS LAST for ASC #25147

Closed · shivusondur wants to merge 1 commit into apache:master from shivusondur:jira28333

Conversation

@shivusondur
Contributor

What changes were proposed in this pull request?

Changed the default null ordering for ASC and DESC (NULLS LAST for ASC, NULLS FIRST for DESC), and updated the corresponding tests.

How was this patch tested?

Ran the sql/catalyst module unit tests and updated the affected test cases.

Commit: changed the default null ordering for ASC and DESC, and updated the corresponding tests
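
For context, a minimal sketch (not part of the PR) of how the current defaults surface through the DataFrame API; this PR proposes flipping them:

// Minimal sketch, not from the PR: current default null ordering in Spark
// is NULLS FIRST for ASC and NULLS LAST for DESC.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("null-ordering-demo").getOrCreate()
import spark.implicits._

val df = Seq(Some(1), None, Some(2)).toDF("x")

df.orderBy($"x".asc).show()   // today: the null row sorts first
df.orderBy($"x".desc).show()  // today: the null row sorts last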
@AmplabJenkins

Can one of the admins verify this patch?

case object Descending extends SortDirection {
  override def sql: String = "DESC"
-  override def defaultNullOrdering: NullOrdering = NullsLast
+  override def defaultNullOrdering: NullOrdering = NullsFirst
Member


I think this is a breaking change and the workaround looks pretty easy. Can you check what other DBMSs do? If it's consistent with some other DBMSs, I think we shouldn't fix it and should resolve the ticket as Won't Do.
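
For reference, a sketch of the easy workaround referred to above, using the explicit null-ordering support Spark already has (reusing the `spark` session and DataFrame `df` with nullable column `x` from the earlier sketch):

// Sketch of the workaround: state the null ordering explicitly instead of
// relying on the per-direction default.
import org.apache.spark.sql.functions.{asc_nulls_last, desc_nulls_first}

df.orderBy(asc_nulls_last("x"))    // ASC with nulls last
df.orderBy(desc_nulls_first("x"))  // DESC with nulls first

df.createOrReplaceTempView("t")
spark.sql("SELECT x FROM t ORDER BY x ASC NULLS LAST")
spark.sql("SELECT x FROM t ORDER BY x DESC NULLS FIRST")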

Contributor Author


@HyukjinKwon
According to the link below, Spark is consistent with MySQL and SQL Server, but inconsistent with DB2, Oracle, and PostgreSQL:
https://docs.mendix.com/refguide/null-ordering-behavior#3-overview-of-default-nulls-sort-order

[image: table of default NULL sort order per DBMS from the linked page]

Member


I think the current behaviour already makes sense. I wouldn't fix it. cc @dongjoon-hyun, @wangyum, @gatorsmile. WDYT?

Member


+1 for @HyukjinKwon's comment.

@dongjoon-hyun
Member

dongjoon-hyun commented Jul 15, 2019

This was introduced in 2.1.0 via #14842, and we are following Hive's behavior. We cannot change this default behavior for the reason @HyukjinKwon pointed out.

@shivusondur, we can make a configuration for this instead if needed.

Could you make a decision, @gatorsmile, on whether to close this issue as Won't Do or redirect it to the configuration approach, please?
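
For illustration only, a hypothetical sketch of what such a configuration could look like as a SQLConf entry; the flag name and its wiring are invented here and do not exist in Spark:

// Hypothetical sketch only: this entry does not exist in Spark. It follows the
// usual declaration pattern inside org.apache.spark.sql.internal.SQLConf.
val LEGACY_FLIPPED_DEFAULT_NULL_ORDERING = buildConf("spark.sql.legacy.flippedDefaultNullOrdering")
  .doc("When true, ASC defaults to NULLS LAST and DESC defaults to NULLS FIRST " +
    "instead of the current NULLS FIRST / NULLS LAST defaults.")
  .booleanConf
  .createWithDefault(false)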

@shivusondur
Contributor Author

shivusondur commented Jul 19, 2019

> This was introduced in 2.1.0 via #14842, and we are following Hive's behavior. We cannot change this default behavior for the reason @HyukjinKwon pointed out.
>
> @shivusondur, we can make a configuration for this instead if needed.
>
> Could you make a decision, @gatorsmile, on whether to close this issue as Won't Do or redirect it to the configuration approach, please?

@gatorsmile, please let me know your thoughts on this.

@dongjoon-hyun
Member

Hi, @shivusondur. This is a tough call. Although Spark 3.0.0 would be a good chance for this, let's close this PR and the issue as Won't Do for now, since there are no positive comments.

@HyukjinKwon
Member

Yup, let's just close it. I don't particularly think this is worth it, and there are no positive comments from others, as pointed out.

@gatorsmile
Member

gatorsmile commented Aug 2, 2019

@shivusondur Can you submit a PR to document our current behavior?

@gatorsmile
Member

Create a subtask in https://issues.apache.org/jira/browse/SPARK-28588?

