[SPARK-52656][SQL] Fix current_time() #51351
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/timeExpressions.scala
@cloud-fan @the-sakthi Could you review this bug fix, please?

@dongjoon-hyun @LuciferYang Could you review this PR, please?
```diff
  * For example, if `p = 3`, we keep millisecond resolution and discard any digits beyond the
- * thousand-microsecond place. So a value like `123456` microseconds (12:34:56.123456) becomes
+ * thousand-nanosecond place. So a value like `123456` microseconds (12:34:56.123456) becomes
```
> So a value like `123456` microseconds (12:34:56.123456) becomes `123000` microseconds (12:34:56.123).

Is it still "microseconds" in this comment? Is that correct?
Internally, it is nanoseconds, but the example is correct from a logical view. When you truncate `123456` microseconds at precision 6 to precision 3, you get `123000` microseconds (internally `123000000` nanoseconds).
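The truncation described above can be sketched in a few lines (a minimal Python sketch of the behaviour, not Spark's actual `truncateTimeToPrecision()`; the function name and shape here are illustrative):

```python
NANOS_PER_SECOND = 1_000_000_000

def truncate_time_to_precision(nanos_of_day: int, p: int) -> int:
    # Keeping `p` fractional-second digits means the smallest surviving
    # unit is 10^(9 - p) nanoseconds; integer division zeroes the rest.
    divisor = 10 ** (9 - p)
    return (nanos_of_day // divisor) * divisor

# 123456 microseconds = 123_456_000 nanoseconds; truncated to p = 3 this
# becomes 123_000_000 nanoseconds, i.e. 123000 microseconds (.123).
assert truncate_time_to_precision(123_456_000, 3) == 123_000_000
```

With `p = 6` the value is returned unchanged, matching the "precision 6 to precision 3" example in the comment above.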
LGTM!
```diff
@@ -52,6 +52,7 @@ object TimeType {
   val MICROS_PRECISION: Int = 6
   val MAX_PRECISION: Int = MICROS_PRECISION
```
Is `MAX_PRECISION` equal to `NANOS_PRECISION`?
Not yet. From the user's perspective, Spark SQL supports time precisions up to microseconds so far.
Got it.
Merging to master. Thank you, @the-sakthi @cloud-fan @LuciferYang @beliefer for review.
What changes were proposed in this pull request?

After switching the internal representation of TIME values from microseconds to nanoseconds in PR #51156, `current_time(p)` returns incorrect results. In this PR, I propose to fix two datetime functions, `truncateTimeToPrecision()` and `instantToNanosOfDay()`, and consequently the `CurrentTime` expression.

Why are the changes needed?

It fixes the incorrect behaviour of the `current_time()` function.

Does this PR introduce any user-facing change?
No, the TIME data type and related functions haven't been released yet.
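For intuition, the kind of conversion `instantToNanosOfDay()` performs can be sketched as follows (a Python sketch under assumed semantics; the real helper is Scala and operates on `java.time.Instant`, and the function name below is illustrative):

```python
from datetime import datetime

NANOS_PER_SECOND = 1_000_000_000

def instant_to_nanos_of_day(dt: datetime) -> int:
    # Whole seconds since midnight, scaled to nanoseconds, plus the
    # sub-second part. Python datetimes only carry microseconds, so the
    # last three digits are always zero in this sketch.
    secs = dt.hour * 3600 + dt.minute * 60 + dt.second
    return secs * NANOS_PER_SECOND + dt.microsecond * 1_000

# 12:34:56.123456 -> 45_296 seconds plus 123_456_000 nanoseconds.
assert instant_to_nanos_of_day(datetime(2025, 7, 1, 12, 34, 56, 123456)) \
    == 45_296 * NANOS_PER_SECOND + 123_456_000
```

The bug class this PR fixes is exactly a mismatch between such a nanoseconds-of-day value and code still assuming microseconds.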
How was this patch tested?
By running new tests:
and by manual testing:
Was this patch authored or co-authored using generative AI tooling?
No.