Ran into this while adding result verification: the result produced by ClickHouse for Q3 appears to be incorrect, likely due to an internal overflow:
```sql
SELECT AVG(UserID) FROM hits;

┌─────────avg(UserID)─┐
│ -55945124888.916016 │
└─────────────────────┘
```
I'm not sure if this is intended behavior; it does not appear to be mentioned in the documentation.
Adding a cast to `INT128` or `DOUBLE` fixes the problem:

```sql
SELECT AVG(toInt128(UserID)) FROM hits;

┌─avg(toInt128(UserID))─┐
│   2528953029789716000 │
└───────────────────────┘

SELECT AVG(CAST(UserID AS DOUBLE)) FROM hits;

┌─avg(CAST(UserID, 'DOUBLE'))─┐
│         2528953029789716000 │
└─────────────────────────────┘
```
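One way to confirm the wraparound (a hedged diagnostic I used, not part of the original report): compute the sum with an explicitly widened accumulator and compare it against the plain sum. If the two differ, the 64-bit sum has overflowed.

```sql
-- SUM over the raw UInt64 column accumulates in 64 bits and can wrap;
-- SUM over the 128-bit-widened column cannot wrap at this data size.
-- A mismatch between sum_64 and sum_128 confirms the overflow.
SELECT
    SUM(UserID)            AS sum_64,
    SUM(toUInt128(UserID)) AS sum_128
FROM hits;
```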
Yes, it's intended behavior. DuckDB uses `hugeint_t` for the aggregation `sum(int64_t)`, whereas ClickHouse uses `UInt64` (`NearestFieldTypeImpl<UInt64>`), so the sum wraps around at 64 bits.
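For reference, a minimal sketch that reproduces the wraparound without the `hits` table, assuming the 64-bit sum ends up reinterpreted in the signed range when the average is computed (which would explain the negative result above):

```sql
-- Two UInt64 values whose sum (1.8e19) exceeds 2^63 - 1 (~9.2e18).
-- If the 64-bit accumulator wraps into the signed range, the average
-- comes out negative instead of the true value of 9e18.
SELECT AVG(x)
FROM (SELECT arrayJoin([toUInt64(9000000000000000000),
                        toUInt64(9000000000000000000)]) AS x);
```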
The docs have to be improved.