
Fix adding sub-second intervals to DateTime #53309

Merged
merged 1 commit into master on Aug 21, 2023

Conversation

al13n321
Member

Changelog category (leave one):

  • Bug Fix (user-visible misbehavior in an official stable release)

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

Fixed adding intervals of a fraction of a second to DateTime producing an incorrect result.

Closes #45779

:) select toDateTime('2000-01-01') + interval 1 millisecond

Before:

┌─plus(toDateTime('2000-01-01'), toIntervalMillisecond(0))─┐
│                                      2026-10-14 16:21:20 │
└──────────────────────────────────────────────────────────┘

After:

┌─plus(toDateTime('2000-01-01'), toIntervalMillisecond(1))─┐
│                                  2000-01-01 00:00:00.001 │
└──────────────────────────────────────────────────────────┘

It was a simple bug in AddNanosecondsImpl and the related Impl classes: they were returning the wrong type.
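
Roughly the shape of the bug, as a simplified, self-contained sketch of how returning too narrow a type garbles the value (the type aliases and struct names are hypothetical, not the actual Impl code):

#include <cstdint>
#include <iostream>

// Hypothetical simplified stand-ins for the real types (illustration only).
using DateTime = uint32_t;    // whole seconds since the epoch
using DateTime64 = int64_t;   // sub-second ticks at some scale

struct AddMillisecondsBuggy
{
    // Bug pattern: the declared return type is the narrow DateTime,
    // so the scaled sub-second value is truncated into nonsense.
    static DateTime execute(DateTime t, int64_t delta, int64_t scale_multiplier)
    {
        return t * scale_multiplier + delta;
    }
};

struct AddMillisecondsFixed
{
    // Fix pattern: return the wide DateTime64 so the scaled value survives.
    static DateTime64 execute(DateTime t, int64_t delta, int64_t scale_multiplier)
    {
        return static_cast<DateTime64>(t) * scale_multiplier + delta;
    }
};

int main()
{
    const DateTime t = 946684800;   // 2000-01-01 00:00:00 UTC in seconds
    const int64_t scale = 1000;     // millisecond precision, i.e. DateTime64(3)
    std::cout << AddMillisecondsBuggy::execute(t, 1, scale) << '\n';  // garbage
    std::cout << AddMillisecondsFixed::execute(t, 1, scale) << '\n';  // 946684800001
}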


Along the way, I added a check for this overflow (very incomplete; other overflows remain unchecked):

:) select toDateTime64('3000-01-01 12:00:00.12345', 0) + interval 0 nanosecond

Before:

┌─plus(toDateTime64('3000-01-01 12:00:00.12345', 0), toIntervalNanosecond(0))─┐
│                                               1900-01-01 00:00:00.290448384 │
└─────────────────────────────────────────────────────────────────────────────┘

After:

Received exception:
Code: 407. DB::Exception: Decimal math overflow: While processing toDateTime64('3000-01-01 12:00:00.12345', 0) + toIntervalNanosecond(0). (DECIMAL_OVERFLOW)
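
Roughly the kind of check involved, sketched with the GCC/Clang checked-multiply builtin rather than ClickHouse's own helpers (the helper function name is hypothetical; this is not the actual patch):

#include <cstdint>
#include <iostream>
#include <stdexcept>

// Hypothetical helper (illustration only): rescale a seconds value to DateTime64
// ticks, throwing on overflow instead of silently wrapping around like the
// bogus 1900-01-01 result above.
int64_t scaleWithOverflowCheck(int64_t seconds, int64_t scale_multiplier)
{
    int64_t result = 0;
    if (__builtin_mul_overflow(seconds, scale_multiplier, &result))  // GCC/Clang builtin
        throw std::overflow_error("Decimal math overflow");
    return result;
}

int main()
{
    // 3000-01-01 is roughly 3.25e10 seconds since the epoch; at nanosecond scale
    // the product exceeds INT64_MAX (~9.22e18), so the check fires.
    try
    {
        scaleWithOverflowCheck(32503680000LL, 1000000000LL);
    }
    catch (const std::overflow_error & e)
    {
        std::cout << e.what() << '\n';
    }
}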

And I made incompatible types get rejected during type checking rather than during execution:

:) select toTypeName(materialize(toDate('2000-01-01')) + interval 1 nanosecond)

Before:

┌─toTypeName(plus(materialize(toDate('2000-01-01')), toIntervalNanosecond(1)))─┐
│ DateTime64(9)                                                                │
└──────────────────────────────────────────────────────────────────────────────┘

After:

Received exception:
Code: 43. DB::Exception: addNanoseconds cannot be used with Date: While processing toTypeName(materialize(toDate('2000-01-01')) + toIntervalNanosecond(1)). (ILLEGAL_TYPE_OF_ARGUMENT)
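
The point is where the check happens: once, when the expression's types are resolved, instead of during (or after) per-row execution. A simplified, hypothetical sketch of that idea (made-up names, not ClickHouse's actual function-resolution API):

#include <iostream>
#include <stdexcept>
#include <string>

// Hypothetical type tags standing in for the real data types (illustration only).
enum class Type { Date, DateTime, DateTime64 };

std::string typeName(Type t)
{
    switch (t)
    {
        case Type::Date:       return "Date";
        case Type::DateTime:   return "DateTime";
        case Type::DateTime64: return "DateTime64";
    }
    return "unknown";
}

// Resolution-time check: reject an unsupported argument type before any rows
// are processed, instead of failing (or silently misbehaving) during execution.
Type resolveAddNanoseconds(Type argument)
{
    if (argument == Type::Date)
        throw std::invalid_argument("addNanoseconds cannot be used with " + typeName(argument));
    return Type::DateTime64;  // sub-second intervals need a sub-second result type
}

int main()
{
    try
    {
        resolveAddNanoseconds(Type::Date);
    }
    catch (const std::invalid_argument & e)
    {
        std::cout << e.what() << '\n';  // addNanoseconds cannot be used with Date
    }
}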

@robot-ch-test-poll4 robot-ch-test-poll4 added the pr-bugfix Pull request with bugfix, not backported by default label Aug 11, 2023
@robot-ch-test-poll4
Contributor

robot-ch-test-poll4 commented Aug 11, 2023

This is an automated comment for commit 690b0e9 with a description of the existing statuses. It's updated for the latest CI run.
The full report is available here
The overall status of the commit is 🔴 failure

Check name | Description | Status
AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parentheses. If it fails, ask a maintainer for help | 🟢 success
Bugfix validate check | Checks that there is either a new test (functional or integration) or some changed tests that fail with the binary built on the master branch | 🟢 success
CI running | A meta-check that indicates the running CI. Normally, it's in a success or pending state. A failed status indicates some problems with the PR | 🟢 success
ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log by grepping for cmake. Use these options and follow the general build process | 🟢 success
Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help | 🟢 success
Docker image for servers | The check to build and optionally push the mentioned image to Docker Hub | 🟢 success
Fast test | Normally this is the first check that is run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here | 🟢 success
Flaky tests | Checks whether newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test failed at least once, or was too long, this check will be red. We don't allow flaky tests; read the doc | 🟢 success
Install packages | Checks that the built packages are installable in a clean environment | 🟢 success
Integration tests | The integration tests report. The package type is given in parentheses, and the optional part/total tests are in square brackets | 🟢 success
Mergeable Check | Checks if all other necessary checks are successful | 🟢 success
Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. The optional part/total tests are in square brackets | 🟢 success
Push to Dockerhub | The check for building and pushing the CI-related docker images to Docker Hub | 🟢 success
SQLTest | There's no description for the check yet; please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | 🟢 success
SQLancer | Fuzzing tests that detect logical bugs with the SQLancer tool | 🟢 success
Sqllogic | Runs clickhouse on the sqllogic test set against sqlite and checks that all statements pass | 🟢 success
Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations: release, debug, with sanitizers, etc. | 🟢 success
Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations: release, debug, with sanitizers, etc. | 🟢 success
Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors | 🔴 failure
Style Check | Runs a set of checks to keep the code style clean. If some of the checks fail, see the related log from the report | 🟢 success
Unit tests | Runs the unit tests for different release types | 🟢 success
Upgrade check | Runs stress tests on the server version from the last release and then tries to upgrade it to the version from the PR. It checks that the new server can start up successfully without any errors, crashes, or sanitizer asserts | 🟢 success

@antonio2368 antonio2368 self-assigned this Aug 11, 2023
Comment on lines 65 to 66
chassert(false);
return 0;
Member

Maybe throw a logical error in cases like this?
Returning 0 from something that shouldn't be called seems strange, plus it could make it harder to debug.

Member Author

Yeah, that sounds better, changing.
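
For illustration, the pattern being suggested, written against plain standard C++ (std::logic_error standing in for ClickHouse's LOGICAL_ERROR exception; the function names are hypothetical and this is not the actual diff):

#include <cassert>
#include <cstdint>
#include <iostream>
#include <stdexcept>

// Before: with assertions compiled out, this silently returns a bogus 0.
int64_t executeUnsupportedBefore()
{
    assert(false && "must not be called");
    return 0;
}

// After: always fails loudly, even in release builds, which is easier to debug.
[[noreturn]] int64_t executeUnsupportedAfter()
{
    throw std::logic_error("executeUnsupportedAfter() must never be called");
}

int main()
{
    try
    {
        executeUnsupportedAfter();
    }
    catch (const std::logic_error & e)
    {
        std::cout << e.what() << '\n';
    }
}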

@al13n321
Member Author

Stress test (debug): #53454

@al13n321 al13n321 merged commit a5fbac9 into master Aug 21, 2023
276 of 277 checks passed
@al13n321 al13n321 deleted the int branch August 21, 2023 19:33
@al13n321
Member Author

The new test is flaky:

Wow, a simple toDateTime('2000-01-01 12:00:00') + INTERVAL 1234567 SECOND produced a result that's off by exactly one hour. Must be something related to timezones. But what exactly? The only thing I can think of is session's or server's timezone changing between the toDateTime execution and the formatting of the results; but AFAICT server timezone can't change at runtime and session timezone can't change without the test doing any queries to change it. What else can it be? Am I misunderstanding how timezones work in CH?

@tavplubix
Member

Maybe this is the culprit:

"session_timezone": lambda: random.choice(
[
# special non-deterministic around 1970 timezone, see [1].
#
# [1]: https://github.com/ClickHouse/ClickHouse/issues/42653
"America/Mazatlan",
"America/Hermosillo",
"Mexico/BajaSur",
# server default that is randomized across all timezones
# NOTE: due to lots of trickery we cannot use empty timezone here, but this should be the same.
get_localzone(),
]

@al13n321
Member Author

Ah, it turns out the real timezone randomization is in docker/test/stateless/run.sh rather than in tests/clickhouse-test; that's why I couldn't reproduce it locally. Fix in #53906.

@tavplubix tavplubix mentioned this pull request Aug 29, 2023
@nikitamikhaylov nikitamikhaylov added pr-must-backport Pull request should be backported intentionally. Use this label with great care! pr-must-backport-cloud labels Sep 25, 2023
@robot-clickhouse-ci-2 robot-clickhouse-ci-2 added the pr-backports-created Backport PRs are successfully created, it won't be processed by CI script anymore label Sep 25, 2023
robot-clickhouse-ci-2 added a commit that referenced this pull request Sep 25, 2023
Backport #53309 to 23.3: Fix adding sub-second intervals to DateTime
al13n321 pushed a commit that referenced this pull request Sep 26, 2023
…54983)

Co-authored-by: robot-clickhouse <robot-clickhouse@users.noreply.github.com>
al13n321 pushed a commit that referenced this pull request Sep 26, 2023
…54981)

Co-authored-by: robot-clickhouse <robot-clickhouse@users.noreply.github.com>