
Optimize count from data in most formats, better work with _file/_path virtual columns #53174

Closed
wants to merge 9 commits

Conversation

Avogar
Member

@Avogar Avogar commented Aug 8, 2023

Changelog category (leave one):

  • Performance Improvement

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

Optimize count from files in most input formats. Don't read actual data when only _file/_path columns are requested and just count the number of rows. Use filter on _file/_path before reading data in file/url/hdfs functions, fix issues with _path/_file virtual columns. Use cache for number of rows in files that checks file last modification time (just like schema inference cache). Optimize group by with all constant keys (optimizes queries like select count() from file(...) group by _file/_path).
Closes #44334
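
For illustration, the kind of query this targets might look like the following (hypothetical file name; a sketch, not taken from the PR):

```sql
-- Hypothetical sketch: with this optimization, a plain count() over a
-- Parquet file can be answered from file metadata (row counts) rather
-- than by decoding the actual column data.
SELECT count() FROM file('data.parquet');
```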

Documentation entry for user-facing changes

  • Documentation is written (mandatory for new features)

Information about CI checks: https://clickhouse.com/docs/en/development/continuous-integration/

@robot-ch-test-poll2 robot-ch-test-poll2 added the pr-performance Pull request with some performance improvements label Aug 8, 2023
@robot-ch-test-poll2
Contributor

robot-ch-test-poll2 commented Aug 8, 2023

This is an automated comment for commit 98706ce with a description of the existing statuses. It is updated for the latest CI run.
The full report is available here
The overall status of the commit is 🔴 failure

| Check name | Description | Status |
|---|---|---|
| CI running | A meta-check that indicates the running CI. Normally it's in success or pending state. A failed status indicates some problem with the PR | 🟡 pending |
| ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log by grepping for cmake. Use these options and follow the general build process | 🔴 failure |
| Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help | 🟢 success |
| Docker image for servers | The check to build and optionally push the mentioned image to Docker Hub | 🟢 success |
| Fast test | Normally this is the first check run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here | 🟢 success |
| Flaky tests | Checks whether newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test fails at least once, or runs too long, this check turns red. We don't allow flaky tests; read the doc | 🟢 success |
| Install packages | Checks that the built packages are installable in a clean environment | 🟢 success |
| Integration tests | The integration tests report. The package type is given in parentheses, and the optional part/total tests in square brackets | 🔴 failure |
| Mergeable Check | Checks whether all other necessary checks are successful | 🔴 failure |
| Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. The optional part/total tests are in square brackets | 🔴 failure |
| Push to Dockerhub | The check for building and pushing the CI-related Docker images to Docker Hub | 🟢 success |
| SQLTest | There's no description for the check yet; please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | 🟢 success |
| SQLancer | Fuzzing tests that detect logical bugs with the SQLancer tool | 🟢 success |
| Sqllogic | Runs clickhouse on the sqllogic test set against sqlite and checks that all statements pass | 🟢 success |
| Style Check | Runs a set of checks to keep the code style clean. If some tests fail, see the related log from the report | 🟢 success |
| Unit tests | Runs the unit tests for different release types | 🟢 success |

@robot-clickhouse robot-clickhouse added the submodule changed At least one submodule changed in this PR. label Aug 9, 2023
@danthegoodman1

> Use filter on _file/_path before reading data in file/url/hdfs functions

Just for clarity, and I believe this is the case: this is already implemented in the s3 table function, correct?

@Avogar
Member Author

Avogar commented Aug 9, 2023

> Just for clarity, and I believe this is the case, this is already implemented in the s3 table function correct?

Yes, for s3 and azureBlobStorage it already worked. I also implemented it for file/url/hdfs.
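
As a sketch of what this filter pushdown enables (hypothetical paths; not taken from the PR):

```sql
-- Hypothetical sketch: the predicate on the _path virtual column can be
-- evaluated against file names before any data is read, so files that
-- don't match are skipped entirely instead of being parsed and filtered.
SELECT count()
FROM file('logs/*.csv')
WHERE _path LIKE '%2023-08-09%';
```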

@Avogar
Member Author

Avogar commented Aug 17, 2023

I think I will split this PR into 4 separate PRs:

  1. Apply filter by files in file/hdfs/url table engines + fixes around virtual columns + refactoring to avoid code duplication: Use filter by file/path before reading in url/file/hdfs table functions #53529
  2. Faster count from all formats: Optimize count from files in most input formats #53637
  3. Cache for number of rows: Cache number of rows in files for count in file/s3/url/hdfs/azure functions #53692
  4. Group by constant keys optimization: Optimize group by constant keys #53549
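
The fourth item targets queries where all GROUP BY keys are constant within each processed unit; a hedged sketch of such a query (hypothetical glob, not from the PR):

```sql
-- Hypothetical sketch: within any single file, _file is a constant, so
-- grouping by it yields one group per file and can bypass the general
-- hash-aggregation machinery.
SELECT _file, count()
FROM file('part_*.parquet')
GROUP BY _file;
```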

@Avogar Avogar closed this Aug 17, 2023
Labels
pr-performance Pull request with some performance improvements submodule changed At least one submodule changed in this PR.
Development

Successfully merging this pull request may close these issues.

Parquet files should be able to count from metadata
4 participants