
Granular code coverage with introspection #56102

Merged
merged 16 commits into from Nov 16, 2023

Conversation

alexey-milovidov
Member

@alexey-milovidov commented Oct 29, 2023

Changelog category (leave one):

  • Build/Testing/Packaging Improvement

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

Add a new build option SANITIZE_COVERAGE. If it is enabled, the code is instrumented to track the coverage. The collected information is available inside ClickHouse with: (1) a new function coverage that returns an array of unique addresses in the code found after the previous coverage reset; (2) SYSTEM RESET COVERAGE query that resets the accumulated data. This allows us to compare the coverage of different tests, including differential code coverage. Continuation of #20539.

Example: https://pastila.nl/?00013e20/348ab745ec310eeb1994a3cca223db8e#2WCaoENmvHHK539A2WX6jQ==
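
A minimal sketch of that workflow (the exact CMake invocation for enabling SANITIZE_COVERAGE is not shown here; the function and the query below are the ones named in the changelog entry):

-- Drop all coverage data accumulated since server startup or the last reset.
SYSTEM RESET COVERAGE;

-- Run any workload whose coverage we want to observe.
SELECT count() FROM system.numbers LIMIT 1000000;

-- coverage() returns an array of unique addresses in the code found after
-- the previous reset; here we only look at how many there are.
SELECT length(coverage());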

@robot-clickhouse-ci-1 added the pr-build (Pull request with build/testing/packaging improvement) and submodule changed (At least one submodule changed in this PR) labels Oct 29, 2023
@robot-clickhouse-ci-1
Contributor

robot-clickhouse-ci-1 commented Oct 29, 2023

This is an automated comment for commit 13a6f88 with a description of existing statuses. It's updated for the latest CI run.

❌ Click here to open a full report in a separate page

Successful checks

Check name | Description | Status
AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parentheses. If it fails, ask a maintainer for help | ✅ success
CI running | A meta-check that indicates the running CI. Normally, it's in a success or pending state. A failed status indicates some problem with the PR | ✅ success
ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log by grepping for cmake. Use these options and follow the general build process | ✅ success
Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help | ✅ success
Docker image for servers | The check to build and optionally push the mentioned image to Docker Hub | ✅ success
Fast test | Normally this is the first check that is run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here | ✅ success
Flaky tests | Checks whether newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test fails at least once, or runs too long, this check will be red. We don't allow flaky tests; read the doc | ✅ success
Install packages | Checks that the built packages are installable in a clean environment | ✅ success
Mergeable Check | Checks if all other necessary checks are successful | ✅ success
Push to Dockerhub | The check for building and pushing the CI-related docker images to docker hub | ✅ success
SQLTest | There's no description for the check yet; please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success
SQLancer | Fuzzing tests that detect logical bugs with the SQLancer tool | ✅ success
Sqllogic | Runs clickhouse on the sqllogic test set against sqlite and checks that all statements pass | ✅ success
Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | ✅ success
Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors | ✅ success
Style Check | Runs a set of checks to keep the code style clean. If some of the tests fail, see the related log from the report | ✅ success
Unit tests | Runs the unit tests for different release types | ✅ success
Upgrade check | Runs stress tests on the server version from the last release and then tries to upgrade it to the version from the PR. It checks whether the new server can start up successfully without errors, crashes, or sanitizer asserts | ✅ success

Failed checks

Check name | Description | Status
Integration tests | The integration tests report. In parentheses the package type is given, and in square brackets the optional part/total tests | ❌ failure
Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. In square brackets are the optional part/total tests | ❌ failure
Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | ❌ failure

@alexey-milovidov
Member Author

alexey-milovidov commented Oct 29, 2023

I'm considering two implementation options:

  1. Global coverage data. It will require running the tests sequentially, but it will give more complete information about what the server was doing during the test.
  2. Per-thread coverage data, flushed into a system table. It will allow running the tests in parallel, but will only collect the data from threads attributed to queries (no data from background operations).

It looks like the first option is better for actual code coverage.
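
As a rough sketch of how option 1 could be driven by a sequential test runner (the system.coverage table with test_name and coverage columns is the one queried in the example later in this thread; whether it is populated exactly like this is an assumption):

-- Before each test; tests run sequentially, so the snapshot below is
-- attributable to exactly one test.
SYSTEM RESET COVERAGE;

-- ... the test's own queries run here ...

-- After the test: snapshot the unique addresses seen since the reset.
INSERT INTO system.coverage (test_name, coverage)
VALUES ('00647_histogram_negative.sql', coverage());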

I'm also considering various details of the coverage data:

  1. Unique addresses seen in the code - the most lightweight option.
  2. Unique addresses seen in the code, with a counter of how many times each was seen - for profiling.
  3. Full trace, including every position in the code, possibly with thread ids and timestamps. It is the heaviest option, but it can be good for in-depth analysis of code behavior. But tracing every basic block is worse than tracing only large-enough functions; see "What is LLVM XRay, just wondering?" #34160

I'm also considering possible extensions of this instrumentation:

  1. For example, we can allow dynamic fail-points for selected functions. But it will work better with "What is LLVM XRay, just wondering?" #34160


@alexey-milovidov
Member Author

I'm thinking about where to enable it - either it will be another build type (release+coverage) and another test run, or it will be included in the debug build.

@alexey-milovidov
Member Author

> It will require running the tests sequentially

We can run tests in parallel with coverage using multiple clickhouse-server processes.

@alexey-milovidov
Member Author

Example of unique lines covered by a test:

WITH '00647_histogram_negative.sql' AS name
SELECT addressToLine(arrayJoin(coverage) AS addr)
FROM system.coverage
WHERE (test_name = name) AND (addr NOT IN (
    SELECT arrayJoin(coverage) AS addr
    FROM system.coverage
    WHERE test_name != name
))

Query id: c3d198cd-4a93-4585-b746-2c5653f59bfe

┌─addressToLine(arrayJoin(coverage))──────────────────────────────────┐
│ ./build/./src/AggregateFunctions/AggregateFunctionHistogram.h:59    │
│ ./build/./contrib/llvm-project/libcxx/include/__algorithm/comp.h:73 │
│ ./build/./src/AggregateFunctions/AggregateFunctionHistogram.h:351   │
└─────────────────────────────────────────────────────────────────────┘

3 rows in set. Elapsed: 0.239 sec.
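
The same idea can resolve addresses to demangled symbol names instead of source lines (addressToSymbol and demangle are existing introspection functions; the schema is the same as above):

WITH '00647_histogram_negative.sql' AS name
SELECT DISTINCT demangle(addressToSymbol(arrayJoin(coverage) AS addr)) AS symbol
FROM system.coverage
WHERE (test_name = name) AND (addr NOT IN (
    SELECT arrayJoin(coverage)
    FROM system.coverage
    WHERE test_name != name
))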

@alexey-milovidov
Member Author

Tests sorted by the number of unique places in the code not covered by any other test:
https://pastila.nl/?0002b5fa/a52ccac38845d436b6eb86baeb22e014#N51wuWHXYZ1kYNcBPth+EA==
(note: this is incomplete, the tests are still in progress)
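
The query behind this ranking wasn't posted; here is a sketch of how it could be computed from the same table (my reconstruction, not necessarily the query that was used):

SELECT t AS test_name, count() AS unique_places
FROM
(
    -- Keep only the addresses that are hit by exactly one test;
    -- any(test_name) then yields that single test.
    SELECT arrayJoin(coverage) AS addr, any(test_name) AS t
    FROM system.coverage
    GROUP BY addr
    HAVING uniqExact(test_name) = 1
)
GROUP BY t
ORDER BY unique_places DESC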

@alexey-milovidov
Member Author

clickhouse-client, clickhouse-local, and other tools should dump the coverage into a file for subsequent insertion - we can control it with an environment variable.

@alexey-milovidov
Member Author

Functional tests can finish in several hours in sequential mode - this is acceptable.

@maxknv
Member

maxknv commented Oct 30, 2023

> I'm thinking about where to enable it - either it will be another build type (release+coverage) and another test run, or it will be included in the debug build.

At this point, is the goal to have a real "test coverage map" in the CH database? For the sake of CI job simplicity, I'd steer towards a separate job (build/run). Maybe using it for the master workflow only would be sufficient, but this of course depends on our needs.

@@ -1173,6 +1173,22 @@ class TestCase:
description_full += result.reason.value

description_full += result.description

if BuildFlags.SANITIZE_COVERAGE in args.build_flags:
Member

Is args defined here?

(after 5 minutes of jumping around the code)

Oh well... it's very obfuscated: in some places it's global, in others local, and in others it has a completely different meaning than anywhere else in the file. But yes, it's defined globally under if __name__ == "__main__".

@alexey-milovidov
Member Author

Having a separate build job and check will be reasonable. It would be good if it runs for PRs, because we can report whether the coverage increased or decreased, the newly covered lines of code, etc. It will require functional, integration, and possibly unit tests. It would be nice to have it in the Fuzzer and Stress tests, but for that purpose we can reuse one of the existing jobs, e.g., replace release with release+coverage.

There are different goals for coverage:

  1. Show a report with test coverage by files, for a nice headline number (e.g., it was ~85% by lines from functional tests the last time we collected a real report).

But it will require a few more steps:

  • for attributing addresses to lines, we have to call addressToLineWithInlines for all addresses, and it could easily take +1 hour; addressToLineWithInlines, because in a release build a line of our code is often attributed to inlined code somewhere in, e.g., std::vector; we could also compile without inlining for coverage;
  • the coverage can be calculated by "edges", "basic blocks", "functions", "lines", and "files"; currently, the code is instrumented on edges, which is more granular than basic blocks, and more granular than functions; but the usual report is by lines; it is easy to visualize every edge or basic block by highlighting the line where it starts, so that is good for visualization, but for calculating coverage by lines, we need to know how many lines every basic block / edge spans.

Note: if we calculate coverage by edges (there could be many unique edges on a single line due to template instantiations), it will be quite low, maybe 30%.

We can calculate the coverage by symbols (functions, counting different template instantiations separately) - see the sketch at the end of this comment.

  2. Show some absolute or relative number to check whether it grows or declines over time and in every pull request. Show the newly covered unique code and the new code that wasn't covered.

This will be easy to do.

  3. Calculate statistical properties of tests and sort tests by their relevance to the modified lines of code. Enrich the documentation with information about tests - e.g., this function is covered by these tests, in order of relevance (by the way, this is already possible by using used_functions in the query_log).

This will be easy to do.
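
A sketch of the coverage-by-symbols calculation mentioned above, counting only the covered side (the denominator would require the full symbol table of the binary, which is out of scope for this sketch):

SELECT uniqExact(symbol) AS covered_symbols
FROM
(
    -- Resolve every covered address to a demangled function name;
    -- different template instantiations produce different symbols.
    SELECT demangle(addressToSymbol(arrayJoin(coverage))) AS symbol
    FROM system.coverage
)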

@alexey-milovidov
Member Author

alexey-milovidov commented Nov 16, 2023

Let's do this:

  1. We can merge this PR, which adds the new build options and support for them in clickhouse-test, but doesn't use them in CI.
  2. I will do subsequent PRs for lightweight coverage in other tools (clickhouse-client, clickhouse-format, etc).
  3. I will do subsequent PRs checking if we can enable coverage by default for some builds.

@alexey-milovidov alexey-milovidov self-assigned this Nov 16, 2023
@alexey-milovidov alexey-milovidov merged commit 482d8ca into master Nov 16, 2023
333 of 337 checks passed
@alexey-milovidov alexey-milovidov deleted the coverage branch November 16, 2023 22:23
@alexey-milovidov
Member Author

> I will do subsequent PRs checking if we can enable coverage by default for some builds.

This: #58792
