
Fixes for storage S3Queue #54422

Merged
merged 42 commits into ClickHouse:master on Oct 18, 2023

Conversation

@kssenii (Member) commented Sep 7, 2023:

Changelog category (leave one):

  • Backward Incompatible Change

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

Rewrote storage S3Queue completely: changed the way we keep information in ZooKeeper, which allows making fewer ZooKeeper requests; added caching of ZooKeeper state in cases when we know the state will not change; made the polling from S3 less aggressive; changed the way the TTL and the maximum size of the tracked-files set are maintained, which is now a background process. Added system.s3queue and system.s3queue_log tables.
Closes #54998.

@robot-clickhouse added the pr-backward-incompatible (Pull request with backwards incompatible changes) label on Sep 7, 2023
@robot-clickhouse (Member) commented Sep 7, 2023:

This is an automated comment for commit f90e31e with a description of existing statuses. It is updated for the latest CI run.

❌ Click here to open a full report in a separate page

Successful checks

Check name | Description | Status
AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parentheses. If it fails, ask a maintainer for help. | ✅ success
CI running | A meta-check that indicates the running CI. Normally it is in the success or pending state. A failed status indicates some problems with the PR. | ✅ success
ClickHouse build check | Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The cmake options can be found in the build log by grepping for cmake. Use these options and follow the general build process. | ✅ success
Compatibility check | Checks that the clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help. | ✅ success
Docker image for servers | The check to build and optionally push the mentioned image to Docker Hub. | ✅ success
Docs Check | Builds and tests the documentation. | ✅ success
Fast test | Normally this is the first check that is run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here. | ✅ success
Flaky tests | Checks whether newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer and additional randomization of thread scheduling. Integration tests are run up to 10 times. If a new test failed at least once, or ran too long, this check will be red. We don't allow flaky tests; read the doc. | ✅ success
Install packages | Checks that the built packages are installable in a clean environment. | ✅ success
Mergeable Check | Checks if all other necessary checks are successful. | ✅ success
Performance Comparison | Measures changes in query performance. The performance test report is described in detail here. In square brackets are the optional part/total tests. | ✅ success
Push to Dockerhub | The check for building and pushing the CI-related docker images to Docker Hub. | ✅ success
SQLTest | There's no description for the check yet, please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS. | ✅ success
SQLancer | Fuzzing tests that detect logical bugs with the SQLancer tool. | ✅ success
Sqllogic | Runs ClickHouse on the sqllogic test set against SQLite and checks that all statements pass. | ✅ success
Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc. | ✅ success
Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors. | ✅ success
Style Check | Runs a set of checks to keep the code style clean. If some of the tests fail, see the related log in the report. | ✅ success
Unit tests | Runs the unit tests for different release types. | ✅ success
Upgrade check | Runs stress tests on the server version from the last release and then tries to upgrade it to the version from the PR. It checks whether the new server can start up successfully without errors, crashes or sanitizer asserts. | ✅ success

Failed checks

Check name | Description | Status
Integration tests | The integration tests report. In parentheses the package type is given, and in square brackets are the optional part/total tests. | ❌ failure
Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc. | ❌ failure

@robot-ch-test-poll3 added the pr-status-⏳ (PR with some pending statuses) label on Sep 14, 2023
@robot-clickhouse-ci-1 added the pr-status-❌ (PR with some error/failure statuses) label and removed the pr-status-⏳ (PR with some pending statuses) label on Sep 14, 2023
@robot-clickhouse-ci-2 added the pr-status-⏳ (PR with some pending statuses) and pr-status-❌ (PR with some error/failure statuses) labels and removed the pr-status-❌ (PR with some error/failure statuses) and pr-status-⏳ (PR with some pending statuses) labels on Sep 15, 2023
Comment on lines +704 to +706
/// It is possible that we created an ephemeral processing node
/// but the session expired and someone else created an ephemeral processing node.
/// To avoid deleting this new node, check processing_id.

Member:

I didn't quite understand when this is possible. Can you explain in more detail?

@kssenii (Member, Author) commented Oct 16, 2023:

Let's say we have clickhouse-server1 and clickhouse-server2 and create an S3Queue table on both servers pointing to the same Keeper path. Say server 1 started processing file1 but its Keeper session expired before it finished. The expired session leads to expiration of the ephemeral "processing" node. Server 2 sees that there is no "processing", "failed" or "processed" node for file1, so it creates a "processing" node for file1 itself. Then server 1 restores its Keeper connection; it knows from memory that it was processing file1 and did not finalize the state in Keeper, so it tries to finalize it with either a "failed" or "finished" state. But that would be incorrect in the described scenario.
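
To make this concrete, here is a minimal sketch of the processing_id check referenced in the code comment above. The `KeeperClient` type, node path and function name are hypothetical stand-ins for illustration, not the actual ClickHouse zkutil API; a real implementation would also perform the check and the removal as one atomic multi-request rather than two separate calls.

```cpp
#include <iostream>
#include <map>
#include <optional>
#include <string>

/// Hypothetical in-memory stand-in for a ZooKeeper-like client, for illustration only.
struct KeeperClient
{
    std::map<std::string, std::string> nodes;

    std::optional<std::string> tryGet(const std::string & path) const
    {
        auto it = nodes.find(path);
        if (it == nodes.end())
            return std::nullopt;
        return it->second;
    }

    bool tryRemove(const std::string & path) { return nodes.erase(path) > 0; }
};

/// Remove our ephemeral "processing" node only if it still carries our processing_id.
/// If our session expired and another server re-created the node for the same file,
/// the node's data holds a different processing_id, so we must leave it alone.
bool removeProcessingNodeIfOurs(
    KeeperClient & zk, const std::string & path, const std::string & our_processing_id)
{
    const auto data = zk.tryGet(path);
    if (!data)
        return false;                    /// Node already gone (expired with our session).
    if (*data != our_processing_id)
        return false;                    /// Node re-created by another server: leave it.
    return zk.tryRemove(path);
}

int main()
{
    KeeperClient zk;
    /// server2 re-created the node after server1's session expired.
    zk.nodes["/s3queue/processing/file1"] = "server2-id";
    /// server1 must not delete server2's node when it reconnects.
    std::cout << removeProcessingNodeIfOurs(zk, "/s3queue/processing/file1", "server1-id")
              << '\n';                   /// prints 0
}
```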

Member:

So we need to think about deduplication here. If I understood correctly, the solution is to modify the read offset in ZooKeeper (or the number of read rows, see my comments below) after each block read in the source, so that if we cannot modify it because the session expired, we don't insert this block. Right?
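
For concreteness, a rough sketch of one way to read this "persist the offset first, insert only if that succeeded" proposal. The `KeeperClient` stub, the `Block` type, the path and the function name are illustrative assumptions, not the actual ClickHouse interfaces:

```cpp
#include <cstddef>
#include <stdexcept>
#include <string>

/// Hypothetical Keeper-like client: set() throws if the session has expired.
struct KeeperClient
{
    void set(const std::string & path, const std::string & data)
    {
        /// Illustrative stub; a real client would send the request to (Zoo)Keeper.
        if (session_expired)
            throw std::runtime_error("Session expired");
        stored = data;
    }
    bool session_expired = false;
    std::string stored;
};

struct Block { size_t rows = 0; };

/// Persist the advanced offset first; only if that succeeds, hand the block downstream.
/// If the offset cannot be persisted (e.g. session expired), the block is dropped,
/// so a retry from the persisted offset does not produce duplicates.
template <typename PushFn>
bool commitOffsetThenPush(KeeperClient & zk, const std::string & offset_path,
                          size_t & offset, const Block & block, PushFn push)
{
    try
    {
        zk.set(offset_path, std::to_string(offset + block.rows));
    }
    catch (const std::exception &)
    {
        return false;          /// Offset not committed: do not insert this block.
    }
    offset += block.rows;
    push(block);               /// Safe to insert now.
    return true;
}

int main()
{
    KeeperClient zk;
    size_t offset = 0;
    /// Hypothetical per-file offset path, for illustration.
    commitOffsetThenPush(zk, "/s3queue/file1/offset", offset, Block{8192},
                         [](const Block &) { /* insert into the MV pipeline */ });
}
```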

kssenii (Member, Author):

Yeah, there is also the case where we update the counter in Keeper after we read a block but before we push the block to the MV: there could be an exception during the push and we end up with an incorrect state in Keeper. Then, to make things even worse, the Keeper session expires, so we cannot fix the Keeper state without a potential race with another server whose S3Queue starts processing the same file. So again it is not straightforward what to do here.

So (at least for now) I'd rather document that the user is strongly recommended to use a destination table for the S3Queue MV with a table engine that supports deduplication. Then occasional duplicates are not a problem.

Comment on lines +138 to +139
/// Anyway we cannot do anything in case of SIGTERM, so destination table must anyway support deduplication,
/// so we will rely on it here as well.

Member:

Also, in case of an exception during parsing we could have already inserted some data into the destination table, and after retries there will be duplicates. We should definitely add a note about it in the documentation. And I don't think we can solve it by saving the file offsets that we processed, because of formats with metadata and random access like Parquet/ORC/Arrow. Maybe we can store the number of processed rows for each file and, when we start reading the file again, skip the already processed rows by just ignoring them here after reader->pull() (in the future we can also optimize this and add a method skipRows(size_t rows) for input formats, because most formats can skip rows fast enough). Or combine both methods (saving the offset and saving processed rows) and use one of them depending on the format: for CSV/TSV/JSONEachRow/etc. we can save the offset and start reading from it, and for Parquet/ORC/etc. we can save the previously read rows and skip them before returning blocks (for such formats skipping rows can be optimized using their metadata).
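
A minimal sketch of the row-skipping idea under discussion; the `Chunk`/`Reader` types and helper names are hypothetical stand-ins, not the actual ClickHouse `IInputFormat`/`Chunk` classes:

```cpp
#include <cstddef>
#include <optional>
#include <vector>

/// Hypothetical chunk/reader types, for illustration only.
struct Chunk
{
    std::vector<int> rows;                                /// Stand-in for real columns.
    size_t getNumRows() const { return rows.size(); }
    void eraseFrontRows(size_t n) { rows.erase(rows.begin(), rows.begin() + n); }
};

struct Reader
{
    std::vector<Chunk> chunks;
    size_t next = 0;
    std::optional<Chunk> pull()
    {
        if (next == chunks.size())
            return std::nullopt;
        return chunks[next++];
    }
};

/// Re-reading a file after a failed attempt: `already_processed_rows` was persisted
/// per file (e.g. in Keeper), so we silently discard that many rows before emitting data.
template <typename PushFn>
void processFileSkippingProcessedRows(Reader & reader, size_t already_processed_rows, PushFn push)
{
    size_t rows_to_skip = already_processed_rows;
    while (auto chunk = reader.pull())
    {
        if (rows_to_skip >= chunk->getNumRows())
        {
            rows_to_skip -= chunk->getNumRows();          /// Whole chunk was seen last time.
            continue;
        }
        if (rows_to_skip > 0)
        {
            chunk->eraseFrontRows(rows_to_skip);          /// Partially processed chunk.
            rows_to_skip = 0;
        }
        push(*chunk);                                     /// Only unseen rows go downstream.
    }
}

int main()
{
    Reader reader{{Chunk{{1, 2, 3}}, Chunk{{4, 5, 6}}}, 0};
    processFileSkippingProcessedRows(reader, /*already_processed_rows=*/4,
                                     [](const Chunk & c) { (void)c; /* push downstream */ });
}
```

A `skipRows(size_t)` method on input formats, as suggested, could replace the front-erase step and let formats with metadata (Parquet/ORC/Arrow) skip whole row groups cheaply.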

kssenii (Member, Author):

> Maybe we can store the number of processed rows for each file and, when we start reading the file again, skip the already processed rows by just ignoring them here after reader->pull() (in the future we can also optimize this and add a method skipRows(size_t rows) for input formats, because most formats can skip rows fast enough).

Good idea. Though if we update the processed rows count in Keeper for every block, it will result in too many Keeper requests...

> We should definitely add a note about it in the documentation.

Will do 👌🏻 .

@Avogar (Member) commented Oct 17, 2023:

> Though if we update the processed rows count in Keeper for every block, it will result in too many Keeper requests...

Yeah, for sure. Maybe we can do it under a setting disabled by default and tell users that deduplication can result in too many Keeper requests. Or we can try to find another solution (maybe we can ask for other opinions at a weekly meeting or in the dev chat).

Comment on lines +515 to +516
/// This is possible to achieve in case of parallel processing
/// but for local processing we explicitly disable parallel mode and do everything in a single thread

Member:

Can we also create two S3Queue storages with the same zk path on one instance?

Member:

Do we have other problems besides retries with parallel processing in Ordered mode?
If it is only retries, maybe instead of always setting s3queue_processing_threads_num to 1 we can set it to 1 only if retries are enabled? Or even throw an exception if retries are enabled in Ordered mode, with a message that this can break in a distributed scenario (and allow forcing retries with Ordered mode, in which case we set threads_num to 1 and do not care about the distributed case). Or maybe you already know how to solve this problem and we can just keep it as is and fix it later.
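
To illustrate the "throw unless forced, otherwise drop to one thread" option, a small sketch with hypothetical setting names (the real S3Queue settings and their defaults may differ):

```cpp
#include <stdexcept>

enum class S3QueueMode { Ordered, Unordered };

/// Hypothetical settings struct; names are illustrative, not the real S3Queue settings.
struct S3QueueSettings
{
    S3QueueMode mode = S3QueueMode::Unordered;
    unsigned processing_threads_num = 16;
    unsigned loading_retries = 0;
    bool force_retries_with_ordered_mode = false;   /// Hypothetical escape hatch.
};

void validateAndAdjust(S3QueueSettings & settings)
{
    if (settings.mode != S3QueueMode::Ordered || settings.loading_retries == 0)
        return;                                     /// Nothing to restrict.

    if (!settings.force_retries_with_ordered_mode)
        throw std::invalid_argument(
            "Retries in Ordered mode can break ordering in a distributed setup; "
            "set force_retries_with_ordered_mode=1 to accept single-threaded processing");

    /// Retries were explicitly forced: process with a single thread so that a failed
    /// file cannot be overtaken by files processed in parallel.
    settings.processing_threads_num = 1;
}

int main()
{
    S3QueueSettings settings{S3QueueMode::Ordered, 16, /*loading_retries=*/3, /*force=*/true};
    validateAndAdjust(settings);     /// processing_threads_num becomes 1.
}
```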

@kssenii (Member, Author) commented Oct 16, 2023:

> Or maybe you already know how to solve this problem and we can just keep it as is and fix it later.

Yeah, I have an idea. I think this can be solved if Ordered mode were implemented as a combination of the Ordered and Unordered modes. Let's say there is a setting processing_window, equal to, for example, 100. This window defines a range of files. Within this window we process files in parallel and do not process any other files until all files within the window are marked as either unretriably failed or processed. While we process a window of files we track them in Keeper the way we track files in Unordered mode. But once the full window is processed, we change the information in Keeper to contain only max_processed_file (what we aim for in Ordered mode), and go to the next window afterwards.

> Do we have other problems besides retries with parallel processing in Ordered mode?

If the server is terminated with, for example, SIGABRT and we did not finish processing file1, but another thread already processed file2, then on restart we'll think we already processed file1 (even though we didn't). But this issue can also be solved by the idea described above.
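
A rough sketch of the windowed approach described above, with illustrative types, no real Keeper persistence, and files assumed to arrive in sorted order (the sequential loop stands in for parallel scheduling):

```cpp
#include <set>
#include <string>
#include <vector>

/// Illustrative stand-in for the state kept in Keeper.
struct KeeperState
{
    /// Per-file tracking inside the current window, as in Unordered mode.
    std::set<std::string> finished_in_window;   /// processed or unretriably failed
    /// Collapsed state once a window is fully finished, as in Ordered mode.
    std::string max_processed_file;
};

/// Process one window of files: within the window files may be handled in parallel
/// and are tracked individually; no file outside the window is started. Once every
/// file in the window is either processed or unretriably failed, the per-file records
/// are collapsed into a single max_processed_file and the next window can begin.
template <typename ProcessFn>
void processWindow(const std::vector<std::string> & window_files,   /// sorted
                   KeeperState & state, ProcessFn process_file)
{
    for (const auto & file : window_files)       /// In reality: scheduled in parallel.
    {
        process_file(file);                      /// Ends in "processed" or "failed".
        state.finished_in_window.insert(file);
    }

    if (state.finished_in_window.size() == window_files.size())
    {
        state.max_processed_file = window_files.back();    /// Highest file in the window.
        state.finished_in_window.clear();                   /// Keeper now holds only this.
    }
}

int main()
{
    KeeperState state;
    processWindow({"file_000", "file_001", "file_002"}, state,
                  [](const std::string & /*file*/) { /* read and push the file */ });
}
```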

kssenii (Member, Author):

> Yeah, I have an idea. ...

Started implementing it. I think it will be better to put it in a separate PR, after this one is merged, for easier review.

@kssenii (Member, Author) commented Oct 18, 2023:

Integration tests (asan, analyzer) [1/6] — fail: 3

test_zookeeper_config
test_postgresql_replica_database_engine_2 - #55772

Integration tests (release) [4/4] — fail: 1

test_delayed_replica_failover/test.py::test

Integration tests (tsan) [1/6] — fail: 1

test_postgresql_replica_database_engine_2 - #55772

Stateless tests (release, analyzer) — fail: 1

02479_race_condition_between_insert_and_droppin_mv
00992_system_parts_race_condition_zookeeper_long

@kssenii (Member, Author) commented Oct 18, 2023:

Integration tests (release) [4/4] — fail: 1

test_delayed_replica_failover/test.py::test

Stateless tests (release, DatabaseReplicated) [4/4] — fail: 1

00385_storage_file_and_clickhouse-local_app_long

Stateless tests (release, analyzer) — fail: 1

02479_race_condition_between_insert_and_droppin_mv

@kssenii merged commit 4e0122a into ClickHouse:master on Oct 18, 2023
276 of 281 checks passed
Labels
pr-backward-incompatible (Pull request with backwards incompatible changes), pr-status-❌ (PR with some error/failure statuses)
Development

Successfully merging this pull request may close these issues.

S3Queue is producing 1k+ ListBlob calls per second
6 participants