
Conversation

CheSema
Member

@CheSema CheSema commented Aug 23, 2024

The rule: a write buffer has to be either finalized or canceled.
The only exception to this rule: destroying a write buffer while an uncaught exception unwinds the stack.

The goal is to make things easier.
Once we are sure that we always explicitly finalize the buffers, we can make all buffers auto-cancelable in the d-tor. This reduces the number of code lines.
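For illustration, a minimal C++ sketch of that contract, with illustrative names only (the real WriteBuffer API differs in detail):

```cpp
#include <cassert>
#include <exception>

class WriteBufferSketch
{
public:
    ~WriteBufferSketch()
    {
        /// The d-tor may only see a finalized or canceled buffer,
        /// except while an uncaught exception unwinds the stack.
        assert(finalized || canceled || std::uncaught_exceptions() > 0);
    }

    void finalize() { finalized = true; }       // flush remaining data, then mark
    void cancel() noexcept { canceled = true; } // drop buffered data, then mark

private:
    bool finalized = false;
    bool canceled = false;
};

void writeAndFinish(WriteBufferSketch & buf)
{
    try
    {
        // ... write the payload ...
        buf.finalize(); // success path: finalize explicitly
    }
    catch (...)
    {
        buf.cancel();   // error path: cancel explicitly, then rethrow
        throw;
    }
}
```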

I simplified a lot of the cases where the server tries to send an exception error to the client over HTTP.
The logic in those places is really simple:
We have some HTTP connection; it is usually wrapped inside WriteBufferFromHTTPServerResponse.
If we have not sent anything to the socket yet, we can report the error gracefully by sending a proper HTTP code that indicates the error, proper headers with some details (X-ClickHouse-Exception-Code), and the full details with a stack trace as the body. That makes sure that the client sees the error correctly.
If ClickHouse sends the data in the JSON format, the error is embedded into the message. The client has to parse it correctly.
If some data has already been sent and the format is not JSON, we have the most dangerous situation: the partial data could be parsed as a full, valid response on the client. We have to prevent this.
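A condensed sketch of that three-way decision, with hypothetical names (the real logic lives in the HTTP handlers and WriteBufferFromHTTPServerResponse):

```cpp
// Hypothetical names; this only illustrates the decision described above.
enum class ErrorReporting
{
    HttpError,      // nothing sent yet: status code, X-ClickHouse-Exception-Code, body
    EmbedInJson,    // JSON format: the error becomes part of the message itself
    BreakTransport  // partial non-JSON data: make the response visibly invalid
};

ErrorReporting chooseErrorReporting(bool headers_sent, bool format_is_json)
{
    if (!headers_sent)
        return ErrorReporting::HttpError;
    if (format_is_json)
        return ErrorReporting::EmbedInJson;
    return ErrorReporting::BreakTransport;
}
```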

I made the HTTP client truly fail when the server faces an error.
I achieved that by breaking the HTTP transport layer: ClickHouse sends the error details and the stack trace, then closes the HTTP connection in a way that breaks the internal HTTP protocol.
ClickHouse never answers with plain data of unknown size; it responds either with a Content-Length header or with a Transfer-Encoding: chunked header. For Transfer-Encoding: chunked, ClickHouse does not send the final empty chunk. For Content-Length, ClickHouse makes sure that the socket is closed and the last byte (or more) is never sent.
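To show what this means on the wire, here is a simplified chunk encoder (an illustration of HTTP/1.1 chunked framing, not the actual ClickHouse implementation):

```cpp
#include <cstdio>
#include <string>

// Each chunk is "<size in hex>\r\n<data>\r\n".
std::string encodeChunk(const std::string & data)
{
    char size_hex[32];
    std::snprintf(size_hex, sizeof(size_hex), "%zx", data.size());
    return std::string(size_hex) + "\r\n" + data + "\r\n";
}

// Success path: ...chunks..., then the terminating empty chunk "0\r\n\r\n".
// Error path:   send the exception text as a regular chunk, then close the
//               socket WITHOUT "0\r\n\r\n"; a compliant client must treat
//               the body as truncated and report an error.
```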

Changelog category (leave one):

  • Improvement

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

A write buffer has to be canceled or finalized explicitly. Exceptions break the HTTP protocol in order to alert the client about the error.

Documentation entry for user-facing changes

  • Documentation is written (mandatory for new features)

Information about CI checks: https://clickhouse.com/docs/en/development/continuous-integration/

CI Settings (Only check the boxes if you know what you are doing):

  • Allow: All Required Checks
  • Allow: Stateless tests
  • Allow: Stateful tests
  • Allow: Integration Tests
  • Allow: Performance tests
  • Allow: All Builds
  • Allow: batch 1, 2 for multi-batch jobs
  • Allow: batch 3, 4, 5, 6 for multi-batch jobs

  • Exclude: Style check
  • Exclude: Fast test
  • Exclude: All with ASAN
  • Exclude: All with TSAN, MSAN, UBSAN, Coverage
  • Exclude: All with aarch64, release, debug

  • Run only fuzzers related jobs (libFuzzer fuzzers, AST fuzzers, etc.)
  • Exclude: AST fuzzers

  • Do not test
  • Woolen Wolfdog
  • Upload binaries for special builds
  • Disable merge-commit
  • Disable CI cache

@robot-ch-test-poll4 robot-ch-test-poll4 added the pr-improvement Pull request with some product improvements label Aug 23, 2024
@robot-clickhouse
Member

robot-clickhouse commented Aug 23, 2024

This is an automated comment for commit 7b37bdd with a description of existing statuses. It's updated for the latest CI run.

❌ Click here to open a full report in a separate page

| Check name | Description | Status |
|---|---|---|
| Flaky tests | Checks if new added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer, and additional randomization of thread scheduling. Integration tests are run up to 10 times. If at least once a new test has failed, or was too long, this check will be red. We don't allow flaky tests, read the doc | ❌ failure |

Successful checks

| Check name | Description | Status |
|---|---|---|
| AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parenthesis. If it fails, ask a maintainer for help | ✅ success |
| Builds | There's no description for the check yet, please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ✅ success |
| ClickBench | Runs [ClickBench](https://github.com/ClickHouse/ClickBench/) with instant-attach table | ✅ success |
| Compatibility check | Checks that clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help | ✅ success |
| Docker keeper image | The check to build and optionally push the mentioned image to docker hub | ✅ success |
| Docker server image | The check to build and optionally push the mentioned image to docker hub | ✅ success |
| Fast test | Normally this is the first check that is ran for a PR. It builds ClickHouse and runs most of stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here | ✅ success |
| Install packages | Checks that the built packages are installable in a clear environment | ✅ success |
| Integration tests | The integration tests report. In parenthesis the package type is given, and in square brackets are the optional part/total tests | ✅ success |
| Performance Comparison | Measure changes in query performance. The performance test report is described in detail here. In square brackets are the optional part/total tests | ✅ success |
| Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | ✅ success |
| Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | ✅ success |
| Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors | ✅ success |
| Style check | Runs a set of checks to keep the code style clean. If some of tests failed, see the related log from the report | ✅ success |
| Unit tests | Runs the unit tests for different release types | ✅ success |
| Upgrade check | Runs stress tests on server version from last release and then tries to upgrade it to the version from the PR. It checks if the new server can successfully startup without any errors, crashes or sanitizer asserts | ✅ success |

@CheSema CheSema force-pushed the chesema-merge-wb branch 2 times, most recently from 6465088 to 6283287 Compare September 6, 2024 13:42
@CheSema CheSema changed the title write buffer has to be canceled or finalized explicitly , no exceptions count [WIP] no auto finalization in write buffer destructors Sep 6, 2024
@CheSema CheSema force-pushed the chesema-merge-wb branch 6 times, most recently from ed7a3d8 to 950ec61 Compare September 12, 2024 13:18
@CheSema CheSema force-pushed the chesema-merge-wb branch 2 times, most recently from 8238141 to 53528ba Compare September 17, 2024 11:45
@CheSema
Member Author

CheSema commented Sep 19, 2024

I found out that JSONColumnsBlockOutputFormatBase could corrupt memory.
In c-tor:

ostr = OutputFormatWithUTF8ValidationAdaptor::getWriteBufferPtr();

But the base class OutputFormatWithUTF8ValidationAdaptor resets that object in the method resetFormatter:

validating_ostr = std::make_unique<WriteBufferValidUTF8>(*Base::getWriteBufferPtr());

JSONColumnsBlockOutputFormatBase never reassigns ostr after it is freed.

I fixed it here by introducing a JSONColumnsBlockOutputFormatBase::resetFormatterImpl override.
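A condensed, self-contained sketch of the bug and the fix (stand-in types, heavily simplified from the real classes):

```cpp
#include <memory>

struct WriteBuffer {};
struct WriteBufferValidUTF8 : WriteBuffer { explicit WriteBufferValidUTF8(WriteBuffer &) {} };

struct AdaptorSketch // stands in for OutputFormatWithUTF8ValidationAdaptor
{
    explicit AdaptorSketch(WriteBuffer & out_) : out(out_)
    {
        validating_ostr = std::make_unique<WriteBufferValidUTF8>(out);
    }
    virtual ~AdaptorSketch() = default;

    WriteBuffer * getWriteBufferPtr() { return validating_ostr.get(); }

    void resetFormatter()
    {
        // The old buffer is destroyed here; any pointer cached from it dangles.
        validating_ostr = std::make_unique<WriteBufferValidUTF8>(out);
        resetFormatterImpl(); // the fix adds this hook for derived classes
    }

    virtual void resetFormatterImpl() {}

    WriteBuffer & out;
    std::unique_ptr<WriteBufferValidUTF8> validating_ostr;
};

struct JSONColumnsSketch : AdaptorSketch // stands in for JSONColumnsBlockOutputFormatBase
{
    explicit JSONColumnsSketch(WriteBuffer & out_) : AdaptorSketch(out_)
    {
        ostr = getWriteBufferPtr(); // cached once in the c-tor
    }

    // Without this override, `ostr` kept pointing at the freed buffer after
    // resetFormatter(); with it, the cached pointer is refreshed.
    void resetFormatterImpl() override { ostr = getWriteBufferPtr(); }

    WriteBuffer * ostr = nullptr;
};
```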

@CheSema CheSema force-pushed the chesema-merge-wb branch 2 times, most recently from 6327449 to e0236ef Compare September 23, 2024 12:18
@CheSema CheSema changed the title [WIP] no auto finalization in write buffer destructors [WIP] no auto write buffer finalization in destructors Sep 23, 2024
@CheSema CheSema force-pushed the chesema-merge-wb branch 3 times, most recently from f212ad8 to 3dc3fc5 Compare September 25, 2024 13:41
@CheSema CheSema changed the title [WIP] no auto write buffer finalization in destructors no auto write buffer finalization in destructors Sep 25, 2024
@CheSema CheSema marked this pull request as ready for review September 25, 2024 14:38
@alexkats alexkats self-assigned this Sep 25, 2024
@CheSema CheSema force-pushed the chesema-merge-wb branch 2 times, most recently from a9bb99d to a4ccfb5 Compare September 26, 2024 08:28
baibaichen added a commit to Kyligence/gluten that referenced this pull request Nov 28, 2024
baibaichen added a commit to Kyligence/gluten that referenced this pull request Nov 28, 2024
baibaichen added a commit to Kyligence/gluten that referenced this pull request Nov 28, 2024
baibaichen added a commit to Kyligence/gluten that referenced this pull request Nov 29, 2024
…tors](ClickHouse/ClickHouse#68800)

- Make LocalPartitionWriter::evictPartitions called, e.g. set(GlutenConfig.COLUMNAR_CH_SHUFFLE_SPILL_THRESHOLD.key, (1024*1024).toString)
baibaichen added a commit to apache/incubator-gluten that referenced this pull request Nov 30, 2024
* [GLUTEN-1632][CH]Daily Update Clickhouse Version (20241129)

* Fix Build AND UT Due to [Added cache for primary index](ClickHouse/ClickHouse#72102)

* Fix Build and UT due to [no auto write buffer finalization in destructors](ClickHouse/ClickHouse#68800)

- Make LocalPartitionWriter::evictPartitions called, e.g. set(GlutenConfig.COLUMNAR_CH_SHUFFLE_SPILL_THRESHOLD.key, (1024*1024).toString)

* Fix Build due to [Save several minutes of build time](ClickHouse/ClickHouse#72046)

* Fix Benchmark Build due to [Scatter blocks in hash join without copying](ClickHouse/ClickHouse#67782)

(cherry picked from commit 8d566d6a8b8785e4072ffd6f774eb83b07ac3d8d)

* Fix Benchmark Build

* Fix endless loop due to ClickHouse/ClickHouse#70598

* [Refactor #8100] using CHConf.setCHConfig()

* fix style

---------

Co-authored-by: kyligence-git <gluten@kyligence.io>
Co-authored-by: Chang Chen <baibaichen@gmail.com>
@k-morozov
Contributor

Hi @CheSema. I found the header X-ClickHouse-Exception-Code in the response:

~$ curl -v -sS -i 'http://localhost:8123/?max_execution_time=0.1' -d "CREATE DATABASE test_db ON CLUSTER default" -H 'Content-Type: text/plain'
*   Trying 127.0.0.1:8123...
* Connected to localhost (127.0.0.1) port 8123 (#0)
...
> POST /?max_execution_time=0.1 HTTP/1.1
> Host: localhost:8123
> User-Agent: curl/7.81.0
> Accept: */*
> Content-Type: text/plain
> Content-Length: 42
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 408 Request Time-out
HTTP/1.1 408 Request Time-out
< Date: Tue, 17 Dec 2024 12:20:25 GMT
Date: Tue, 17 Dec 2024 12:20:25 GMT
< Connection: Keep-Alive
Connection: Keep-Alive
< Content-Type: text/tab-separated-values; charset=UTF-8
Content-Type: text/tab-separated-values; charset=UTF-8
...
< Transfer-Encoding: chunked
Transfer-Encoding: chunked
< X-ClickHouse-Query-Id: ede6cd51-8aab-4c7e-a6e3-362ec6aa16eb
X-ClickHouse-Query-Id: ede6cd51-8aab-4c7e-a6e3-362ec6aa16eb
< X-ClickHouse-Format: TabSeparated
X-ClickHouse-Format: TabSeparated
...
< X-ClickHouse-Exception-Code: 159
X-ClickHouse-Exception-Code: 159
< Keep-Alive: timeout=3, max=9999
Keep-Alive: timeout=3, max=9999
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","result_rows":"0","result_bytes":"0","elapsed_ns":"103799739"}
X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","result_rows":"0","result_bytes":"0","elapsed_ns":"103799739"}

< 
Code: 159. DB::Exception: Timeout exceeded: elapsed 0.10283398 seconds, maximum: 0.1. (TIMEOUT_EXCEEDED) (version 24.11.1.2557 (official build))
* Connection #0 to host localhost left intact

However, I didn't find it in another case:

~$ curl -v -sS -i 'http://localhost:8123/?max_execution_time=1' -d "CREATE DATABASE test_db ON CLUSTER default" -H 'Content-Type: text/plain'
*   Trying 127.0.0.1:8123...
* Connected to localhost (127.0.0.1) port 8123 (#0)
...
> POST /?max_execution_time=1 HTTP/1.1
> Host: localhost:8123
...
> User-Agent: curl/7.81.0
> Accept: */*
> Content-Type: text/plain
> Content-Length: 42
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Date: Tue, 17 Dec 2024 12:20:36 GMT
Date: Tue, 17 Dec 2024 12:20:36 GMT
< Connection: Keep-Alive
Connection: Keep-Alive
< Content-Type: text/tab-separated-values; charset=UTF-8
Content-Type: text/tab-separated-values; charset=UTF-8
...
< Transfer-Encoding: chunked
Transfer-Encoding: chunked
< X-ClickHouse-Query-Id: 9d8e0e5c-363a-4533-93fe-f179dda68b9a
X-ClickHouse-Query-Id: 9d8e0e5c-363a-4533-93fe-f179dda68b9a
< X-ClickHouse-Format: TabSeparated
X-ClickHouse-Format: TabSeparated
...
< Keep-Alive: timeout=3, max=9999
Keep-Alive: timeout=3, max=9999
< X-ClickHouse-Summary: {"read_rows":"2","read_bytes":"416","written_rows":"0","written_bytes":"0","total_rows_to_read":"3","result_rows":"0","result_bytes":"0","elapsed_ns":"174318132"}
X-ClickHouse-Summary: {"read_rows":"2","read_bytes":"416","written_rows":"0","written_bytes":"0","total_rows_to_read":"3","result_rows":"0","result_bytes":"0","elapsed_ns":"174318132"}

< 
host1	9440	82	Code: 82. DB::Exception: Database test_db already exists. (DATABASE_ALREADY_EXISTS) (version 24.11.1.2557 (official build))	2	0
host2	9440	82	Code: 82. DB::Exception: Database test_db already exists. (DATABASE_ALREADY_EXISTS) (version 24.11.1.2557 (official build))	1	0
__exception__
Code: 159. DB::Exception: Timeout exceeded: elapsed 181.022766128 seconds, maximum: 1. (TIMEOUT_EXCEEDED) (version 24.11.1.2557 (official build))
* transfer closed with outstanding read data remaining
* Closing connection 0
curl: (18) transfer closed with outstanding read data remaining

I have a cluster with 2 hosts and 1 fake host (which doesn't exist). I expected to receive the response with the header X-ClickHouse-Exception-Code in both cases.

The reason for the question is a problem in requests (the Python lib) for the second example. I encounter an error for this request:

.venv/lib/python3.10/site-packages/requests/models.py", line 763, in generate
          raise ChunkedEncodingError(e)
      requests.exceptions.ChunkedEncodingError: ("Connection broken: InvalidChunkLength(got length b'', 0 bytes read)", InvalidChunkLength(got length b'', 0 bytes read))

I’m encountering an issue with the response in the second case, where I'm not receiving the expected header. Could you please help me troubleshoot this problem? Any suggestions you might have would be greatly appreciated.

@CheSema
Member Author

CheSema commented Dec 17, 2024

ClickHouse had already started to send the response to the client: the first HTTP line had been sent and the headers had been closed. The TabSeparated format writer cannot inject exceptions into the response. The Timeout exceeded exception occurred while the data was being sent. The server has no choice left other than to break the HTTP invariants; it has to make the client understand that an error has occurred.
Before this PR the exception message was sent along with the data, and it was almost impossible to distinguish a good result from an error. Now you get ChunkedEncodingError, which is definitely not an OK result.

You could use formats that are able to write errors with the data, like the JSON format. Or you could enable the settings that turn on buffering for HTTP queries.
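For example (a hedged sketch, not an official API): with the JSON format a mid-stream error is embedded into the body as an "exception" field, so the client can detect it after the transfer completes. A real client should parse the JSON; the substring check below only shows the idea:

```cpp
#include <string>

// Naive illustration: a proper client parses the JSON instead.
bool bodyCarriesException(const std::string & json_body)
{
    return json_body.find("\"exception\"") != std::string::npos;
}
```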

@k-morozov
Contributor

You could use formats that are able to write errors with the data, like the JSON format

Unfortunately, it's not suitable for CREATE TABLE.

Now you get ChunkedEncodingError

How can we tell the difference between the ChunkedEncodingError from my second example and a ChunkedEncodingError caused by something else (for example, a network error)?

@CheSema
Member Author

CheSema commented Dec 22, 2024

How can we tell the difference between the ChunkedEncodingError from my second example and a ChunkedEncodingError caused by something else (for example, a network error)?

You can't distinguish those cases by the exception on your client alone. They are both errors.

It is possible to extract some debug info from the last HTTP chunk, but that should not define your further actions. You have to treat it as an error, just as when a network error occurs and the state of the operation is unknown.

@larry-cdn77
Contributor

larry-cdn77 commented Jan 1, 2025

Hello, I am still (version 25.1) seeing the final chunk in a response with an exception (a SELECT * FROM foo.bar query) – is that expected?

(screenshot: captured response showing the final chunk)

@CheSema
Member Author

CheSema commented Jan 2, 2025

Hello, I am still (version 25.1) seeing the final chunk in a response with an exception (a SELECT * FROM foo.bar query) – is that expected?

You have the HTTP header X-ClickHouse-Exception-Code. If that HTTP header has been sent, then there is no need to break the HTTP protocol.

Check this comment on how to verify the correctness of an HTTP response:
#46426 (comment)

@larry-cdn77
Contributor

larry-cdn77 commented Jan 2, 2025

You have the HTTP header X-ClickHouse-Exception-Code. If that HTTP header has been sent, then there is no need to break the HTTP protocol.

Check this comment on how to verify the correctness of an HTTP response: #46426 (comment)

I see, the server may not get a chance to send X-ClickHouse-Exception-Code

That explains the various tests that have max execution time set

The three-point test (code, header, transmission) in that comment would actually be useful as documentation – I'm happy to raise a PR for it
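For reference, a sketch of that three-point test as code (names are mine, not from the linked comment):

```cpp
// A response is trustworthy only if the status is OK, no exception header
// arrived, and the body was transmitted completely (terminating chunk seen,
// or Content-Length satisfied).
bool responseIsTrustworthy(int status_code, bool has_exception_header, bool body_complete)
{
    return status_code == 200 && !has_exception_header && body_complete;
}
```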

baibaichen added a commit to baibaichen/gluten that referenced this pull request Feb 14, 2025
baibaichen added a commit to baibaichen/gluten that referenced this pull request Feb 17, 2025
baibaichen added a commit to baibaichen/gluten that referenced this pull request Feb 17, 2025
baibaichen added a commit to baibaichen/gluten that referenced this pull request Feb 19, 2025
baibaichen added a commit to baibaichen/gluten that referenced this pull request Feb 20, 2025
baibaichen added a commit to baibaichen/gluten that referenced this pull request Feb 20, 2025
baibaichen added a commit to baibaichen/gluten that referenced this pull request Feb 20, 2025
baibaichen added a commit to baibaichen/gluten that referenced this pull request Feb 27, 2025
baibaichen added a commit to apache/incubator-gluten that referenced this pull request Feb 28, 2025
…les (#8847)

* fix for ClickHouse/ClickHouse#68800

* Refactor: Move gluten_test_util.h

* Refactor: write(DB::Block & block) => write(const DB::Block & block)

* [Refactor] Adding `formatSettings()`, so we needn't define friend class VectorizedParquetBlockInputFormat for VectorizedParquetRecordReader

* [Test] set path to "./" instead of default path "/"

* [Test] Add IcebergTest

* [Refactor] Remove move `BlockUtil::buildHeader` and rename to `toSampleBlock`

* [Feature] EqualityDeleteFileReader

* [Refactor] Move BaseReader and its child class to FileReader.cpp/.h

* [Feature] Add IcebergReader

* [Feature] Add EqualityDeleteActionBuilder to support notIn when there is only one delete column

* cmake fix

* style

* fix per review