Memory limit (total) exceeded during insert leads to partial write
#11546
Comments
Insertion of one block into a table of the MergeTree family is atomic. This is true in all ClickHouse versions.
I didn't dive into the Java ClickHouse driver, but I suppose this code should insert in one block:

```java
try (Connection connection = dataSource.getConnection()) {
    // create batch insert statement
    PreparedStatement statement = connection.prepareStatement(this.insertSql);
    populateInsertStatement(recordsAsJson, statement);
    return Arrays.stream(statement.executeBatch()).sum();
} catch (SQLException e) {
    // omitted
}
```

We do inserts once per minute, and a single block can easily be detected by the same value in […]. We bumped into the problem described above only once, when we got an out-of-memory error.
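(For readers who want to check the same thing: one way to see how an insert was split is to inspect `system.parts` right after the insert. This is a minimal sketch, not from the thread; the database and table names `default`/`my_table` are illustrative, and `dataSource` reuses the variable from the snippet above. Each insert block produces one part per partition it touches, so a batch that stayed in a single block shows up as a single new part.)

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch: list the most recently modified parts of a table so you can
// see how many parts (and hence blocks) the last insert produced.
try (Connection connection = dataSource.getConnection();
     Statement stmt = connection.createStatement();
     ResultSet rs = stmt.executeQuery(
         "SELECT partition, name, rows, min_block_number, max_block_number " +
         "FROM system.parts " +
         "WHERE database = 'default' AND table = 'my_table' AND active " +
         "ORDER BY modification_time DESC LIMIT 10")) {
    while (rs.next()) {
        System.out.printf("partition=%s part=%s rows=%d blocks=[%d..%d]%n",
            rs.getString("partition"), rs.getString("name"),
            rs.getLong("rows"), rs.getLong("min_block_number"),
            rs.getLong("max_block_number"));
    }
}
```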
@dmitryikh Check whether all 4 records belong to 1 partition (the table's PARTITION BY). Partitions split an insert into several blocks.
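(One way to act on this advice, sketched by the editor rather than taken from the thread: group the batch by partition key on the client, so each `executeBatch` targets a single partition and therefore stays in one block. `Row`, `rows`, and `insertBatch` are hypothetical names standing in for the issue's JSON records and insert helper.)

```java
import java.time.LocalDate;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical row type standing in for the issue's JSON records.
record Row(LocalDate startDate, String payload) {}

// Group rows by the partition key (here the starting date), then send
// each group as its own batch: every insert now touches exactly one
// partition, so each batch stays a single, atomic block.
Map<LocalDate, List<Row>> byPartition = rows.stream()
        .collect(Collectors.groupingBy(Row::startDate));
byPartition.values().forEach(group -> insertBatch(group));
```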
@den-crane, yes, you're right! I missed that. I'm going to close the issue.
Hello, I have a similar issue. I am using the official JDBC driver to save 1,500K rows, with PARTITION BY on the starting date (it's a two-day data set). The code: […]
I tried to set […]. Any idea? Do I have to create a new issue for that?
OK, I fixed my issue by using the batch API of the JDBC driver instead of the writer one. I still don't understand the error log, though (maybe there is overhead when writing rows one by one).
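(For anyone landing here, a minimal sketch of the batch-style insert being described; the connection URL and the table `events(start_date, value)` are illustrative, not from the thread.)

```java
import java.sql.Connection;
import java.sql.Date;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.time.LocalDate;

try (Connection conn = DriverManager.getConnection(
         "jdbc:clickhouse://localhost:8123/default");
     PreparedStatement ps = conn.prepareStatement(
         "INSERT INTO events (start_date, value) VALUES (?, ?)")) {
    for (int i = 0; i < 1000; i++) {
        ps.setDate(1, Date.valueOf(LocalDate.now()));
        ps.setLong(2, i);
        ps.addBatch(); // rows are buffered client-side...
    }
    ps.executeBatch(); // ...and sent to the server as one insert
}
```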
@RonanMorgan you can't set min_insert_block_size_rows / max_insert_block_size on the server. They should be set as query properties in the JDBC query.
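(A hedged sketch of what "set as query properties" can look like with the JDBC driver. Whether plain connection properties are forwarded to the server as ClickHouse settings, and under exactly these names, depends on the driver version, so treat the parameter names below as an assumption to verify against your driver's documentation.)

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

// Pass the block-size settings per connection/query rather than in the
// server config. NOTE: these property names are an assumption; check
// your JDBC driver's supported settings before relying on them.
Properties props = new Properties();
props.setProperty("min_insert_block_size_rows", "1500000");
props.setProperty("max_insert_block_size", "1500000");

try (Connection conn = DriverManager.getConnection(
         "jdbc:clickhouse://localhost:8123/default", props)) {
    // ... run the batched INSERT on this connection ...
}
```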
I met the same error; the server version is 20.4.2. When I insert 400MB+ of data into ONE partition of a MergeTree table, the Memory limit (total) exception happens. The client retries, but in the end we got more data in ClickHouse than in the data source (Hive). Hoping for your response, thanks.
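(A side note on the retry symptom, added by the editor and not from the thread: for Replicated*MergeTree tables, ClickHouse deduplicates an insert block that is identical to a recently inserted one, so retrying the *exact same* batch should not produce duplicates. A sketch of a retry loop relying on that, reusing the names from the snippet quoted earlier; the assumption is a replicated table with insert deduplication enabled, which is its default there.)

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Sketch: retry the identical batch on failure. Replicated*MergeTree
// keeps checksums of recent insert blocks and silently skips a
// re-inserted identical block, which prevents duplicates on retry.
int attempts = 0;
while (true) {
    try (Connection conn = dataSource.getConnection();
         PreparedStatement ps = conn.prepareStatement(insertSql)) {
        populateInsertStatement(recordsAsJson, ps); // same rows, same order
        ps.executeBatch();
        break;
    } catch (SQLException e) {
        if (++attempts >= 3) throw e;
        // back off here, then retry with the exact same block
    }
}
```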
Describe the bug
I use the Java ClickHouse connector to insert data into ClickHouse. Once, I inserted 4 rows and got this answer from ClickHouse: […]
(The memory seems to have been consumed by other queries.)
After that I found that 3 of the 4 records were inserted, and one (the last in ORDER BY sort order) was not.
It seems that ClickHouse violates block-insert atomicity in case of out-of-memory errors?
IMHO, block writes should be atomic: either all rows are inserted, or none are.