Running a GROUP BY over 200 million rows gets killed #5641
Comments
Can you please try with 0.6.1?
Sorry, my version is 0.6.1; I wrote the wrong version.
Whether I connect to my .duckdb file or read the Parquet file, the process gets killed either way.
Can you please try to make this bug report reproducible, e.g. by uploading the Parquet file somewhere? Thanks
The DuckDB file is 3.1 GB and the Parquet file is even larger, at 4.1 GB. I don't know how to share such a big file with you.
Well, you could try to reduce the dataset to make it smaller (e.g. by dropping columns not used in the query)?
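One possible way to do that from the Python client, sketched below; the column and file names are hypothetical placeholders, and the export uses DuckDB's `COPY ... TO` with Parquet output:

```python
import duckdb

con = duckdb.connect()  # an in-memory connection is enough for a one-off export
# Keep only the columns the failing query actually touches.
# 'col_a', 'col_b', 'big.parquet', and 'small.parquet' are hypothetical placeholders.
con.execute("""
    COPY (SELECT col_a, col_b FROM read_parquet('big.parquet'))
    TO 'small.parquet' (FORMAT PARQUET)
""")
```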
The “killed” indicates to me this is most likely the system running out of memory and triggering the OOM killer. Note that the database size is the compressed size; the GROUP BY requires the entire uncompressed result of the query to fit in memory. I would also try lowering the memory limit. I have seen the OOM killer get triggered on Linux systems before the actual system RAM limit was reached, particularly in setups with multiple NUMA nodes.
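For instance, a lower limit can be set from the Python client before running the query. A minimal sketch, assuming a hypothetical database path and an arbitrary limit value (whether the aggregation can actually spill to disk depends on the DuckDB version and operator):

```python
import duckdb

con = duckdb.connect('my.duckdb')  # hypothetical database file
# Cap DuckDB's memory usage well below physical RAM so the OS OOM killer
# never fires; '20GB' is just an example value.
con.execute("SET memory_limit = '20GB'")
# Optionally give DuckDB a directory to spill intermediate data to.
con.execute("SET temp_directory = '/tmp/duckdb_spill'")
```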
I tried with a smaller number of rows, and this SQL runs without problems.
This issue is stale because it has been open for 90 days with no activity. Remove the stale label or comment, or it will be closed in 30 days.
This issue was closed because it has been stale for 30 days with no activity.
What happens?
Killed
I checked: the Parquet file is 3.1 GB, and memory_limit is set to 53.9 GB in DuckDB.
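A small sketch of how those two numbers can be checked from the Python client (the Parquet path is a hypothetical placeholder):

```python
import os
import duckdb

# File size on disk, in GiB ('data.parquet' is a hypothetical placeholder).
print(os.path.getsize('data.parquet') / 1024**3)

con = duckdb.connect()
# current_setting() reports the active value of a DuckDB configuration option.
print(con.execute("SELECT current_setting('memory_limit')").fetchone()[0])
```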
To Reproduce
OS: CentOS 7
DuckDB Version: 0.6.1
DuckDB Client: Python
Full Name: Changzhen Wang
Affiliation: linezonedata.com
Have you tried this on the latest master branch? Have you tried the steps to reproduce? Do they include all relevant data and configuration? Does the issue you report still appear there?