Executing the same SQL repeatedly gets an OOM error #39163

Closed
aytrack opened this issue Nov 15, 2022 · 3 comments

aytrack (Contributor) commented Nov 15, 2022

Bug Report

Please answer these questions before submitting your issue. Thanks!

1. Minimal reproduce step (Required)

use test;
drop table if exists t1, t2;
create table t1(a int, index(a));
create table t2(a int, index(a));
insert t1 values(1), (2);
insert t2 values (1), (1), (2), (2);

set @@tidb_mem_quota_query=8000;
select /*+ inl_merge_join(t2)*/ * from t1 join t2 on t1.a = t2.a;  -- execute many times.
...
select /*+ inl_merge_join(t2)*/ * from t1 join t2 on t1.a = t2.a;

2. What did you expect to see? (Required)

The query should always execute successfully.

3. What did you see instead (Required)

Got an OOM error after a number of executions:

TiDB root@127.0.0.1:test>  select /*+ inl_merge_join(t2)*/ * from t1 join t2 on t1.a = t2.a;
(1105, 'Out Of Memory Quota![conn_id=2199023255959]')

4. What is your TiDB version? (Required)

***************************[ 1. row ]***************************
tidb_version() | Release Version: v6.4.0
Edition: Community
Git Commit Hash: cf36a9ce2fe1039db3cf3444d51930b887df18a1
Git Branch: heads/refs/tags/v6.4.0
UTC Build Time: 2022-11-13 05:15:26
GoVersion: go1.19.2
Race Enabled: false
TiKV Min Version: 6.2.0-alpha
Check Table Before Drop: false
Store: unistore
xuyifangreeneyes (Contributor) commented Nov 15, 2022

I did a binary search of the commit history and found that the issue starts after #38607. cc @XuHuaiyu

keeplearning20221 (Contributor) commented Nov 16, 2022

This meets the design expectations of chunk reuse. Chunk reuse caches previously allocated chunks and hands them out to subsequent SQL statements, and a cached chunk may be larger than the capacity actually requested (the maximum capacity is bounded by tidb_max_chunk_size).
If very little memory is available, execute set tidb_enable_reuse_chunk=OFF to disable the feature.
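
A minimal sketch of that workaround, assuming the session variables behave as described in this thread (not verified against other TiDB versions):

set @@tidb_enable_reuse_chunk=OFF;  -- disable chunk reuse for this session
set @@tidb_mem_quota_query=8000;
select /*+ inl_merge_join(t2)*/ * from t1 join t2 on t1.a = t2.a;  -- repeated executions are then expected to stay within the quota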

xuyifangreeneyes (Contributor)

As @keeplearning20221 said, this is not a bug, so I am closing it.
