
[fix](set) fix coredump of set op if total data size exceeds 4G #61471

Open

jacktengg wants to merge 2 commits into apache:master from jacktengg:260318-fix-set

Conversation

@jacktengg
Contributor

What problem does this PR solve?

Issue Number: close #xxx

Related PR: #xxx

Problem Summary:
Root Cause Analysis

Core cause: in SetSinkOperatorX::sink(), build_block is overwritten multiple times,
which turns the older entries in the hash table into dangling references.

Problem chain

  1. build_block is overwritten

In set_sink_operator.cpp:52-56:

if (eos || local_state._mutable_block.allocated_bytes() >= BUILD_BLOCK_MAX_SIZE) { // 4GB
    build_block = local_state._mutable_block.to_block(); // overwrites build_block!
    RETURN_IF_ERROR(_process_build_block(local_state, build_block, state));
    local_state._mutable_block.clear();
}

When the total data size exceeds BUILD_BLOCK_MAX_SIZE (4GB), this flush fires more than once:

  • First flush (when allocated_bytes >= 4GB): build_block = batch1 (say it holds rows
    0..N1); the hash table stores row_num = 0, 1, ..., N1
  • Second flush (at eos): build_block = batch2 (new data, rows 0..N2), and batch1's
    data is destroyed. The hash table then adds row_num = 0, 1, ..., N2
  2. The hash table stores only row_num, with no block reference

RowRefListWithFlags inherits from RowRef, which stores only a uint32_t row_num
(join_op.h:46); there is no block pointer or offset.

In hash_table_set_build.h:39, the build phase stores Mapped {k}, i.e. the row number k.

  3. The output phase uses a single build_block

In set_source_operator.cpp:161-162:

auto& column = *build_block.get_by_position(idx->second).column;
local_state._mutable_cols[idx->first]->insert_from(column, it->row_num);

At this point build_block is batch2 from the last flush, but hash table entries that
came from batch1 may carry row_num values beyond batch2's row count.

  4. The out-of-bounds access triggers SIGSEGV

When batch1's row_num = X (with X greater than batch2's row count) reaches
insert_from(column, X):

// column_string.h:180-197
const size_t size_to_append = src.offsets[X] - src.offsets[X - 1]; // out-of-bounds read → garbage value
const size_t offset = src.offsets[X - 1]; // garbage value
// ...
memcpy(..., &src.chars[offset], size_to_append); // garbage offset → touches unmapped memory → SIGSEGV
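The problem chain above can be reduced to a minimal, self-contained sketch. The names here (`find_stale_indices`, the toy "hash table" as a vector of indices) are illustrative stand-ins, not Doris code:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy model of the bug: the hash table keeps bare row indices, exactly like
// RowRef::row_num, while the backing build_block can be replaced by a smaller
// final batch. find_stale_indices returns every stored index that would read
// out of bounds against the final block; in Doris those are the SIGSEGV reads.
std::vector<std::size_t> find_stale_indices(const std::vector<std::size_t>& hash_table,
                                            std::size_t final_block_rows) {
    std::vector<std::size_t> stale;
    for (std::size_t row_num : hash_table) {
        if (row_num >= final_block_rows) {
            stale.push_back(row_num);
        }
    }
    return stale;
}

// Example: batch1 contributed rows 0..2, then batch2 (1 row) overwrote the
// block, adding its own index 0. Indices 1 and 2 now dangle:
//   find_stale_indices({0, 1, 2, 0}, 1) -> {1, 2}
```

The fix makes this situation unreachable: with a single materialization at eos, every stored index refers to the one block that survives until the output phase.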

Release note

None

Check List (For Author)

  • Test

    • Regression test
    • Unit Test
    • Manual test (add detailed scripts or steps below)
    • No need to test or manual test. Explain why:
      • This is a refactor/code format and no logic has been changed.
      • Previous test can cover this change.
      • No code files have been changed.
      • Other reason
  • Behavior changed:

    • No.
    • Yes.
  • Does this need documentation?

    • No.
    • Yes.

Check List (For Reviewer who merge this PR)

  • Confirm the release note
  • Confirm test cases
  • Confirm document
  • Add branch pick label

@Thearas
Contributor

Thearas commented Mar 18, 2026

Thank you for your contribution to Apache Doris.
Don't know what should be done next? See How to process your PR.

Please clearly describe your PR:

  1. What problem was fixed (it's best to include specific error reporting information). How it was fixed.
  2. Which behaviors were modified. What was the previous behavior, what is it now, why was it modified, and what possible impacts might there be.
  3. What features were added. Why was this function added?
  4. Which code was refactored and why was this part of the code refactored?
  5. Which functions were optimized and what is the difference before and after the optimization?

@jacktengg
Contributor Author

run buildall

@jacktengg
Contributor Author

/review

BiteTheDDDDt previously approved these changes Mar 18, 2026
@github-actions github-actions bot added the approved Indicates a PR has been approved by one committer. label Mar 18, 2026
@github-actions
Contributor

PR approved by at least one committer and no changes requested.

@github-actions
Contributor

PR approved by anyone and no changes requested.

@github-actions github-actions bot left a comment

Code Review Summary

Overall Assessment

The bug analysis is correct and the fix is logically sound. The old code had a critical flaw: when total _mutable_block size exceeded 4GB, build_block was overwritten via to_block() while the hash table still held row_num indices from the previous block. During the source phase, _add_result_columns() uses these stale indices to access the last build_block, causing out-of-bounds reads and SIGSEGV.

The fix correctly removes the intermediate flush and only materializes build_block once at eos, ensuring all hash table row_num values reference valid rows in a single contiguous block.

Critical Checkpoint Conclusions

1. Does the code accomplish its goal? Is there a test that proves it?
The fix is correct — it eliminates the dangling-reference crash. However, no regression test is added to cover the >4GB scenario or even a multi-flush scenario. While reproducing 4GB+ in CI is impractical, a functional test verifying correctness with moderate data sizes would strengthen confidence. At minimum, a test exercising INTERSECT/EXCEPT with enough data to validate the single-build-block invariant should be added.

2. Is this modification as small, clear, and focused as possible?
Yes. The change is minimal: remove the BUILD_BLOCK_MAX_SIZE constant, change if (eos || ...) to if (eos), and un-nest the eos-only finalization code. The diff is clean and focused.

3. Concurrency concerns?
No new concurrency issues. The sink operator processes data single-threaded per pipeline task. The probe_finished_children_dependency signaling remains unchanged.

4. Memory/lifecycle implications?
This is the primary concern with the fix approach. Removing the 4GB flush means _mutable_block can grow unbounded in memory until eos. The set operators have no spill support (_spillable=false, revocable_mem_size()=0). The existing get_reserve_mem_size() reports current (retrospective) allocation, not prospective growth. If the build side has many GB of data with unique rows, this will consume proportional memory with no relief mechanism.

That said, this is a pre-existing limitation — even before this PR, the hash table itself grows unbounded. The BUILD_BLOCK_MAX_SIZE flush didn't reduce peak memory since the hash table still holds all entries. So the memory concern is not worsened by this fix; it just changes where the memory is held (mutable block vs. the final build_block — both coexist at flush time in the old code anyway).

5. Are there functionally parallel code paths?
No. The set sink operator is the only path that builds the set hash table.

6. Test coverage?
All existing set operation tests use trivially small data (3-100 rows). No test covers the >4GB or even multi-batch accumulation scenario. A regression test should be added.

7. Performance considerations?
No performance regression. The old code did to_block() + _process_build_block() potentially twice (once at 4GB, once at eos). The new code does it exactly once at eos. If anything, this is slightly more efficient for large datasets.

8. Observability?
No changes needed. The existing _hash_table_size and _valid_element_in_hash_table counters remain.

9. Compatibility?
No incompatible changes. This is a purely internal execution-engine fix.

10. Other concerns?
The ColumnStr<UInt32> has a hard 4GB char-data limit (check_chars_length in column_string.h). If a single string column accumulates >4GB of character data in the build block, it will throw STRING_OVERFLOW_IN_VEC_ENGINE. This is a pre-existing limitation unrelated to this PR, but worth noting since the old 4GB flush might have incidentally avoided this for some workloads.
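The 4GB ceiling mentioned above follows directly from the 32-bit offset type: with UInt32 offsets, the largest addressable char-data size is 2^32 - 1 bytes. A back-of-the-envelope check (illustrative constants and names, not the actual Doris `check_chars_length` implementation):

```cpp
#include <cstdint>
#include <limits>

// With UInt32 offsets, a string column can address at most 2^32 - 1 bytes of
// char data; anything beyond that must either error out (the
// STRING_OVERFLOW_IN_VEC_ENGINE path) or move to 64-bit offsets (the
// hash-join-style ColumnString64 approach).
constexpr std::uint64_t kMaxChars32 = std::numeric_limits<std::uint32_t>::max(); // ~4GB

constexpr bool fits_in_uint32_offsets(std::uint64_t total_chars) {
    return total_chars <= kMaxChars32;
}

static_assert(fits_in_uint32_offsets(4ULL * 1000 * 1000 * 1000)); // 4e9 bytes still fits
static_assert(!fits_in_uint32_offsets(5ULL << 30));               // 5GB does not
```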

Verdict

The fix is correct and necessary. The only actionable concern is the missing test coverage.

@@ -84,26 +83,24 @@ Status SetSinkOperatorX<is_intersect>::sink(RuntimeState* state, Block* in_block
}
}


[Test Coverage] The fix correctly eliminates the dangling-reference crash, but no regression test is added. Consider adding a regression test that exercises INTERSECT/EXCEPT with enough data volume (or enough batches) to validate the single-build-block invariant holds. Even a moderate-sized test with multiple sink() calls before eos would help prevent regression.

Additionally, the uint32_t::max row-count guard at line 80 is never tested. A negative test triggering the NotSupported error path would be valuable.

@doris-robot

TPC-H: Total hot run time: 27207 ms
machine: 'aliyun_ecs.c7a.8xlarge_32C64G'
scripts: https://github.com/apache/doris/tree/master/tools/tpch-tools
Tpch sf100 test result on commit aa401fbf0990cde23064963578d78f8964563a70, data reload: false

------ Round 1 ----------------------------------
============================================
q1	17629	4458	4292	4292
q2	q3	10644	805	521	521
q4	4675	384	254	254
q5	7561	1226	1012	1012
q6	176	175	148	148
q7	815	872	681	681
q8	9622	1516	1403	1403
q9	5208	4749	4688	4688
q10	6323	1919	1649	1649
q11	475	267	248	248
q12	739	582	469	469
q13	18055	2961	2181	2181
q14	230	239	221	221
q15	q16	754	726	682	682
q17	743	841	450	450
q18	6013	5376	5343	5343
q19	1190	999	630	630
q20	555	489	376	376
q21	4423	1870	1673	1673
q22	465	336	286	286
Total cold run time: 96295 ms
Total hot run time: 27207 ms

----- Round 2, with runtime_filter_mode=off -----
============================================
q1	4834	4543	4735	4543
q2	q3	3901	4356	3944	3944
q4	896	1223	820	820
q5	4073	4641	4477	4477
q6	184	177	144	144
q7	1744	1655	1531	1531
q8	2510	2739	2592	2592
q9	7553	7447	7399	7399
q10	3731	4020	3686	3686
q11	573	439	430	430
q12	487	607	456	456
q13	2864	3215	2282	2282
q14	285	297	268	268
q15	q16	788	814	726	726
q17	1187	1291	1354	1291
q18	7084	6968	6677	6677
q19	911	958	942	942
q20	2061	2122	2004	2004
q21	4015	3426	3567	3426
q22	474	436	387	387
Total cold run time: 50155 ms
Total hot run time: 48025 ms

@doris-robot

TPC-DS: Total hot run time: 169484 ms
machine: 'aliyun_ecs.c7a.8xlarge_32C64G'
scripts: https://github.com/apache/doris/tree/master/tools/tpcds-tools
TPC-DS sf100 test result on commit aa401fbf0990cde23064963578d78f8964563a70, data reload: false

query5	4326	640	519	519
query6	332	232	216	216
query7	4219	477	271	271
query8	368	249	237	237
query9	8731	2762	2755	2755
query10	518	398	359	359
query11	6991	5117	4879	4879
query12	181	143	129	129
query13	1283	465	342	342
query14	5745	3734	3494	3494
query14_1	2884	2863	2878	2863
query15	203	198	178	178
query16	991	477	472	472
query17	915	748	638	638
query18	2447	455	362	362
query19	215	219	192	192
query20	133	131	132	131
query21	219	135	109	109
query22	13281	14317	14577	14317
query23	16807	15790	15773	15773
query23_1	15764	15649	15404	15404
query24	7181	1621	1224	1224
query24_1	1255	1237	1264	1237
query25	549	504	420	420
query26	1243	258	145	145
query27	2786	479	301	301
query28	4462	1846	1847	1846
query29	867	605	478	478
query30	303	231	190	190
query31	1044	947	873	873
query32	83	71	73	71
query33	512	347	290	290
query34	891	880	533	533
query35	642	695	596	596
query36	1054	1113	981	981
query37	140	99	85	85
query38	2909	2898	2848	2848
query39	852	837	829	829
query39_1	777	793	783	783
query40	231	157	134	134
query41	64	61	59	59
query42	256	256	258	256
query43	243	256	241	241
query44	
query45	202	186	184	184
query46	879	990	622	622
query47	2098	2539	2040	2040
query48	309	322	238	238
query49	636	450	393	393
query50	685	270	212	212
query51	4062	4059	4069	4059
query52	264	271	251	251
query53	301	343	292	292
query54	306	272	274	272
query55	96	84	80	80
query56	322	336	319	319
query57	1932	1883	1713	1713
query58	293	276	275	275
query59	2795	2980	2741	2741
query60	340	343	340	340
query61	160	158	152	152
query62	625	592	532	532
query63	310	287	277	277
query64	5126	1283	1010	1010
query65	
query66	1475	460	360	360
query67	24455	24392	24247	24247
query68	
query69	413	315	293	293
query70	975	959	953	953
query71	360	310	306	306
query72	2782	2748	2661	2661
query73	555	554	334	334
query74	9658	9583	9350	9350
query75	2889	2777	2504	2504
query76	2308	1037	686	686
query77	392	409	334	334
query78	11023	11160	10497	10497
query79	1131	774	604	604
query80	1350	698	591	591
query81	544	263	225	225
query82	992	162	122	122
query83	339	271	251	251
query84	301	122	99	99
query85	902	506	443	443
query86	434	302	331	302
query87	3142	3117	3054	3054
query88	3654	2683	2675	2675
query89	425	386	354	354
query90	2022	185	177	177
query91	167	166	141	141
query92	81	74	66	66
query93	966	834	506	506
query94	647	348	296	296
query95	580	347	388	347
query96	643	518	235	235
query97	2460	2491	2384	2384
query98	244	224	223	223
query99	1013	967	935	935
Total cold run time: 250471 ms
Total hot run time: 169484 ms

@hello-stephen
Contributor

BE UT Coverage Report

Increment line coverage 🎉

Increment coverage report
Complete coverage report

Category Coverage
Function Coverage 52.70% (19778/37531)
Line Coverage 36.23% (184740/509848)
Region Coverage 32.48% (142963/440212)
Branch Coverage 33.65% (62518/185779)

@hello-stephen
Contributor

BE Regression && UT Coverage Report

Increment line coverage 100% (0/0) 🎉

Increment coverage report
Complete coverage report

Category Coverage
Function Coverage 71.59% (26299/36734)
Line Coverage 54.45% (276662/508134)
Region Coverage 51.59% (229201/444258)
Branch Coverage 53.10% (98908/186261)

@jacktengg
Contributor Author

run buildall

@jacktengg
Contributor Author

/review

@github-actions github-actions bot removed the approved Indicates a PR has been approved by one committer. label Mar 19, 2026
@jacktengg
Contributor Author

run buildall

@github-actions github-actions bot left a comment

Code Review Summary

Overview

This PR fixes a coredump (SIGSEGV) in set operations (INTERSECT/EXCEPT) when the total string data size exceeds 4GB. The root cause was that the old code flushed the build block in chunks via a 4GB BUILD_BLOCK_MAX_SIZE threshold, causing build_block to be overwritten and invalidating hash table row indices that pointed into the previous block.

Fix Approach

The fix adopts the same pattern used by hash join:

  1. merge_ignore_overflow replaces merge to allow ColumnString (UInt32 offsets) to silently overflow during accumulation.
  2. convert_column_if_overflow converts overflowed ColumnString to ColumnString64 (UInt64 offsets) after materialization.
  3. insert_indices_from replaces append_data_by_selector in the source operator to support reading from ColumnString64.
  4. The intermediate 4GB flush is removed -- all data is accumulated into a single MutableBlock until eos.
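The accumulate-then-convert pattern in steps 1 and 2 can be sketched in simplified form. The types below (`ToyStringColumn32`, `to_offsets64`) are toy stand-ins chosen to mirror the PR's description, not Doris's actual ColumnString/ColumnString64 classes, and the sketch assumes no single row exceeds 4GB (so at most one wrap occurs between consecutive offsets):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy stand-in for a string column's offset array. The real ColumnString uses
// UInt32 offsets; ColumnString64 uses UInt64.
struct ToyStringColumn32 {
    std::vector<std::uint32_t> offsets; // cumulative end offset per row, may wrap
    std::uint64_t true_bytes = 0;       // 64-bit running total, tracked on the side

    // Analogue of merge_ignore_overflow: append without the 4GB check, letting
    // the uint32 offset silently wrap during accumulation.
    void append_row(std::uint64_t len) {
        true_bytes += len;
        offsets.push_back(static_cast<std::uint32_t>(true_bytes)); // truncating cast
    }

    // Overflow happened iff the truncated last offset disagrees with the total.
    bool overflowed() const {
        return !offsets.empty() && offsets.back() != true_bytes;
    }
};

// Analogue of convert_column_if_overflow: rebuild offsets in 64-bit arithmetic.
// A decrease between consecutive uint32 offsets marks a wrap, so add 2^32.
std::vector<std::uint64_t> to_offsets64(const ToyStringColumn32& col) {
    std::vector<std::uint64_t> out;
    std::uint32_t prev = 0;
    std::uint64_t carry = 0;
    for (std::uint32_t off : col.offsets) {
        if (off < prev) carry += (1ULL << 32); // wrap detected here
        out.push_back(carry + off);
        prev = off;
    }
    return out;
}
```

For example, appending a 3GB row and then a 2GB row wraps the uint32 offset; `overflowed()` reports it and `to_offsets64` recovers the true 5GB cumulative offset.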

Critical Checkpoint Conclusions

1. Goal & Correctness: The fix correctly addresses the root cause (dangling row indices from overwritten build_block). The code now uses a single build_block that persists for the entire lifecycle. Both unit tests and regression tests prove correctness for the >4GB case. Pass.

2. Modification scope: The change is focused and minimal -- 4 files changed in production code with clear purpose. Pass.

3. Concurrency: No new concurrency concerns. The set sink operator is single-threaded for the build phase. N/A.

4. Lifecycle management: No new lifecycle issues. The _mutable_block and build_block lifecycle is simplified (single accumulation + single materialization). Pass.

5. Configuration items: No new config items added. N/A.

6. Incompatible changes: No format/protocol changes. N/A.

7. Parallel code paths: Both is_intersect=true and is_intersect=false template instantiations are covered. The pattern mirrors the hash join approach. Pass.

8. Special conditional checks: The MutableBlock::empty() initialization guard follows the established hash join pattern. Pass.

9. Test coverage: Good. Both INTERSECT and EXCEPT unit tests exercise the >4GB path. Regression test covers EXCEPT/INTERSECT with subset and self-join scenarios. Pass (minor issue noted below).

10. Observability: No new critical paths requiring extra observability. Existing timers and counters are preserved. Pass.

11. Transaction/persistence: N/A.

12. Data writes: N/A.

13. FE-BE variable passing: N/A.

14. Performance: Removing the intermediate flush is a net simplification. The merge_ignore_overflow approach avoids the overhead of checking check_chars_length() on every merge, which is a minor improvement. The convert_column_if_overflow has a one-time O(n) cost at eos. Hash table build and probe paths correctly handle ColumnString64 via virtual dispatch and explicit is_column_string64() checks. No performance concerns.

15. Other issues:

  • The regression test drops the table at the end (line 101), which contradicts the test standard: "After completing tests, do not drop tables; instead drop tables before using them in tests, to preserve the environment for debugging." The table is already dropped before creation (line 22), which is correct. The final DROP TABLE at line 101 should be removed.
  • The std::move(*in_block) in merge_ignore_overflow(std::move(*in_block)) is misleading -- merge_impl_ignore_overflow does not actually move data from the source (it copies via insert_range_from_ignore_overflow which takes const IColumn&). This is harmless but could confuse future readers. However, the hash join uses the same pattern (hashjoin_build_sink.cpp:827), so this is consistent with existing code.
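The misleading-move point can be shown with a standalone sketch (hypothetical `ToyBlock` type and method names; only the rvalue-reference-binding behavior is the point):

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Sketch of the flagged pattern: the outer method takes an rvalue reference,
// so callers write std::move(...), but the inner implementation only reads
// through a const reference. Nothing is actually moved from, and the source
// block's data stays intact after the call.
struct ToyBlock {
    std::vector<std::string> rows;

    void merge_impl(const ToyBlock& src) {       // copies, never moves
        rows.insert(rows.end(), src.rows.begin(), src.rows.end());
    }
    void merge_ignore_overflow(ToyBlock&& src) { // rvalue ref in the signature...
        merge_impl(src);                         // ...but src is an lvalue here,
    }                                            // so it binds as const&
};

// After dst.merge_ignore_overflow(std::move(src)), src.rows is unchanged,
// which is why the std::move at the call site reads as misleading.
```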

Verdict

The fix is correct and well-tested. The approach follows the established hash join pattern for handling >4GB string data. One minor test standards issue flagged inline.

"""

sql """ DROP TABLE IF EXISTS test_set_op_large_string """
}

Per Doris test standards: "After completing tests, do not drop tables; instead drop tables before using them in tests, to preserve the environment for debugging." The table is already correctly dropped-before-create at line 22. This final DROP TABLE should be removed to preserve the table for post-failure debugging.

@doris-robot

TPC-H: Total hot run time: 26989 ms
machine: 'aliyun_ecs.c7a.8xlarge_32C64G'
scripts: https://github.com/apache/doris/tree/master/tools/tpch-tools
Tpch sf100 test result on commit ea10e7905515d2357e2bfdf1e9e434f7ee8cd1d7, data reload: false

------ Round 1 ----------------------------------
============================================
q1	17597	4455	4279	4279
q2	q3	10639	782	519	519
q4	4690	353	256	256
q5	7561	1202	1039	1039
q6	180	175	147	147
q7	790	851	685	685
q8	9303	1488	1389	1389
q9	4894	4740	4700	4700
q10	6255	1909	1642	1642
q11	442	253	237	237
q12	719	586	470	470
q13	18044	2889	2188	2188
q14	233	239	212	212
q15	q16	727	732	667	667
q17	718	844	438	438
q18	5957	5365	5313	5313
q19	1116	978	617	617
q20	553	500	388	388
q21	4363	1843	1418	1418
q22	350	385	395	385
Total cold run time: 95131 ms
Total hot run time: 26989 ms

----- Round 2, with runtime_filter_mode=off -----
============================================
q1	4807	4562	4604	4562
q2	q3	3832	4367	3825	3825
q4	872	1211	780	780
q5	4063	4441	4340	4340
q6	222	177	145	145
q7	1750	1701	1556	1556
q8	2518	2699	2577	2577
q9	7675	7425	7461	7425
q10	3748	4217	3634	3634
q11	525	450	451	450
q12	488	602	451	451
q13	2648	3145	2345	2345
q14	280	293	271	271
q15	q16	722	739	700	700
q17	1356	1388	1364	1364
q18	7247	6845	6702	6702
q19	915	900	950	900
q20	2041	2183	2254	2183
q21	3987	3540	3478	3478
q22	446	442	405	405
Total cold run time: 50142 ms
Total hot run time: 48093 ms

@doris-robot

BE UT Coverage Report

Increment line coverage 🎉

Increment coverage report
Complete coverage report

Category Coverage
Function Coverage 52.70% (19780/37532)
Line Coverage 36.24% (184795/509870)
Region Coverage 32.49% (143031/440260)
Branch Coverage 33.66% (62542/185793)

@doris-robot

TPC-H: Total hot run time: 26650 ms
machine: 'aliyun_ecs.c7a.8xlarge_32C64G'
scripts: https://github.com/apache/doris/tree/master/tools/tpch-tools
Tpch sf100 test result on commit ea10e7905515d2357e2bfdf1e9e434f7ee8cd1d7, data reload: false

------ Round 1 ----------------------------------
============================================
q1	17636	4467	4271	4271
q2	q3	10646	740	512	512
q4	4671	356	253	253
q5	7554	1198	1021	1021
q6	175	174	145	145
q7	773	830	669	669
q8	9306	1435	1285	1285
q9	4892	4804	4708	4708
q10	6283	1905	1640	1640
q11	454	252	248	248
q12	742	593	469	469
q13	18042	2988	2180	2180
q14	230	224	212	212
q15	q16	766	739	659	659
q17	722	798	464	464
q18	5862	5448	5231	5231
q19	1107	974	598	598
q20	536	477	375	375
q21	4476	1816	1401	1401
q22	330	309	417	309
Total cold run time: 95203 ms
Total hot run time: 26650 ms

----- Round 2, with runtime_filter_mode=off -----
============================================
q1	4746	4664	4620	4620
q2	q3	4012	4449	3820	3820
q4	853	1203	790	790
q5	4065	4382	4552	4382
q6	185	168	137	137
q7	1778	1681	1528	1528
q8	2481	2685	2514	2514
q9	7546	7503	7352	7352
q10	3753	3950	3644	3644
q11	538	447	428	428
q12	520	618	448	448
q13	2765	3597	2378	2378
q14	293	317	293	293
q15	q16	742	803	728	728
q17	1157	1403	1335	1335
q18	7385	6686	6683	6683
q19	905	868	917	868
q20	2095	2151	2073	2073
q21	3969	3461	3314	3314
q22	452	429	378	378
Total cold run time: 50240 ms
Total hot run time: 47713 ms

@doris-robot

TPC-DS: Total hot run time: 169871 ms
machine: 'aliyun_ecs.c7a.8xlarge_32C64G'
scripts: https://github.com/apache/doris/tree/master/tools/tpcds-tools
TPC-DS sf100 test result on commit ea10e7905515d2357e2bfdf1e9e434f7ee8cd1d7, data reload: false

query5	4328	635	512	512
query6	330	220	199	199
query7	4208	484	261	261
query8	342	239	225	225
query9	8678	2684	2630	2630
query10	517	398	351	351
query11	6961	5146	4985	4985
query12	188	134	125	125
query13	1263	475	354	354
query14	5784	3824	3494	3494
query14_1	2833	2814	2811	2811
query15	205	199	178	178
query16	982	468	456	456
query17	921	757	631	631
query18	2466	464	369	369
query19	220	215	191	191
query20	137	129	128	128
query21	214	135	110	110
query22	13193	14109	15058	14109
query23	16256	15916	15780	15780
query23_1	15773	16031	15954	15954
query24	7317	1649	1225	1225
query24_1	1236	1215	1233	1215
query25	608	449	403	403
query26	1245	261	145	145
query27	2785	509	302	302
query28	4455	1810	1821	1810
query29	864	550	487	487
query30	306	227	193	193
query31	1005	986	887	887
query32	84	71	67	67
query33	522	339	272	272
query34	890	867	536	536
query35	648	687	608	608
query36	1052	1128	985	985
query37	133	96	83	83
query38	3043	2951	2871	2871
query39	878	833	816	816
query39_1	795	796	796	796
query40	232	148	136	136
query41	63	62	58	58
query42	265	266	264	264
query43	238	258	229	229
query44	
query45	203	194	182	182
query46	881	999	593	593
query47	2633	2500	2064	2064
query48	312	314	248	248
query49	654	460	386	386
query50	682	272	224	224
query51	4157	4110	4026	4026
query52	271	272	267	267
query53	291	336	288	288
query54	306	266	269	266
query55	93	90	81	81
query56	323	323	319	319
query57	1933	1866	1650	1650
query58	288	271	275	271
query59	2787	2940	2751	2751
query60	341	339	327	327
query61	153	154	155	154
query62	631	592	547	547
query63	330	288	272	272
query64	5106	1303	994	994
query65	
query66	1463	456	348	348
query67	24588	24533	24378	24378
query68	
query69	412	314	291	291
query70	997	937	925	925
query71	340	314	307	307
query72	3045	2866	2718	2718
query73	540	548	322	322
query74	9701	9638	9423	9423
query75	3024	2771	2471	2471
query76	2291	1024	671	671
query77	376	373	322	322
query78	11396	11454	10769	10769
query79	2652	767	588	588
query80	1755	633	559	559
query81	554	268	233	233
query82	1016	153	123	123
query83	330	263	245	245
query84	297	114	103	103
query85	900	492	451	451
query86	416	312	293	293
query87	3239	3088	3002	3002
query88	3566	2670	2646	2646
query89	427	367	351	351
query90	2027	170	163	163
query91	169	162	139	139
query92	80	76	66	66
query93	1141	839	494	494
query94	643	315	290	290
query95	586	394	317	317
query96	639	532	228	228
query97	2479	2489	2394	2394
query98	236	222	229	222
query99	1004	981	912	912
Total cold run time: 252879 ms
Total hot run time: 169871 ms

@hello-stephen
Contributor

BE Regression && UT Coverage Report

Increment line coverage 100% (0/0) 🎉

Increment coverage report
Complete coverage report

Category Coverage
Function Coverage 73.34% (26942/36735)
Line Coverage 56.77% (288465/508156)
Region Coverage 53.90% (239495/444306)
Branch Coverage 55.75% (103842/186275)
