
[fix](pipeline) Enable pipeline explicitly in the plan shape check cases. #20221

Merged
merged 2 commits into apache:master on May 31, 2023

Conversation


@Kikyou1997 (Contributor) commented May 30, 2023

Proposed changes

A feature recently added to the optimizer on the master branch requires the pipeline to be enabled explicitly for it to take effect. Whether the pipeline is enabled or disabled leads to differences in the execution plan. Currently, some test cases that check plan shapes do not explicitly set the pipeline status, so when the PR is run with the pipeline enabled, these cases fail. This PR explicitly enables the pipeline in these cases. For now, please ignore failures in these cases that are unrelated to the optimizer.
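For example, a plan shape check case can pin the session variable at the top of the test so the shape no longer depends on the cluster default (a minimal sketch; `enable_pipeline_engine` is the variable name assumed here, and the query is only illustrative):

```sql
-- set the pipeline state explicitly before checking the plan shape
set enable_pipeline_engine=true;
explain shape plan
select count(*) from store_sales;
```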

Further comments

If this is a relatively large or complex change, kick off the discussion at dev@doris.apache.org by explaining why you chose the solution you did and what alternatives you considered, etc...

@Kikyou1997
Contributor Author

run buildall

@Kikyou1997
Contributor Author

run buildall

@github-actions
Contributor

PR approved by at least one committer and no changes requested.

@github-actions bot added the `approved` (indicates a PR has been approved by one committer) and `reviewed` labels on May 31, 2023
@github-actions
Contributor

PR approved by anyone and no changes requested.

@englefly englefly merged commit d93ff5d into apache:master May 31, 2023
gnehil pushed a commit to gnehil/doris that referenced this pull request Jun 2, 2023
…ses. (apache#20221)

enable pipeline explicitly in tpcds plan shape check
SWJTU-ZhangLei added a commit to SWJTU-ZhangLei/incubator-doris that referenced this pull request Jul 25, 2023
…cloud-dev (630da93 20230712) (apache#2065)

This is a HUGE and UNSTABLE merge. It may cause compile error, crash/core-dump, and death.

```
                            20230606               20230707         20230712
                           61b71de3a6                .             630da93a1b
                               .                     .                 .
doris-2.0              --------o---------------------.-----------------.--------
                                \__                  .                 .
                                   \                 .                 .
dev-merge-2.0          ------o------o---o--o-o-------o----------o------.--------
                            /                       /            \     .
                           /                    ___/              \___ .
                          /                    /                      \.
selectdb-cloud-dev     --o--------------------o------------------------o--------

```

=============================================================


* [feature](function) add json->operator convert to json_extract (#19899)

* [Fix](single replica load) fix indices_size key not found core (#20047)

* [fix](fe)ordering exprs should be substituted in the same way as select part (#20091)

* [improvement](exec) Refactor the partition sort node to send data in pipeline mode (#20128)

Before: the node waited to retrieve all data from its child before sending any data to its parent.
Now: data from the child that does not require sorting can be sent to the parent immediately.

* [regression](test) fix test case failed in pipeline mode (#20139)

* [improvement](community) simplify the pr template and modify pr labeler (#20127)

* [typo](doc)spark load add task timeout parameter #20115

* [refactor-WIP](TaskWorkerPool) add specific classes for PUSH, PUBLISH_VERSION, CLEAR_TRANSACTION tasks (#19822)

* [fix](ldap) fix ldap related errors (#19959)

1. fix LDAP user `show grants` returning a null pointer exception;
2. fix LDAP user `show databases` returning no-authority databases;
3. LDAP authentication supports the catalog level;

* [enhance](FileWriter)enhance s3 file writer bvar to avoid adding abort bytes (#20138)

* don't add on each upload, otherwise aborted bytes would be counted

* allocate memory

* [Enhancement](alter inverted index) Improve alter inverted index performance with light weight add or drop inverted index (#19063)

* [typo](docs)Best usage document correction. #20142

* [refactor-WIP](TaskWorkerPool) add specific classes for ALTER_TABLE, CLONE, STORAGE_MEDIUM_MIGRATE task (#20140)

* [Improve](data-type) Clean datatype uselesscode (#20145)

* fix struct_export out data

* delete useless code with data type

* [chore](toolchain) change doris default toolchain to clang (#20146)

GCC is very slow during build and link. Change to clang as we discussed many times.


Co-authored-by: yiguolei <yiguolei@gmail.com>

* [Fix](load)Make insert timeout accurate in `show load` statistics  (#20068)

* [enhance](PrefetchReader) abort load task when data size returned by S3 is smaller than requested (#19947)

We encountered a confusing situation where the buffered reader was trapped in an endless loop when calling readat. It turned out that the returned data size was less than requested: the actual data size is about 2 MB, but readat only retrieved about 1 MB.

* [security] Don't print password in BaseController (#18862)

* [fix](tablet_manager_lock) fix create tablet timeout #20067 (#20069)

* [fix] (clone)  fix drop biggest version replica during rebalance step (#20107)

* add check for rebalancer choose deleted replica

* improve a comparison

* [enhancement](load) add some profile items for load (#20141)

* [Improvement](topn) prevent memory usage of key topn increasing unlimited (#19978)

* [Feature](Nereids) support advanced materialized view (#19650)

Increase the functionality of advanced materialized views.

This feature is already supported by the legacy planner (PR #19650).

This PR implements it in Nereids, covering the features below (see the sketch after the TODO list):
1. Support multiple columns in an aggregate function, e.g.: select sum(c1 + c2) from t1;
2. Support complex expressions, e.g.: select abs(c1), sum(abs(c1+1) + 1) from t1;

TODO:
1. Support adding WHERE in a materialized view
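A rough illustration of features 1 and 2 above (a sketch only; the table `t1` and its columns are hypothetical):

```sql
-- 1. multiple columns inside one aggregate function
create materialized view mv_sum as
select id, sum(c1 + c2) from t1 group by id;

-- 2. complex expressions inside and outside the aggregate
create materialized view mv_abs as
select abs(c1), sum(abs(c1 + 1) + 1) from t1 group by abs(c1);
```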

* [refactor](exec) replace the single pointer with an array of 'conjuncts' in ExecNode (#19758)

Refactoring the filtering conditions in the current ExecNode from an expression tree to an array can simplify the process of adding runtime filters. It eliminates the need for complex merge operations and removes the requirement for the frontend to combine expressions into a single entity.

By representing the filtering conditions as an array, each condition can be treated individually, making it easier to add runtime filters without the need for complex merging logic. The array can store the individual conditions, and the runtime filter logic can iterate through the array to apply the filters as needed.

This refactoring simplifies the codebase, improves readability, and reduces the complexity associated with handling filtering conditions and adding runtime filters. It separates the conditions into discrete entities, enabling more straightforward manipulation and management within the execution node.

* [fix](executor) Fixed an error with cast as time. #20144

before

mysql [(none)]>select cast("10:10:10" as time);
+-------------------------------+
| CAST('10:10:10' AS TIMEV2(0)) |
+-------------------------------+
| 00:00:00                      |
+-------------------------------+
after

mysql [(none)]>select cast("10:10:10" as time);
+-------------------------------+
| CAST('10:10:10' AS TIMEV2(0)) |
+-------------------------------+
| 10:10:10                      |
+-------------------------------+
In the past, we supported this syntax.

mysql [(none)]>select cast("2023:05:01 13:14:15" as time);
+------------------------------------------+
| CAST('2023:05:01 13:14:15' AS TIMEV2(0)) |
+------------------------------------------+
| 13:14:15                                 |
+------------------------------------------+
However, "10:10:10" is also a valid datetime.

mysql [(none)]>select cast("10:10:10" as datetime);
+-----------------------------------+
| CAST('10:10:10' AS DATETIMEV2(0)) |
+-----------------------------------+
| 2010-10-10 00:00:00               |
+-----------------------------------+
So here, the order of parsing has been adjusted.

* [Fix](inverted index) fix memory leak when inverted index writer does not finish correctly (#20028)

* [Fix](inverted index) fix memory leak when inverted index writer does not finish correctly

* [Update](inverted index) use smart pointer to avoid memory leak

* [Chore](format) code format

---------

Co-authored-by: airborne12 <airborne12@gmail.com>

* [fix](dynamic_partition) fix dynamic partition not work when drop and  recover olap table (#19031)

When an OLAP table has dynamic partition enabled, if the table is dropped and then recovered, it should be added to DynamicPartitionScheduler again.

---------

Co-authored-by: caiconghui1 <caiconghui1@jd.com>

* [Feature](agg_state) support agg_state combinators (#19969)

support agg_state combinators state/merge/union
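A minimal sketch of how the combinators might look in practice (table and column names are hypothetical; the `_state`/`_merge`/`_union` suffixes follow the naming described above):

```sql
-- sum_state produces an intermediate aggregation state instead of a final value;
-- sum_merge folds states into a final result; sum_union merges states into a new state
select sum_merge(sum_state(v)) from metrics;
select k, sum_union(vs) from pre_aggregated group by k;
```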

* [Chore](build) add non-virtual-dtor, remove no-embedded-directive/no-zero-length-array (#20118)

add non-virtual-dtor, remove no-embedded-directive/no-zero-length-array

* [Fix](multi catalog)Fix Iceberg table missing column unique id bug (#20152)

This PR fixes the bug introduced by PR #19909.
The bug failed to set the column unique id for Iceberg tables, which caused query results for Iceberg tables to be all NULL.

```
mysql> select * from iceberg_partition_lower_case_parquet limit 1;
+------+------+------+---------+
| k1   | k2   | k3   | city    |
+------+------+------+---------+
| NULL | NULL | NULL | Beijing |
+------+------+------+---------+
1 row in set (0.60 sec)
```
After fix:
```
mysql> select * from iceberg_partition_lower_case_parquet limit 1;
+------+------+------+---------+
| k1   | k2   | k3   | city    |
+------+------+------+---------+
|    1 | k2_1 | k3_1 | Beijing |
+------+------+------+---------+
1 row in set (0.35 sec)
```

* [typo](doc)correct the misspelled word and the improper word (#20149)

* [Conf](decimalv3) enable decimalv3 by default

* [feat](stats) delete data size stat and Made task timeout configurable (#20090)

1. Delete the stats for data size, since collecting it costs too much time and is not useful.
2. Make the task timeout configurable, since it is common to analyze a huge table for which the default 10 minutes is not suitable.

* [FIX](mysql_writer) fix mysql output binary object works  (#20154)

* fix struct_export out data

* fix mysql writer output with binary true

* [fix](partial update) use correct tablet schema for rowset writer in publish task (#20117)

* [typo](config)Remove FE config max_conn_per_user (#20122)



---------

Co-authored-by: Yijia Su <suyijia@selectdb.com>

* [Improve](performance) introduce SchemaCache to cache TabletSchema & Schema (#20037)

* [Improve](performance) introduce SchemaCache to cache TabletSchema & Schema

1. When the system is under high-concurrency load with wide-table point queries, the frequent memory allocation and deallocation of Schema become evident system bottlenecks. Additionally, the initialization of TabletSchema and Schema also becomes a CPU hotspot. Therefore, a SchemaCache is introduced to cache these resources for reuse.

2. Wrap some variables with std::unique_ptr

Performance:
| State                | QPS | Avg latency | P99 latency |
|----------------------|-----|-------------|-------------|
| SchemaCache enabled  | 501 | 20ms        | 34ms        |
| SchemaCache disabled | 321 | 31ms        | 61ms        |

* handle schema change with schema version

* remove useless header

* rebase

* [Chore](gensrc) remove gen_vector_functions.py #20150

* [fix](function) Fix VcompoundPred execute const column #20158

To reproduce:

./run-regression-test.sh  --run -suiteParallel 1 -actionParallel 1 -parallel 1 -d query_p0/sql_functions/window_functions

select      /*+ SET_VAR(query_timeout = 600) */ subq_0.`c1` as c0 from    (select           ref_1.`s_name` as c0,          ref_1.`s_suppkey` as c1,          ref_1.`s_address` as c2,          ref_1.`s_address` as c3       from          regression_test_query_p0_sql_functions_window_functions.tpch_tiny_supplier as ref_1       where (ref_1.`s_name` is NULL)          or (ref_1.`s_acctbal` is not NULL)) as subq_0 where (subq_0.`c3` is NULL)    or (subq_0.`c2` is not NULL)

Reason:
FunctionIsNull and FunctionIsNotNull return a const column from execute, but their VectorizedFnCall::is_constant returns false, which causes problems with const handling in VCompoundPred::execute.

This PR converts the const column to a full column in VCompoundPred::execute. In the future, there will be a more thorough solution to such problems.

* [Bug](function) fix equals implementation not comparing order-by elements of function call expr (#20083)

fix the equals implementation so it also compares the order-by elements of a function call expression
#19296

* [Enhancement](Nereids) add switch for developing Nereids DML (#20100)

* [Chore](build) adjust some compile diagnostic (#20162)

* [BUG]storage_min_left_capacity_bytes default value has integer overflow #19943

* [fix](DECIMALV3)  Fix the error in DECIMALV3 when explicitly casting. (#19926)

before

mysql [test]>select cast(1 as DECIMALV3(16, 2)) /  cast(3 as DECIMALV3(16, 2));
+-----------------------------------------------------------+
| CAST(1 AS DECIMALV3(16, 2)) / CAST(3 AS DECIMALV3(16, 2)) |
+-----------------------------------------------------------+
|                                                      0.00 |
+-----------------------------------------------------------+


mysql [test]>select * from divtest;
+------+------+
| id   | val  |
+------+------+
|    3 | 5.00 |
|    2 | 4.00 |
|    1 | 3.00 |
+------+------+

mysql [test]>select cast(1 as decimalv3(16,2)) / val from divtest;
+-------------------------------------+
| CAST(1 AS DECIMALV3(16, 2)) / `val` |
+-------------------------------------+
|                                   0 |
|                                   0 |
|                                   0 |
+-------------------------------------+
after

mysql [test]>select cast(1 as DECIMALV3(16, 2)) /  cast(3 as DECIMALV3(16, 2));
+-----------------------------------------------------------+
| CAST(1 AS DECIMALV3(16, 2)) / CAST(3 AS DECIMALV3(16, 2)) |
+-----------------------------------------------------------+
|                                                      0.33 |
+-----------------------------------------------------------+

mysql [test]>select cast(1 as decimalv3(16,2)) / val from divtest;
+-------------------------------------+
| CAST(1 AS DECIMALV3(16, 2)) / `val` |
+-------------------------------------+
|                            0.250000 |
|                            0.200000 |
|                            0.333333 |
+-------------------------------------+
This is because in the previous code, the constant 1.000 would be transformed into 1.

remove "ReduceType

* [typo](docs) fix fqdn doc error (#20171)

* [Feature](inverted index) add parser_mode properties for inverted index parser (#20116)

We add parser mode for inverted index, usage like this:
```
CREATE TABLE `inverted` (
  `FIELD0` text NULL,
  `FIELD1` text NULL,
  `FIELD2` text NULL,
  `FIELD3` text NULL,
  INDEX idx_name1 (`FIELD0`) USING INVERTED PROPERTIES("parser" = "chinese", "parser_mode" = "fine_grained") COMMENT '',
  INDEX idx_name2 (`FIELD1`) USING INVERTED PROPERTIES("parser" = "chinese", "parser_mode" = "coarse_grained") COMMENT ''
) ENGINE=OLAP;
```

* [fix](p0 regression)Update hive docker test case result data (#20176)

Doris updated array type output format, using double quote for Strings.
Before, it was using single quote. So we need to update the case out file using double quote.

* [Fix](multi-catalog) Fix parquet bugs of #19758 'replace the single pointer with an array of 'conjuncts' in ExecNode'. (#20191)

Fix some parquet reader bugs which introduced by #19758 'replace the single pointer with an array of 'conjuncts' in ExecNode'.

* [Fix](multi-catalog) fix all nested type test which introduced by #19518(support insert-only transactional table). (#20194)

Fix `qt_nested_types_orc` in `test_tvf_p2` which introduced by #19518(support insert-only transactional table).

### Test case error
`qt_nested_types_orc` in `test_tvf_p2`
```
select count(array0), count(array1), count(array2), count(array3), count(struct0), count(struct1), count(map0)
            from hdfs(
            "uri" = "hdfs://172.21.16.47:4007/catalog/tvf/orc/all_nested_types.orc",
            "format" = "orc",
            "fs.defaultFS" = "hdfs://172.21.16.47:4007")
```

**Error Message:**
errCode = 2, detailMessage = (172.21.0.101)[INTERNAL_ERROR]Wrong data type for colum 'struct1'

* [feat](optimizer) Support CTE reuse (#19934)

Before this PR, the new optimizer would inline CTEs directly. However, in many scenarios a CTE can be referenced many times, such as in TPC-DS tests; for these cases, materializing the result set of the CTE and reusing it significantly improves performance. In our tests on TPC-DS related SQLs, it improves performance by up to almost **4 times**.

We introduce the following plan nodes in the optimizer:

1. CTEConsumer: holds a reference to a CTEProducer
2. CTEProducer: the plan defined by the CTE statement
3. CTEAnchor: the parent node of a CTEProducer; a CTEProducer can only be referenced from the corresponding CTEAnchor's right child.

A CTEConsumer is converted to an inlined plan if the corresponding CTE is referenced no more than `inline_cte_referenced_threshold` times (a session variable, 1 by default).
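As a sketch derived from the description above, lowering the threshold forces reuse even when a CTE is referenced only once:

```sql
-- default is 1: a CTE referenced only once is still inlined;
-- with 0, every referenced CTE is materialized and reused
set inline_cte_referenced_threshold = 0;
```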


For SQL:

```sql
EXPLAIN REWRITTEN PLAN
WITH cte AS (SELECT col2 FROM t1)
SELECT * FROM t1 WHERE (col3 IN (SELECT c1.col2 FROM cte c1))
UNION ALL
SELECT * FROM t1 WHERE (col3 IN (SELECT c1.col2 FROM cte c1));
```

Rewritten plan before this PR:

```
+------------------------------------------------------------------------------------------------------------------------------------------------------+
| Explain String                                                                                                                                       |
+------------------------------------------------------------------------------------------------------------------------------------------------------+
| LogicalUnion ( qualifier=ALL, outputs=[col1#14, col2#15, col3#16], hasPushedFilter=false )                                                           |
| |--LogicalJoin[559] ( type=LEFT_SEMI_JOIN, markJoinSlotReference=Optional.empty, hashJoinConjuncts=[(col3#6 = col2#8)], otherJoinConjuncts=[] )      |
| |  |--LogicalProject[551] ( distinct=false, projects=[col1#4, col2#5, col3#6], excepts=[], canEliminate=true )                                       |
| |  |  +--LogicalFilter[549] ( predicates=(__DORIS_DELETE_SIGN__#7 = 0) )                                                                             |
| |  |     +--LogicalOlapScan ( qualified=default_cluster:test.t1, indexName=t1, selectedIndexId=42723, preAgg=ON )                                    |
| |  +--LogicalProject[555] ( distinct=false, projects=[col2#20 AS `col2`#8], excepts=[], canEliminate=true )                                          |
| |     +--LogicalFilter[553] ( predicates=(__DORIS_DELETE_SIGN__#22 = 0) )                                                                            |
| |        +--LogicalOlapScan ( qualified=default_cluster:test.t1, indexName=t1, selectedIndexId=42723, preAgg=ON )                                    |
| +--LogicalProject[575] ( distinct=false, projects=[col1#9, col2#10, col3#11], excepts=[], canEliminate=false )                                       |
|    +--LogicalJoin[573] ( type=LEFT_SEMI_JOIN, markJoinSlotReference=Optional.empty, hashJoinConjuncts=[(col3#11 = col2#13)], otherJoinConjuncts=[] ) |
|       |--LogicalProject[565] ( distinct=false, projects=[col1#9, col2#10, col3#11], excepts=[], canEliminate=true )                                  |
|       |  +--LogicalFilter[563] ( predicates=(__DORIS_DELETE_SIGN__#12 = 0) )                                                                         |
|       |     +--LogicalOlapScan ( qualified=default_cluster:test.t1, indexName=t1, selectedIndexId=42723, preAgg=ON )                                 |
|       +--LogicalProject[569] ( distinct=false, projects=[col2#24 AS `col2`#13], excepts=[], canEliminate=true )                                      |
|          +--LogicalFilter[567] ( predicates=(__DORIS_DELETE_SIGN__#26 = 0) )                                                                         |
|             +--LogicalOlapScan ( qualified=default_cluster:test.t1, indexName=t1, selectedIndexId=42723, preAgg=ON )                                 |
+------------------------------------------------------------------------------------------------------------------------------------------------------+

```

After this PR

```
+------------------------------------------------------------------------------------------------------------------------------------------------------+
| Explain String                                                                                                                                       |
+------------------------------------------------------------------------------------------------------------------------------------------------------+
| LogicalUnion ( qualifier=ALL, outputs=[col1#14, col2#15, col3#16], hasPushedFilter=false )                                                           |
| |--LOGICAL_CTE_ANCHOR#-1164890733                                                                                                                    |
| |  |--LOGICAL_CTE_PRODUCER#-1164890733                                                                                                               |
| |  |  +--LogicalProject[427] ( distinct=false, projects=[col2#1], excepts=[], canEliminate=true )                                                    |
| |  |     +--LogicalFilter[425] ( predicates=(__DORIS_DELETE_SIGN__#3 = 0) )                                                                          |
| |  |        +--LogicalOlapScan ( qualified=default_cluster:test.t1, indexName=t1, selectedIndexId=42723, preAgg=ON )                                 |
| |  +--LogicalJoin[373] ( type=LEFT_SEMI_JOIN, markJoinSlotReference=Optional.empty, hashJoinConjuncts=[(col3#6 = col2#8)], otherJoinConjuncts=[] )   |
| |     |--LogicalProject[370] ( distinct=false, projects=[col1#4, col2#5, col3#6], excepts=[], canEliminate=true )                                    |
| |     |  +--LogicalFilter[368] ( predicates=(__DORIS_DELETE_SIGN__#7 = 0) )                                                                          |
| |     |     +--LogicalOlapScan ( qualified=default_cluster:test.t1, indexName=t1, selectedIndexId=42723, preAgg=ON )                                 |
| |     +--LOGICAL_CTE_CONSUMER#-1164890733#1038782805                                                                                                 |
| +--LogicalProject[384] ( distinct=false, projects=[col1#9, col2#10, col3#11], excepts=[], canEliminate=false )                                       |
|    +--LogicalJoin[382] ( type=LEFT_SEMI_JOIN, markJoinSlotReference=Optional.empty, hashJoinConjuncts=[(col3#11 = col2#13)], otherJoinConjuncts=[] ) |
|       |--LogicalProject[379] ( distinct=false, projects=[col1#9, col2#10, col3#11], excepts=[], canEliminate=true )                                  |
|       |  +--LogicalFilter[377] ( predicates=(__DORIS_DELETE_SIGN__#12 = 0) )                                                                         |
|       |     +--LogicalOlapScan ( qualified=default_cluster:test.t1, indexName=t1, selectedIndexId=42723, preAgg=ON )                                 |
|       +--LOGICAL_CTE_CONSUMER#-1164890733#858618008                                                                                                  |
+------------------------------------------------------------------------------------------------------------------------------------------------------+

```

* Revert "[fix](DECIMALV3)  Fix the error in DECIMALV3 when explicitly casting. (#19926)" (#20204)

This reverts commit 8ca4f9306763b5a18ffda27a07ab03cc77351e35.

* [Bug](segment iterator) remove DCHECK for block row count (#20199)

The DCHECK on the block row count in the segment iterator does not hold when `enable_common_expr_pushdown` is enabled

* [Improvement](runtimefilter) Build bloom filter according to the exact build size for IN_OR_BLOOM_FILTER (#20166)

* [Improvement](runtimefilter) Build bloom filter according to the exact build size for IN_OR_BLOOM_FILTER

* [Enhance](array function) add support for DecimalV3 for array_enumerate_uniq() (#17724)

* [doc](fix)Modified the description about trino #20174

* [typo](docs) fix oceanbase jdbc catalog  error (#20197)

* [runtimeFilter](nereids) use runtime filter default size for debug purpose (#20065)

 use rf default size for debug

* [enhance](match) Support match query without inverted index (#19936)

* [Fix](multi-catalog) Fix q03 in `text_external_brown` regression test by correctly handling text converter parsing errors. (#20190)

Issue Number: close #20189

Fix `q03` in `text_external_brown` regression test by correctly handling text converter parsing errors.

* [regression](decimalv3) Fix output for P1 regression (#20213)

* [fix](test) fix p2 broker load (#20196)

* [fix](session-variable) fix set global var on non-master FE return error  (#20179)

* [fix](catalog) fix create catalog with resource replay issue and kerberos auth issue (#20137)

1. Fix the create-catalog-with-resource replay bug.
	If a user creates a catalog using `create catalog hive with resource xxx`, there is a bug when replaying the edit log:
	the resource may have been dropped, causing an NPE, and FE will fail to start.

	In this PR, I add a new FE config `disallow_create_catalog_with_resource`, default true,
	so that `with resource` is no longer allowed; it will be deprecated later.

	The replay bug is also fixed to avoid the NPE.

2. Fix an issue when creating 2 hive catalogs, one with and one without kerberos authentication.

	When a user creates 2 hive catalogs, one using simple auth and the other using kerberos auth,
	queries may fail with an error like: `Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections.`

	So I add a default property for hive catalogs: `"ipc.client.fallback-to-simple-auth-allowed" = "true"`.
	This property is added automatically when a user creates a hive catalog, to avoid such problems (see the sketch after this list).

3. Fix a `hdfsExists()` calling issue

	When `hdfsExists()` returns a non-zero code, we should check whether it hit an error or the file was simply not found.

4. Some code refactoring

	Avoid importing `org.apache.parquet.Strings`
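As noted in item 2 above, the same property can also be set explicitly when creating a hive catalog (a sketch; the metastore address is a placeholder):

```sql
create catalog hive_simple properties (
    "type" = "hms",
    "hive.metastore.uris" = "thrift://127.0.0.1:9083",
    "ipc.client.fallback-to-simple-auth-allowed" = "true"
);
```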

* [fix](Nereids) filter should not push through union to OneRowRelation (#20132)

## Problem summary
When we want to push a filter through a union, we should check whether the union's children are `OneRowRelation`. If some of them are, we shouldn't push the filter down to those children.

Before this PR
```
mysql> select * from (select 1 as a, 2 as b union all select 3, 3) t where a = 1;
+------+------+
| a    | b    |
+------+------+
|    1 |    2 |
|    3 |    3 |
+------+------+
2 rows in set (0.01 sec)
```

After this PR
```
mysql> select * from (select 1 as a, 2 as b union all select 3, 3) t where a = 1;
+------+------+
| a    | b    |
+------+------+
|    1 |    2 |
+------+------+
1 row in set (0.38 sec)
```

* [Feature-WIP](inverted index) support phrase for inverted index writer (#20193)

* [opt](Nereids) refactor the PartitionTopN (#20102)

Do some small refactoring for the `PartitionTopN` and also address the remaining comments in #18784

* [Feature](Inverted index) add MATCH_ PHRASE query (#20156)

* support query queue (#20048)

support query queue (#20048)

* [Enhancement](execute) make assert_cast output the derived class name (#20212)

before:
F0530 11:02:41.989699 1154607 assert_cast.h:54] Bad cast from type:doris::vectorized::IDataType const* to doris::vectorized::DataTypeAggState const*

after:
F0530 11:24:28.390286 1292475 assert_cast.h:46] Bad cast from type:doris::vectorized::DataTypeNullable* to doris::vectorized::DataTypeAggState const*

* [Improve](CI)Check PR approve status (#20172)

After discussion in the doris community @apache/doris-committers, we limit a PR to be merged only after at least two people approve it.

We can try to run it for a while first, and if everyone gives good feedback, we can use this as a mandatory check.

Since the merge must already be approved by at least one committer, we only need to check whether there are two approvals, without caring about the identity of the approvers.
When there is a request for changes from a committer, a committer dismissal is required before merging, which is enforced by GitHub, so we don't need to handle that case.

* [Feature](compaction) wip: single replica compaction (#19237)

Currently, compaction is executed separately for each backend, and the reconstruction of the index during compaction leads to high CPU usage. To address this, we are introducing single replica compaction, where a specific primary replica is selected to perform compaction, and the remaining replicas fetch the compaction results from the primary replica.

The Backend (BE) requests replica information for all peers corresponding to a tablet from the Frontend (FE). This information includes the host where the replica is located and the replica_id. By calculating hash(replica_id), the replica with the smallest hash value is responsible for executing compaction, while the remaining replicas are responsible for fetching the compaction results from this replica.
The compaction task producer thread, before submitting a compaction task, checks whether the local replica should fetch from its peer. If it should, the task is then submitted to the single replica compaction thread pool.
When performing single replica compaction, the process begins by requesting rowset versions from the target replica. These rowset_versions are then compared with the local rowset versions. The first version that can be fetched is selected.

* [test](regression) add regression test from materialized slot bug (#20207)

The test query includes the conversion of string types to other types, and the processing of materialized columns for nested subqueries, which is the regression test for bug fix(#18783)

* [Fix](multi catalog, nereids)Fix text file required slot bug (#20214)

required_slots in TFileScanRangeParams for an external hive table may be updated after FileQueryScanNode finalizes. For text files, we need to use the original required_slots in params so that the list can be updated later. Otherwise, querying a text file may produce the following error:
[INTERNAL_ERROR]Unknown source slot descriptor, slot_id=3

* [feature](nereids) support the rewrite rule for push-down filter through sort  (#20161)

Support the rewrite rule for push-down filter through sort.
We can directly push-down the filter through sort without any conditions check.

Before this PR:
```
mysql> explain select * from (select * from t1 order by a) t2 where t2.b > 2;
+-------------------------------------------------------------+
| Explain String                                              |
+-------------------------------------------------------------+
| PLAN FRAGMENT 0                                             |
|   OUTPUT EXPRS:                                             |
|     a[#2]                                                   |
|     b[#3]                                                   |
|   PARTITION: UNPARTITIONED                                  |
|                                                             |
|   VRESULT SINK                                              |
|                                                             |
|   3:VSELECT                                                 |
|   |  predicates: b[#3] > 2                                  |
|   |                                                         |
|   2:VMERGING-EXCHANGE                                       |
|      offset: 0                                              |
|                                                             |
| PLAN FRAGMENT 1                                             |
|                                                             |
|   PARTITION: HASH_PARTITIONED: a[#0]                        |
|                                                             |
|   STREAM DATA SINK                                          |
|     EXCHANGE ID: 02                                         |
|     UNPARTITIONED                                           |
|                                                             |
|   1:VTOP-N                                                  |
|   |  order by: a[#2] ASC                                    |
|   |  offset: 0                                              |
|   |                                                         |
|   0:VOlapScanNode                                           |
|      TABLE: default_cluster:test.t1(t1), PREAGGREGATION: ON |
|      partitions=0/1, tablets=0/0, tabletList=               |
|      cardinality=1, avgRowSize=0.0, numNodes=1              |
+-------------------------------------------------------------+
30 rows in set (0.06 sec)
```

After this PR:
```
mysql> explain select * from (select * from t1 order by a) t2 where t2.b > 2;
+-------------------------------------------------------------+
| Explain String                                              |
+-------------------------------------------------------------+
| PLAN FRAGMENT 0                                             |
|   OUTPUT EXPRS:                                             |
|     a[#2]                                                   |
|     b[#3]                                                   |
|   PARTITION: UNPARTITIONED                                  |
|                                                             |
|   VRESULT SINK                                              |
|                                                             |
|   2:VMERGING-EXCHANGE                                       |
|      offset: 0                                              |
|                                                             |
| PLAN FRAGMENT 1                                             |
|                                                             |
|   PARTITION: HASH_PARTITIONED: a[#0]                        |
|                                                             |
|   STREAM DATA SINK                                          |
|     EXCHANGE ID: 02                                         |
|     UNPARTITIONED                                           |
|                                                             |
|   1:VTOP-N                                                  |
|   |  order by: a[#2] ASC                                    |
|   |  offset: 0                                              |
|   |                                                         |
|   0:VOlapScanNode                                           |
|      TABLE: default_cluster:test.t1(t1), PREAGGREGATION: ON |
|      PREDICATES: b[#1] > 2                                  |
|      partitions=0/1, tablets=0/0, tabletList=               |
|      cardinality=1, avgRowSize=0.0, numNodes=1              |
+-------------------------------------------------------------+
28 rows in set (0.40 sec)
```

* [Enhencement](JDBC Catalog) refactor jdbc catalog insert logic (#19950)

This PR refactors the old way of writing data to JDBC External Table & JDBC Catalog, mainly including the following tasks:
1. Continuing the work of @BePPPower's PR #18594: replace the INSERT-SQL-splicing logic with off-heap memory operations and preparedStatement.set calls to write data
2. Add support for writing the largeint type, mainly adapting to java.math.BigInteger, which uses binary operations
3. Delete the SQL-splicing logic from the JDBC External Table & JDBC Catalog write code

ToDo: binary types, like bit, binary, blob...

Finally, special thanks to @BePPPower and @AshinGau for their work

Co-authored-by: Tiewei Fang <43782773+BePPPower@users.noreply.github.com>

* [regression](p0) fix test for `array_enumerate_uniq` (#20231)

* [Chore](log) Remove some verbose log && Change log level (#20236)

* [feature-wip](workload-group) Support setting user default workload group (#20180)

Issue Number: close #xxx

SET PROPERTY 'default_workload_group' = 'group_name';

* [fix](regression)Update external Brown test case out file. #20232

Update external Brown test case out file to match the new precision.

* [chore](third-party) Bump the version of hadoop_libs (#20250)

Fix the issues with the workflow Build Third Party Libraries. See https://github.com/apache/doris-thirdparty/actions/runs/5109407220/jobs/9184234534

* [docs](spark-doris-connector): modify the link of spark-doris-connector (#20159)

* [Enhancement](merge-on-write) optimize bloom filter for primary key index (#20182)

* [Bug](runtimefilter) Fix waiting for runtime filter (#20155)

* [fix](nereids) like function's nullable property should be PropagateNullable (#20237)

* [Fix](dynamic-partition) Try to avoid setting a zero-bucket-size partition. (#20177)

A fallback to avoid the BE crash when a partition's bucket size is 0; the underlying problem is not yet resolved.

* [chore](arm) support build with hadoop libhdfs on arm (#20256)

hadoop-3.3.4.3-for-doris already supports building on ARM

* [fix](pipeline) Enable pipeline explicitly in the plan shape check cases. (#20221)

enable pipeline explicitly in tpcds plan shape check

* [Fix](Planner)fix incorrect pattern when format pattern contains %x%v (#19994)

* [Enhancement] Change Create Resource Group Grammar (#20249)

* [enhancement](ldap) Support refresh ldap cache (#20183)

Support refreshing the LDAP cache:
refresh ldap all;
refresh ldap;
refresh ldap for user1;
Support caching non-existent LDAP users.
When LDAP is enabled and someone logs in with a Doris user that does not exist in the LDAP service, this avoids accessing the LDAP service on every authentication-heavy operation such as `show databases`.

* [Fix](Nereids) fold constant result is wrong on functions relative to timezone (#19863)

* [improvement](bitmap) Using set to store a small number of elements to improve performance (#19973)

Test on SSB 100g:

select lo_suppkey, count(distinct lo_linenumber) from lineorder group by lo_suppkey;
exec time: 4.388s

create materialized view:

create materialized view customer_uv as select lo_suppkey, bitmap_union(to_bitmap(lo_linenumber)) from lineorder group by lo_suppkey;
select lo_suppkey, count(distinct lo_linenumber) from lineorder group by lo_suppkey;
exec time: 12.908s

test with the patch, exec time: 5.790s

* [opt](nereids) generate in-bloom filter if target is local for pipeline mode (#20112)

update in-filter usage in pipeline mode:
1. if the target is local, we use in-bloom filter. Let BE choose in or bloom according to actual distinctive number
2. set default runtime_filter_max_in_num to 1024

* [fix](match_phrase) Fix the inconsistent query result for 'match_phrase' after creating index without support_phrase property (#20258)

If an inverted index is created without the support_phrase property, the match_phrase condition is kept and filtered by the match function.
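To keep match_phrase fully accelerated by the index, support_phrase can be enabled when the index is created (a sketch; the table and column are hypothetical):

```sql
create index idx_msg on logs (msg) using inverted
properties ("parser" = "english", "support_phrase" = "true");
```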

* [deps](aws) upgrade to 1.9.272 to fix non-compliant RFC3986 encoding (#20252)

* [Bug](memtable) fix a bug occurred when we were inserting data into duplicate table without keys (#20233)

* [fix](load_profile) fix rows stat and add close_wait in sink (#20181)

* [Fix](Nereids) bitmap type should not be used in comparison predicate (#19807)

When using Nereids, if we use a comparison operator on a bitmap type, an analysis exception needs to be thrown.

For example:
select id from (select BITMAP_EMPTY() as c0 from expr_test) as ref0 where c0 = 1 order by id

Here c0 in the subquery is a bitmap type; this scenario is not supported right now.

* [fix](checksum) delete predicates might be inconsistent with rowset readers in checksum task (#20251)

The BlockReader captures rowsets and initializes the delete_handler in different places. If a base compaction happens in between, it may obtain inconsistent delete handlers. Therefore, these two operations are placed under the same lock.

* [fix](tvf) s3 tvf specify region and s3.region params failed (#19921)

* [Enhancement](merge-on-write) Performance optimization of calculations of delete bitmap between segments (#20153)

1. Use heap sort to find duplicated keys between segments and update the delete-bitmap. The old implementation traversed all keys in all segments, used each key to search for duplicates in earlier segments, and then marked them for deletion.

2. Trick: Each time the heap top is popped as a key1, the new heap top is key2, allowing for jumping directly from key1 to key2 instead of advancing iteratively.

3. Effect: This technique works well when there are many segments within the same rowset and the imported data is relatively ordered.

* [refactor](dynamic table) Make segment_writer unaware of dynamic schema, and ensure parsing is exception-safe. (#19594)

1. make ColumnObject exception safe
2. introduce FlushContext and construct schema at memtable flush stage to make segment independent from dynamic schema
3. add more test cases

* [fix](regression-test) add jdbc timeout (#20228)

In some cases (or bugs), Doris may return query results to JDBC that JDBC cannot recognize, so it hangs.
To fix this, add a timeout of 30 minutes to the JDBC connection.

* [fix][regression-test] set timeout of curl in regression test to avoid hanged when be crashed. (#20222)

Currently in regression-test, when a BE crashes, the suite thread gets stuck because curl does not set a timeout.
To solve this, encapsulate the calls to BE into a function and set the timeout uniformly to avoid getting stuck.

* [fix](nereids)(planner) case when should return NullLiteral when all case result is NullLiteral (#20280)

* [bug](parse) fix can't create aggregate column with agg_state (#20235)

fix can't create aggregate column with agg_state

* [pipeline](load) support pipeline load (#20217)

* fix fe meta upgrade error (#20291)

Co-authored-by: yiguolei <yiguolei@gmail.com>

* [Fix](Nereids) fix some insert into select bugs (#20052)

fix 3 bugs:

1. failed to insert into a table with mv.
```sql
create table t (
    id int,
   c1 int,
   c2 int,
   c3 int
) duplicate key(id)
distributed by hash(id) buckets 4

create materialized view k12s3m as select id, sum(c1), max(c3) from t group by id;

insert into t select -4, -4, -4, 'd';
```
The insert will raise an exception because the mv column is not handled. Now we add a target column and value as defineExpr.

2. failed to insert into a table with not all the columns.
```sql
insert into t(c1, c2) select c1, c2 from t
```
where t is (id ukey, c1, c2, c3); this would insert too much data. We fix it by changing the output partitions.

3. failed to insert into a table with a complex select.
When the select statement has a join or an aggregation, the bug is fixed in a way similar to the fix for the 2nd bug.

* [fix](multi catalog)Fix nereids planner text format include extra column index bug (#20260)

The Nereids planner included all column indexes in TFileScanRangeParams, which may make the column projection incorrect for text format tables, because the csv reader uses the column index position to split a line. Extra column indexes lead to wrong split results. This PR resets the column index after Projection and removes the useless column indexes.

* [fix](regression) regression test test_bitmap_filter_nereids could not run (#20293)

* [feature](decimal)support  cast rounding half up  and div precision increment in decimalv3. (#19811)

* [improvement](Nereids): limit Memo groupExpression size. (#20272)

* [Improve](Scan) add a session variable to make scan run serial (#20220)

Parallel scanning can result in some read amplification: for example, `select * from xx limit 1` actually requires only one row of data. However, due to parallel scanning of multiple tablets, read amplification occurs, leading to performance bottlenecks in high-concurrency scenarios. This PR adds a session variable to enforce serial scanning, which can help mitigate this issue.
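A sketch of how such a point query could opt in to serial scanning; the session variable name used here is an assumption, since the PR text does not spell it out:

```sql
-- assumed name for the new session variable added by this PR
set enable_scan_node_run_serial = true;
select * from xx limit 1;
```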

* [P2](test) Fix P2 output (#20311)

* [Fix](Nereids) Fix function test case unstable by adding order by (#20295)

Nereids function cases do not have an ORDER BY clause, so the results are unstable; ORDER BY is added to ensure stability.

* [fix](docs)Correct the year and month format placeholder to lower case (#20210)

* [Bug](exec) push down no group by agg min cause error result (#20289)

sql """
CREATE TABLE t1_int (
num int(11) NULL,
dgs_jkrq bigint(20) NULL
) ENGINE=OLAP
DUPLICATE KEY(num)
COMMENT 'OLAP'
DISTRIBUTED BY HASH(num) BUCKETS 1
PROPERTIES (
"replication_allocation" = "tag.location.default: 1",
"storage_format" = "V2",
"light_schema_change" = "true",
"disable_auto_compaction" = "false",
"enable_single_replica_compaction" = "false"
);
"""
sql """insert into t1_int values(1,1),(1,2),(1,3),(1,4),(1,null);"""
qt_sql """
select min(dgs_jkrq) from t1_int;
"""

Before the change, we got the wrong result: 4.

After the change, we get the right result: 1.

* [fix](regression-test) fix multi-thread problem of regression-test #20322

* [bug](udaf) fix java-udaf test case failed with decimal (#20315)

Some java-udaf test cases with decimal fail in P0 because the decimal scale is not set correctly.

* [enhancement](publish) print detailed info for failed publish (#20309)

* [feature-wip](duplicate_no_keys) Add some test cases of all the duplicate tables in test case tpcds_sf100_without_key_p2 and make them duplicate tables without keys (#20332)

* [fix](regression-test) variable's scope returned by curl (#20347)

* [fix](planner)Fix missing kw for workload #20319

1. Add a usage document for the Workload Group query queue;
2. Fix a missing keyword for workload, which may cause creating a workload group to fail.

* [feature-wip](duplicate-no-keys) schema change support for duplicate no keys (#19326)

* [fix](nereids)dphyper join reorder may cache wrong project list for project node (#20209)

* [fix](nereids)dphyper join reorder may cache wrong project list for project node

* [pipeline](rpc) support closure reuse in pipeline exec engine (#20278)

* [Bug](runtime filter) fix NPE if runtime filter has no target (#20338)

* [typo](docs)Correct the getting started document (#20245)

* [enhancement](struct-type)support comment for struct field (#20200)

support comment for struct field

* [Fix](Nereids) should not gather data when sink (#20330)

* [Fix](multi-catalog) fix oss access issue with aws s3 sdk (#20287)

* [Docs](inverted index) update docs for inverted index parser_mode and match_phrase support (#20266)

* [Profile](exec) Remove unless profile in pipeline exec engine (#20337)

* [Bug](pipeline) Fix memory leak if query is canceled caused by memory limit (#20316)

* [typo](docs) fix release note 2.0 zh url (#20320)

* [refactor](stats) Persist status of analyze task to FE meta data (#20264)

1. In the past, we used a BE table named `analysis_jobs` to persist the status of analyze jobs/tasks; however, there were many flaws, e.g. if a BE crashed, the analyze job/task would fail but its status would never get updated.
2. Support `DROP ANALYZE JOB [job_id]` to delete an analyze job
3. Support `SHOW ANALYZE TASK STATUS [job_id]` to get the task status of a specific job
4. Restrict the execution condition of auto analyze: it can only run again when the last execution of the auto analyze job finished a while ago
5. Support analyzing a whole DB

* [performance](load) support parallel memtable flush for unique key tables (#20308)

* [fix](olap) deletion statement with space conditions did not take effect (#20349)

Deletion statement like this:

delete from tb where k1 = '  ';
The rows whose k1's value is ' ' will not be deleted.

* [Feature](array-functions)improve array functions for array_last_index (#20294)

Currently we only support array_first_index for lambda input, but not array_last_index.

* [pipeline](fix) rm github_token, no need for it (#20360)

* [Improve](json-array) Support json array with nereids bool (#20248)

Support json array with nereids bool
now : 

```
set enable_nereids_planner=true;
mysql> SELECT json_array(1, "abc", NULL, TRUE, '10:00:00');
+----------------------------------------------+
| json_array(1, 'abc', NULL, TRUE, '10:00:00') |
+----------------------------------------------+
| [1,"abc",null,false,"10:00:00"]              |
+----------------------------------------------+
1 row in set (0.02 sec)
```
 
Nereids booleans are "true"/"false", not '0'/'1', so we always got false.

* [enhancement](txn) print commit backends when commit fails (#20367)

Print commit backends when a commit fails.

* [fix](Nereids) forbid unexpected expression on filter and fix two more bugs (#20331)

fix below bugs:
1. filter expressions were not checked: aggregate functions, grouping scalar functions and window expressions should not appear in a filter
2. should not change the nullable property of an aggregate function when it is used as a window function in a window expression
3. bitmap and other metric types should not appear in the order by or partition by of a window expression

* [Optimize](Function) Add fast path for col like '%%' or col like '%' or regexp '\\.*' (#20143)

Add a fast path for col like '%%', col like '%', and regexp '\\.*'
(1) like: about 34% speed-up in a count() test;
supports col like '%%', col like '%', col not like '%%', col not like '%'

(2) regexp: about 37% speed-up in a count() test;
supports col regexp '\\.', col not regexp '\\.'

Q1: select count() From hits where url like '%';
Q2: select count() From hits where url regexp '\\.*';

* [fix](nereids) add fragment id on all PhysicalRelation (#20371)

fix "cannot find fragment id for scan" exception

* [chore](third-party) Bump the version of hadoop_libs (#20369)

Bump the version of hadoop_libs to build HDFS related libraries only.

* [chore](function) Refactor FunctionSet Initialization for Better Maintainability and Compilation Success (#20285)

In this PR, I have refactored the initialization of the FunctionSet. Previously, all the functions were in one large method which led to the generation of Java code that was too long. This posed a problem for the compiler, as the length of the method exceeded the limit imposed by the Java compiler.

To resolve this issue and improve the readability and manageability of our code, I have categorized these functions by type, and created dedicated initialization methods for each type. As such, our code is now not only more readable and understandable, but also each method is of a length that is acceptable to the compiler and can be compiled successfully.

Moreover, this change makes it easier for us to add new functions as we can directly locate the right category and add new functions there.

This is a significant change aimed at enhancing the maintainability and scalability of our code, while ensuring that our code can be successfully compiled.

* [Enhancement](tvf) Backends tvf supports authentication (#20333)

Add authentication for backends tvf.

* [typo](docs) Update the `help create` command display (#20357)

* [refactor](jdbc catalog) Refactor the JdbcClient code (#20109)

This PR does the following:

1. This PR is a substantial refactor of the JDBC client architecture. The previous monolithic JDBC client has been refactored into an abstract base class `JdbcClient`, and a set of database-specific subclasses (e.g., `JdbcMySQLClient`, `JdbcOracleClient`, etc.), and the JdbcClient required config, abstract into an object. This allows for improved modularity, easier addition of support for new databases, and cleaner, more maintainable code. This change is backward-compatible and does not affect existing functionality.
2. As a result of client refactoring, OceanBaseClient can automatically recognize the mode of operation as MySQL or Oracle, so we cancel the oceanbase_mode property in the Jdbc Catalog, but due to the cancellation of the property, When creating a single OceanBase Jdbc Table, the table type needs to be filled in as oceanbase(mysql mode) or oceanbase_oracle(oracle_mode). The above work is a change in the usage behavior, please note.
3. For the PostgreSQL Jdbc Catalog, I did two things:

      1.   The adaptation to MATERIALIZED VIEW and FOREIGN TABLE is added
      2.   Fixed reading jsonb, which had been incorrectly changed to json in a previous PR

4. fix some jdbc catalog test case
5. modify oceanbase jdbc doc

And,Thanks @wolfboys for the guidance

* [Docs](docs) Update BE http documents (#17604)

* [fix](match) fix match query with compound predicates return -6003 (#20361)

* [fix](workload-group) fix incorrect memoryLimitPercent value (#20377)

* [fix](Nereids) should not inherit child's limit and offset when generate exchange node (#20373)

In the legacy planner, when we create a new exchange node, it inherits its child's limit and offset.
But in Nereids, we should not do this, because if we need to set a limit or offset, we set it manually.
In this PR, we use a new ctor of ExchangeNode to ensure the limit and offset are not set unexpectedly.

* [fix](match query) fix array column match query failed without inverted index (#20344)

* [Enhancement](function) optimize the padding function && add string length check on string op (#20363)

* [Bug](schema-change) make test_dup_mv_schema_change more stable #20379

make test_dup_mv_schema_change more stable

* [fix](regression) test_partial_update_with_row_column (#20279)

* [DOCS](data-types) remove old types (#20375)

* [typo](docs)clearly describe the rename syntax (#20335)

* [fix](dynamic_partition) fix dead lock when modify dynamic partition property for olap table (#20390)

Co-authored-by: caiconghui1 <caiconghui1@jd.com>

* [opt](MergedIO) optimize merge small IO, prevent amplified read (#20305)

Optimize the strategy of merging small IO to prevent severe read amplification, and turn off merged IO when the file cache is enabled.
Adjustable parameters:
```
// the max amplified read ratio when merging small IO
max_amplified_read_ratio=0.8
// the min segment size
file_cache_min_file_segment_size = 1048576
```

* [improvement](inverted index) skip write index on load and generate index on compaction (#20325)

* [fix](inverted index) fix transaction id changed when light index change (#20302)

* [typo](doc)Add a demo of export minio (#20323)

* [docs](workload-group) add user binding workload group docs (#20382)

* [docs](load-balancing):delete duplicate sentences and improve the documentation description (#20297)

* [Doc](statistics) supplement stats doc (regression test and automatic collection) (#20071)

* [docs](auth) forbid 127.0.0.1 passwd free login (#19096)

* [typo](doc)Update stream-load-manual.md (#20277)

Modify the sequential label

* [typo](doc)Update compilation-general.md (#20262)

Add some explanations about docker run parameter

* [typo](doc)Update compilation-general.md (#20261)

Add some explanations about docker run parameter

* [typo](doc)Update runtime-filter.md (#20292)

* [typo](doc)Remove the description of the BE configuration 'serialize_batch' which has been removed (#20163)

* [typo](doc) Fixed typos in hive.md (#19457)

* [fix](community) fix PR template (#20400)

* [pipeline](opt) Opt fragment instance prepare performance by thread pool (#20399)

* [Fix](lazy_open) fix lazy open commit info lose (#20404)

* [build](scripts) modify build-for-release.sh (#20398)

* [Fix](Planner)fix cast date/datev2/datetime to float/double return null. (#20008)

* [fix](Nereids) give clean error message when there are subquery in the on clause (#20211)

Add the rule for checking the join node in the `analysis/CheckAnalysis.java` file. When we check the join node, we should check its ON clause. If there is a subquery expression, we should throw an exception.

Before this PR
```
mysql> select a.k1 from baseall a join test b on b.k2 in (select 49);
ERROR 1105 (HY000): errCode = 2, detailMessage = Unexpected exception: nul
```

After this PR
```
mysql> select a.k1 from baseall a join test b on b.k2 in (select 49);
ERROR 1105 (HY000): errCode = 2, detailMessage = Unexpected exception: Not support OnClause contain Subquery, expr:k2 IN (INSUBQUERY) (LogicalOneRowRelation ( projects=[49 AS `49`#28], buildUnionNode=true ))
```
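
Conceptually, the check just walks the ON-clause expression tree and rejects any subquery node. A self-contained, hypothetical sketch (not the actual Nereids Expression/SubqueryExpr classes):

```
// Toy sketch of rejecting subqueries in an ON clause; the real rule lives in Nereids' CheckAnalysis.
import java.util.List;

public class OnClauseCheckExample {
    interface Expr { List<Expr> children(); }
    record Subquery(List<Expr> children) implements Expr { }
    record InPredicate(List<Expr> children) implements Expr { }
    record SlotRef(String name) implements Expr {
        public List<Expr> children() { return List.of(); }
    }

    // True if any node in the expression tree is a subquery.
    static boolean containsSubquery(Expr e) {
        return e instanceof Subquery
                || e.children().stream().anyMatch(OnClauseCheckExample::containsSubquery);
    }

    static void checkOnClause(Expr onClause) {
        if (containsSubquery(onClause)) {
            throw new IllegalStateException("Not support OnClause contain Subquery, expr: " + onClause);
        }
    }

    public static void main(String[] args) {
        try {
            // b.k2 IN (select 49) -> rejected with a clear message instead of an NPE.
            checkOnClause(new InPredicate(List.of(new SlotRef("k2"), new Subquery(List.of()))));
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```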

* [opt](Nereids) prefer datev2 / datetimev2 in date related functions (#20224)

1. Update the signature order of all date related functions.
1.1. If the return value needs to be computed with time info, signatures with datetimev2 args come first, followed by datev2, datetime and date.
1.2. If the return value needs only date info, signatures with datev2 args come first, followed by datetimev2, date and datetime.
2. Prefer datev2 if we must cast date to datev2 or datetime/datetimev2.

* [fix](dynamic partition) partition create failed after alter distributed column (#20239)

This PR fixes the following two problems:

Problem 1: altering a column comment makes adding a dynamic partition fail (issue #10811)

1. Create a table with a dynamic partition policy;
2. Restart FE;
3. Alter the distribution column comment;
4. Alter dynamic_partition.end to trigger adding a new partition by the dynamic partition scheduler.

Then we get the following error log, and the new partition fails to be created:
dynamic add partition failed: errCode = 2, detailMessage = Cannot assign hash distribution with different distribution cols. default is: [id int(11) NULL COMMENT 'new_comment_of_id'], db: default_cluster:example_db, table: test_2

Problem 2: renaming the distribution column makes inserts into old partitions fail (#20405)

The key point of the reproduce steps is restarting FE.

It seems all versions are affected, including master, lts-1.1 and so on.

* [fix](memory) Fix query memory tracking #20253

The memory released at query end is recorded in the query mem tracker, mainly the memory in _runtime_state.
Fix page no-cache memory tracking.
Now the main reason for inaccurate query memory tracking is that the virtual memory used by the query is sometimes much larger than the actual memory, and the mem hook counts virtual memory.

* [fix](nereids) select with specified partition name does not work as expected (#20269)

This PR fixes the issue of selecting a specific partition; certain code related to this feature was accidentally deleted.

* [Optimize](function) Optimize locate function by comparing across strings (#20290)

Optimize the locate function by comparing across strings: about 90% speedup, tested by sum().

* [Enhancement](Agg State) store function name and result nullability in agg state type (#20298)

Store the function name and whether the result is nullable in the agg state type.

* [fix](Nereids): fix filter can't be pushed down through unionAll (#20310)

* [Feature](Nereids) support update unique table statement (#20313)

* [feature](profile)Add the filtering info of the in filter in profile #20321

Currently, it is difficult to obtain the id of in filters, so some in filters' id is -1.

* [feature](planner)(nereids) support user defined variable (#20334)

Support user-defined variables.
After this PR, we can use `set @a = xx` to define a user variable and use it in the query like `select @a`.

The changes in this PR (see the sketch after this list):
1. Support the grammar for `set user variable` in the parser.
2. Add the `userVars` in `VariableMgr` to store the user-defined variables.
3. For the `set @a = xx`, we will store the variable name and its value in the `userVars` in `VariableMgr`.
4. For the `select @a`, we will get the value for the variable name in `userVars`.
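
A rough, hypothetical sketch of the storage side of this (the real VariableMgr in the FE differs; the map-based store below is only illustrative):

```
// Toy sketch of a user-variable store; not the actual VariableMgr implementation.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class UserVariableExample {
    // "userVars": variable name (without the leading '@') -> current value.
    private static final Map<String, Object> USER_VARS = new ConcurrentHashMap<>();

    // Handles `set @a = xx`.
    static void setUserVariable(String name, Object value) {
        USER_VARS.put(name.toLowerCase(), value);
    }

    // Handles `select @a`; an unset variable evaluates to NULL, modeled here as null.
    static Object getUserVariable(String name) {
        return USER_VARS.get(name.toLowerCase());
    }

    public static void main(String[] args) {
        setUserVariable("a", 42);                   // set @a = 42
        System.out.println(getUserVariable("a"));   // select @a -> 42
        System.out.println(getUserVariable("b"));   // select @b -> null (unset)
    }
}
```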

* [improve](nereids)derive analytics node stats (#20340)

1. Derive analytic node stats, add support for rank().
2. Update filter estimation stats derivation: update the row count of the filter column.
3. Use ColumnStatistics.orginal to replace ColumnStatistics.orginalNdv, where ColumnStatistics.orginal is the column statistics obtained from TableScan.
TPCDS 70 on tpcds_sf100 improved from 23 sec to 2 sec.
This PR has no performance regression on other TPC-DS and TPC-H queries.

* [fix](nereids) avg size of column stats is always 0 (#20341)

It takes a lot of effort to compute the avgSizeByte for column stats.
We use schema information to avoid computing the actual average size.

* [fix](stats) skip forbid_unknown_col_stats check for invisible column and internal db (#20362)

1. skip forbidUnknownColStats check for invisible columns
2. use columsStatistics.isUnknown to tell whether the stats are unknown
3. skip unknown stats check for internal schema

* [Fix](Nereids) Fix duplicated name in view does not throw exception (#20374)

When using Nereids, if there is a duplicated name in the output of a view, we need to throw an exception. A check rule was added to the bindExpression rule set.

* [fix](load) in strict mode, return error for insert if datatype convert fails (#20378)

* [fix](load) in strict mode, return error for load and insert if datatype convert fails

Revert "[fix](MySQL) the way Doris handles boolean type is consistent with MySQL (#19416)"

This reverts commit 68eb420cabe5b26b09d6d4a2724ae12699bdee87.

Since it changed other behaviours, e.g. in strict mode `insert into t_int values ("a")`
would result in 0 being inserted into the table, but it should return an error instead.

* fix be ut

* fix regression tests

* [fix](nereids) change defaultConcreteType function's return value for decimal (#20380)

1. add default decimalv2 and decimalv3 for NullType
2. change defaultConcreteType of decimalv3 to this

* [performance](load) improve memtable sort performance (#20392)

* [fix][refactor](backend-policy)(compute) refactor the hierarchy of external scan node and fix compute node bug #20402

There should be 2 kinds of ScanNode:

OlapScanNode
ExternalScanNode
The Backends used for ExternalScanNode should be controlled by FederationBackendPolicy.
But currently, only FileScanNode is controlled by FederationBackendPolicy; other scan nodes such as MysqlScanNode and
JdbcScanNode will use Mix Backends even if we enable and prefer to use Compute Backends.

In this PR, I modified the hierarchy of ExternalScanNode, the new hierarchy is:

ScanNode
    OlapScanNode
    SchemaScanNode
    ExternalScanNode
        MetadataScanNode
        DataGenScanNode
        EsScanNode
        OdbcScanNode
        MysqlScanNode
        JdbcScanNode
        FileScanNode
            FileLoadScanNode
            FileQueryScanNode
                MaxComputeScanNode
                IcebergScanNode
                TVFScanNode
                HiveScanNode
                    HudiScanNode
Previously, the BackendPolicy was a member of FileScanNode; now I moved it to ExternalScanNode,
so that all subtypes of ExternalScanNode can use BackendPolicy to choose Compute Backends to execute the query.

All ExternalScanNode subclasses should implement the abstract method createScanRangeLocations().

For scan nodes like the jdbc scan node and mysql scan node, the scan range locations will be selected randomly from
compute nodes (if preferred).

As for compute node selection: if all scan nodes are external scan nodes and prefer_compute_node_for_external_table
is set to true, only compute nodes will be selected as BEs for this query.
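
A much-simplified, hypothetical sketch of the shape of this refactor (class and method names below are placeholders; the real Doris classes have different signatures, and the real policy selects backends randomly rather than taking the first match):

```
// Toy sketch of an ExternalScanNode base class sharing a backend policy. Not the actual Doris code.
import java.util.List;

public class ScanNodeHierarchyExample {
    record Backend(String host, boolean isComputeNode) { }
    record ScanRangeLocation(Backend backend) { }

    // Stands in for FederationBackendPolicy: picks the first eligible backend, preferring compute nodes when asked.
    static class BackendPolicy {
        private final List<Backend> backends;
        private final boolean preferComputeNode;

        BackendPolicy(List<Backend> backends, boolean preferComputeNode) {
            this.backends = backends;
            this.preferComputeNode = preferComputeNode;
        }

        Backend pick() {
            return backends.stream()
                    .filter(b -> !preferComputeNode || b.isComputeNode())
                    .findFirst()
                    .orElseThrow();
        }
    }

    // Shared base: holds the policy and forces subclasses to say how to build scan range locations.
    abstract static class ExternalScanNode {
        protected final BackendPolicy backendPolicy;

        ExternalScanNode(BackendPolicy backendPolicy) {
            this.backendPolicy = backendPolicy;
        }

        abstract List<ScanRangeLocation> createScanRangeLocations();
    }

    // A JDBC-like scan node places its single range on a backend chosen by the policy.
    static class JdbcLikeScanNode extends ExternalScanNode {
        JdbcLikeScanNode(BackendPolicy policy) { super(policy); }

        @Override
        List<ScanRangeLocation> createScanRangeLocations() {
            return List.of(new ScanRangeLocation(backendPolicy.pick()));
        }
    }

    public static void main(String[] args) {
        BackendPolicy policy = new BackendPolicy(
                List.of(new Backend("mix-be-1", false), new Backend("compute-be-1", true)), true);
        System.out.println(new JdbcLikeScanNode(policy).createScanRangeLocations());
    }
}
```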

* [fix](sequence) value predicates shouldn't be pushed down when there is a sequence column (#20408)

* (fix)[sequence] value predicates shouldn't be pushed down when there is a sequence column

* add case

* [Fix] (tablet) fix tablet queryable set (#20413) (#20414)

* [fix](conf) fix fe host in doris-cluster.conf #20422

* [fix](workload-group)  fix workload group non-existence error (#20428)

* Fix query hang when using queue (#20434)

* [fix](execution) result_filter_data should be filled with 0 when can_filter_all is true (#20438)

* [fix](Nereids) throw NPE when sql cannot be parsed by all planner (#20440)

* [bug](jdbc) fix trino date/datetime filter (#20443)

When querying Trino's JDBC catalog, if our WHERE filter condition is k1 >= '2022-01-01', this format is incorrect. 
In Trino, the correct format should be k1 >= date '2022-01-01' or k1 >= timestamp '2022-01-01 00:00:00'. 
Therefore, the date string in the WHERE condition needs to be converted to the date or timestamp format supported by Trino.
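
A self-contained, hypothetical sketch of the literal rewrite (the actual JDBC catalog code in the FE is more involved and handles more types):

```
// Toy sketch: wrap quoted date/datetime strings with Trino's typed literal keywords.
import java.util.regex.Pattern;

public class TrinoDateLiteralExample {
    private static final Pattern DATE = Pattern.compile("^\\d{4}-\\d{2}-\\d{2}$");
    private static final Pattern DATETIME = Pattern.compile("^\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}$");

    // Prepends Trino's date/timestamp keyword when a quoted literal looks like a date or datetime.
    static String toTrinoLiteral(String quoted) {
        String inner = quoted.substring(1, quoted.length() - 1); // strip the surrounding quotes
        if (DATE.matcher(inner).matches()) {
            return "date " + quoted;
        }
        if (DATETIME.matcher(inner).matches()) {
            return "timestamp " + quoted;
        }
        return quoted;
    }

    public static void main(String[] args) {
        System.out.println("k1 >= " + toTrinoLiteral("'2022-01-01'"));           // k1 >= date '2022-01-01'
        System.out.println("k1 >= " + toTrinoLiteral("'2022-01-01 00:00:00'"));  // k1 >= timestamp '2022-01-01 00:00:00'
    }
}
```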

* [fix](load) fix generate delete bitmap in memtable flush (#20446)

1. Generate delete bitmap for one segment at a time.
2. Generate delete bitmap before segment compaction.
Fix #20445

* [fix](executor)Fix duplicate timer and add open timer #20448

1. Currently, the node's total timer counter is timed twice (in Open and alloc_resource), which may make the timer in the profile incorrect.
2. Add more timers to find more code that may cost much time.

* [improvement](column reader) lazy load indices (#20456)

Currently, when reading column data, all types of indices are read even if they are not actually used; this PR implements lazy loading of indices.
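
The general pattern is ordinary lazy initialization; a hypothetical Java sketch of the idea (the actual column reader is in the C++ BE):

```
// Toy sketch of lazy index loading: the expensive load runs only if and when the index is first used.
import java.util.function.Supplier;

public class LazyIndexExample {
    static class Lazy<T> {
        private final Supplier<T> loader;
        private volatile T value;

        Lazy(Supplier<T> loader) { this.loader = loader; }

        T get() {
            if (value == null) {
                synchronized (this) {
                    if (value == null) {
                        value = loader.get();
                    }
                }
            }
            return value;
        }
    }

    public static void main(String[] args) {
        Lazy<String> zoneMapIndex = new Lazy<>(() -> {
            System.out.println("loading zone map index...");  // printed at most once, only if used
            return "zone-map";
        });
        // A query that never touches the index never pays the load cost;
        // one that does pays it exactly once:
        System.out.println(zoneMapIndex.get());
        System.out.println(zoneMapIndex.get());
    }
}
```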

* [enhancement](profile) add build get child next time (#20460)

Currently, build time does not include the child(1)->get_next time, which is very confusing in the shared hash table scenario, so I added a profile counter for it.

---------

Co-authored-by: yiguolei <yiguolei@gmail.com>

* [fix](regression) fix export file test cases (#20463)

* [Fix](WorkloadGroup)Fix query queue nereids bug #20484

* [fix](Nereids) join condition not extracted as conjunctions (#20498)

* [fix](log) publish version log is printed too frequently (#20507)

* fix fe part compile error v1

* fix fe part compile error v2

* fix fe part compile error v3

* fix fe…