diff --git a/dashboard/dashboard-faq.md b/dashboard/dashboard-faq.md index b976e9663eb1d..21b115ca2eed0 100644 --- a/dashboard/dashboard-faq.md +++ b/dashboard/dashboard-faq.md @@ -57,7 +57,7 @@ If your deployment tool is TiUP, take the following steps to solve this problem. ### An `invalid connection` error is shown on the **Slow Queries** page -The possible reason is that you have enabled the Prepared Plan Cache feature of TiDB. As an experimental feature, when enabled, Prepared Plan Cache might not function properly in specific TiDB versions, which could cause this problem in TiDB Dashboard (and other applications). You can disable Prepared Plan Cache by setting the system variable `tidb_enable_prepared_plan_cache = OFF`. +The possible reason is that you have enabled the Prepared Plan Cache feature of TiDB. As an experimental feature, when enabled, Prepared Plan Cache might not function properly in specific TiDB versions, which could cause this problem in TiDB Dashboard (and other applications). You can disable Prepared Plan Cache by setting the system variable [`tidb_enable_prepared_plan_cache = OFF`](/system-variables.md#tidb_enable_prepared_plan_cache-new-in-v610). 
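As a quick illustration of the workaround described above (any SQL client works; the variable is cluster-wide when set with `GLOBAL`):

```sql
-- Workaround for the `invalid connection` error on the Slow Queries page:
-- disable Prepared Plan Cache for sessions created after this statement.
SET GLOBAL tidb_enable_prepared_plan_cache = OFF;

-- Verify the new value.
SHOW VARIABLES LIKE 'tidb_enable_prepared_plan_cache';
```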
### A `required component NgMonitoring is not started` error is shown diff --git a/grafana-tidb-dashboard.md b/grafana-tidb-dashboard.md index f539a536df1bc..19c88de113a01 100644 --- a/grafana-tidb-dashboard.md +++ b/grafana-tidb-dashboard.md @@ -83,6 +83,7 @@ To understand the key metrics displayed on the TiDB dashboard, check the followi - Execution Duration: the statistics of the execution time for SQL statements - Expensive Executor OPS: the statistics of the operators that consume many system resources per second, including `Merge Join`, `Hash Join`, `Index Look Up Join`, `Hash Agg`, `Stream Agg`, `Sort`, `TopN`, and so on - Queries Using Plan Cache OPS: the statistics of queries using the Plan Cache per second + - Plan Cache Miss OPS: the statistics of Plan Cache misses per second - Distsql - Distsql Duration: the processing time of Distsql statements diff --git a/optimizer-hints.md b/optimizer-hints.md index 84ab465681dd6..42239b78e45f3 100644 --- a/optimizer-hints.md +++ b/optimizer-hints.md @@ -278,7 +278,7 @@ In addition to this hint, setting the `tidb_enable_index_merge` system variable > **Note:** > -> - `NO_INDEX_MERGE` has a higher priority over `USE_INDEX_MERGE`. When both hints are used, `USE_INDEX_MERGE` does not take effect. +> - `NO_INDEX_MERGE` has a higher priority than `USE_INDEX_MERGE`. When both hints are used, `USE_INDEX_MERGE` does not take effect. > - For a subquery, `NO_INDEX_MERGE` only takes effect when it is placed at the outermost level of the subquery.
### USE_TOJA(boolean_value) diff --git a/sql-prepared-plan-cache.md b/sql-prepared-plan-cache.md index cebfd810f1e63..0c04ae56a6e52 100644 --- a/sql-prepared-plan-cache.md +++ b/sql-prepared-plan-cache.md @@ -53,7 +53,7 @@ There are several points worth noting about execution plan caching and query per - Considering that the parameters of `Execute` are different, the execution plan cache prohibits some aggressive query optimization methods that are closely related to specific parameter values to ensure adaptability. As a result, the query plan may not be optimal for certain parameter values. For example, the filter condition of the query is `where a > ? And a < ?`, and the parameters of the first `Execute` statement are `2` and `1` respectively. Considering that these two parameters may be `1` and `2` in the next execution, the optimizer does not generate the optimal `TableDual` execution plan that is specific to the current parameter values; - If cache invalidation and elimination are not considered, an execution plan cache is applied to various parameter values, which in theory also results in non-optimal execution plans for certain values. For example, if the filter condition is `where a < ?` and the parameter value used for the first execution is `1`, then the optimizer generates the optimal `IndexScan` execution plan and puts it into the cache. In the subsequent executions, if the value becomes `10000`, the `TableScan` plan might be the better one. But due to the execution plan cache, the previously generated `IndexScan` is used for execution. Therefore, the execution plan cache is more suitable for application scenarios where the query is simple (the ratio of compilation is high) and the execution plan is relatively fixed. -Since v6.1.0 the execution plan cache is enabled by default. You can control prepared plan cache via the system variable `tidb_enable_prepared_plan_cache`. +Since v6.1.0, the execution plan cache is enabled by default. 
You can control the prepared plan cache via the system variable [`tidb_enable_prepared_plan_cache`](/system-variables.md#tidb_enable_prepared_plan_cache-new-in-v610). > **Note:** > @@ -123,6 +123,23 @@ MySQL [test]> select @@last_plan_from_cache; 1 row in set (0.00 sec) ``` +## Memory management of Prepared Plan Cache + +Using Prepared Plan Cache has some memory overhead. In internal tests, each cached plan consumes an average of 100 KiB of memory. Because Plan Cache is currently at the `SESSION` level, the total memory consumption is approximately `the number of sessions * the average number of cached plans in a session * 100 KiB`. + +For example, if the current TiDB instance has 50 concurrent sessions and each session has approximately 100 cached plans, the total memory consumption is approximately `50 * 100 * 100 KiB` = `500,000 KiB`, or about 488 MiB. + +You can control the maximum number of plans that can be cached in each session by configuring the system variable `tidb_prepared_plan_cache_size`. For different environments, the recommended values are as follows: + +- When the memory threshold of the TiDB server instance is <= 64 GiB, set `tidb_prepared_plan_cache_size` to `50`. + - When the memory threshold of the TiDB server instance is > 64 GiB, set `tidb_prepared_plan_cache_size` to `100`. + +When the unused memory of the TiDB server drops below a certain threshold, the memory protection mechanism of the plan cache is triggered, which evicts some cached plans. + +You can control this threshold by configuring the system variable `tidb_prepared_plan_cache_memory_guard_ratio`. The threshold is `0.1` by default, which means that when the unused memory of the TiDB server is less than 10% of the total memory (that is, 90% of the memory is used), the memory protection mechanism is triggered. + +Due to this memory limit, the plan cache might sometimes be missed. You can check for misses by viewing the [`Plan Cache Miss OPS` metric](/grafana-tidb-dashboard.md) in the Grafana dashboard. 
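The estimate and the recommended settings above can be reproduced directly in the client; the session count and plan count below are the example's illustrative values, not measurements:

```sql
-- Rough memory estimate: 50 sessions * 100 plans per session * 100 KiB per plan,
-- converted to MiB.
SELECT 50 * 100 * 100 / 1024 AS estimated_plan_cache_mib;

-- Recommended settings for an instance with at most 64 GiB of memory:
SET GLOBAL tidb_prepared_plan_cache_size = 50;
-- Keep the default guard ratio: protection triggers when unused memory falls below 10%.
SET GLOBAL tidb_prepared_plan_cache_memory_guard_ratio = 0.1;
```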
+ ## Clear execution plan cache You can clear execution plan cache by executing the `ADMIN FLUSH [SESSION | INSTANCE] PLAN_CACHE` statement. diff --git a/system-variables.md b/system-variables.md index 19e9811a36d84..dbebbaa35fbca 100644 --- a/system-variables.md +++ b/system-variables.md @@ -1637,7 +1637,7 @@ explain select * from t where age=5; - Persists to cluster: Yes - Default value: `0.1` - Range: `[0, 1]` -- This setting is used to prevent the tidb.toml option `performance.max-memory` from being exceeded. When `max-memory` * (1 - `tidb_prepared_plan_cache_memory_guard_ratio`) is exceeded, the elements in the LRU are removed. +- The threshold at which the prepared plan cache triggers its memory protection mechanism. For details, see [Memory management of Prepared Plan Cache](/sql-prepared-plan-cache.md#memory-management-of-prepared-plan-cache). - This setting was previously a `tidb.toml` option (`prepared-plan-cache.memory-guard-ratio`), but changed to a system variable starting from TiDB v6.1.0. ### tidb_prepared_plan_cache_size New in v6.1.0 @@ -1646,7 +1646,7 @@ explain select * from t where age=5; - Persists to cluster: Yes - Default value: `100` - Range: `[1, 100000]` -- The maximum number of statements that can be cached in the prepared plan cache. +- The maximum number of plans that can be cached in a session. For details, see [Memory management of Prepared Plan Cache](/sql-prepared-plan-cache.md#memory-management-of-prepared-plan-cache). - This setting was previously a `tidb.toml` option (`prepared-plan-cache.capacity`), but changed to a system variable starting from TiDB v6.1.0. ### tidb_projection_concurrency
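A usage sketch of the flush statement documented above, with both scopes spelled out (flushing at the `INSTANCE` scope requires sufficient privileges):

```sql
-- Drop all cached plans for the current session only.
ADMIN FLUSH SESSION PLAN_CACHE;

-- Drop all cached plans on this TiDB instance.
ADMIN FLUSH INSTANCE PLAN_CACHE;
```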