diff --git a/modules/n1ql/examples/n1ql-language-reference/alter-idx-move.n1ql b/modules/n1ql/examples/n1ql-language-reference/alter-idx-move.n1ql index 327e2cfda..ff5856448 100644 --- a/modules/n1ql/examples/n1ql-language-reference/alter-idx-move.n1ql +++ b/modules/n1ql/examples/n1ql-language-reference/alter-idx-move.n1ql @@ -1,2 +1,2 @@ ALTER INDEX def_inventory_airport_faa ON airport -WITH {"action": "move", "nodes": ["192.168.10.11:8091"]}; \ No newline at end of file +WITH {"action": "move", "nodes": ["svc-dqi-node-002:18091"]}; \ No newline at end of file diff --git a/modules/n1ql/pages/n1ql-language-reference/alterindex.adoc b/modules/n1ql/pages/n1ql-language-reference/alterindex.adoc index 0ded42173..ddbcb9932 100644 --- a/modules/n1ql/pages/n1ql-language-reference/alterindex.adoc +++ b/modules/n1ql/pages/n1ql-language-reference/alterindex.adoc @@ -4,8 +4,8 @@ :page-toclevels: 2 :imagesdir: ../../assets/images -:rebalancing-the-index-service: xref:server:learn:clusters-and-availability/rebalance.adoc#rebalancing-the-index-service -:console-indexes: xref:server:manage:manage-ui/manage-ui.adoc#console-indexes +:rebalancing-the-index-service: xref:clusters:scale-database.adoc#rebalance +:console-indexes: xref:clusters:index-service/manage-indexes.adoc :query-context: xref:n1ql:n1ql-intro/queriesandresults.adoc#query-context :identifiers: xref:n1ql-language-reference/identifiers.adoc :logical-hierarchy: xref:n1ql:n1ql-intro/queriesandresults.adoc#logical-hierarchy @@ -382,18 +382,19 @@ When dropping a replica, the index topology does not change. The indexing service remembers the number of partitions and replicas specified for this index. Given sufficient capacity, the dropped replica is rebuilt after the next rebalance -- although it may be placed on a different index node, depending on the resource usage statistics of the available nodes. -To find the ID of an index replica and see which node it's placed on, you can use the {console-indexes}[Indexes screen in the Couchbase Web Console] or query the {querying-indexes}[system:indexes] catalog. +To find the ID of an index replica and see which node it's placed on, you can use the {console-indexes}[Indexes page in the Couchbase Capella UI] or query the {querying-indexes}[system:indexes] catalog. When dropping a replica, it's possible to leave a server group with no replica. For a partitioned index, run a rebalance to move a replica into the vacant server group. -ifdef::flag-devex-rest-api[] === Index Redistribution Using this statement to move 1 index at a time may be cumbersome if there are a lot of indexes to be moved. -The index redistribution setting enables you to specify how Couchbase Capella redistributes indexes automatically on rebalance. -For more information, see {rebalancing-the-index-service}[Rebalance]. +ifdef::flag-devex-rest-api[] +The index redistribution setting enables you to specify how endif::flag-devex-rest-api[] +Couchbase Capella redistributes indexes automatically on rebalance. +For more information, see {rebalancing-the-index-service}[Rebalance]. //end::usage[] //tag::return-value[] @@ -451,12 +452,10 @@ include::ROOT:partial$query-context.adoc[tag=section] .Move the `def_inventory_airport_faa` index from one node to another ==== -Create a cluster of 3 nodes and then go to menu:Settings[Sample buckets] to install the `travel-sample` bucket. +Create a cluster of 3 nodes and install the `travel-sample` bucket. The indexes are then installed in a round-robin fashion and distributed over the 3 nodes. 
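+
+Before moving the index, you can confirm that it exists and is online by querying the {querying-indexes}[system:indexes] catalog. The following is a minimal sketch that selects only the widely available `name` and `state` fields; the catalog also exposes further fields that identify replicas and their placement:
+
+[source,sqlpp]
+----
+SELECT idx.name, idx.state
+FROM system:indexes AS idx
+WHERE idx.name = "def_inventory_airport_faa";
+----
+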
-image::alter-index_servers_step1.png["The Indexes tab showing def_inventory_airport_faa on 192.168.10.10"] - -Then move the `def_inventory_airport_faa` index from its original node (192.168.10.*10* in this example) to a new node (192.168.10.*11* in this example). +Then move the `def_inventory_airport_faa` index from its original node (`svc-dqi-node-001` in this example) to a new node (`svc-dqi-node-002` in this example). [source,sqlpp] ---- @@ -469,48 +468,55 @@ You should see: ---- include::example$n1ql-language-reference/alter-idx-move.jsonc[] ---- - -image::alter-index_servers_step2.png["The Indexes tab showing def_inventory_airport_faa on 192.168.10.11"] ==== .Create and move an index replica from one node to another ==== -Create an index on node 192.168.10.10 with a replica on node 192.168.10.11, then move its replica from node 192.168.10.*11* to 192.168.10.*12*. +Create an index on node `svc-dqi-node-001` with a replica on node `svc-dqi-node-002`, then move its replica from node `svc-dqi-node-002` to `svc-dqi-node-003`. [source,sqlpp] ---- CREATE INDEX country_idx ON airport(country, city) USING GSI - WITH {"nodes": ["192.168.10.10:8091", "192.168.10.11:8091"]}; + WITH {"nodes": ["svc-dqi-node-001:18091", + "svc-dqi-node-002:18091"]}; ALTER INDEX country_idx ON airport -WITH {"action": "move", "nodes": ["192.168.10.10:8091", "192.168.10.12:8091"]}; +WITH {"action": "move", + "nodes": ["svc-dqi-node-001:18091", + "svc-dqi-node-003:18091"]}; ---- ==== .Moving multiple replicas ==== -Create an index on node 192.168.10.10 with replicas on nodes 192.168.10.*11* and 192.168.10.*12*, then move the replicas to nodes 192.168.10.*13* and 192.168.10.*14*. +Create an index on node `svc-dqi-node-001` with replicas on nodes `svc-dqi-node-002` and `svc-dqi-node-003`, then move the replicas to nodes `svc-dqi-node-004` and `svc-dqi-node-005`. [source,sqlpp] ---- CREATE INDEX country_idx ON airport(country, city) -WITH {"nodes": ["192.168.10.10:8091", "192.168.10.11:8091", "192.168.10.12:8091"]}; +WITH {"nodes": ["svc-dqi-node-001:18091", + "svc-dqi-node-002:18091", + "svc-dqi-node-003:18091"]}; ALTER INDEX country_idx ON airport WITH {"action": "move", - "nodes": ["192.168.10.10:8091", "192.168.10.13:8091", "192.168.10.14:8091"]}; + "nodes": ["svc-dqi-node-001:18091", + "svc-dqi-node-004:18091", + "svc-dqi-node-005:18091"]}; ---- ==== .Increasing the number of replicas ==== -Create an index on node 192.168.10.10 with replicas on nodes 192.168.10.*11* and 192.168.10.*12*, then increase the number of replicas to 4 and specify that new replicas may be placed on any available index nodes in the cluster. +Create an index on node `svc-dqi-node-001` with replicas on nodes `svc-dqi-node-002` and `svc-dqi-node-003`, then increase the number of replicas to 4 and specify that new replicas may be placed on any available index nodes in the cluster. 
[source,sqlpp] ---- CREATE INDEX country_idx ON airport(country, city) -WITH {"nodes": ["192.168.10.10:8091", "192.168.10.11:8091", "192.168.10.12:8091"]}; +WITH {"nodes": ["svc-dqi-node-001:18091", + "svc-dqi-node-002:18091", + "svc-dqi-node-003:18091"]}; ALTER INDEX country_idx ON airport WITH {"action": "replica_count", "num_replica": 4}; @@ -519,32 +525,36 @@ WITH {"action": "replica_count", "num_replica": 4}; .Increasing the number of replicas and restricting the nodes ==== -Create an index on node 192.168.10.10 with replicas on nodes 192.168.10.*11* and 192.168.10.*12*, then increase the number of replicas to 4, and specify that replicas may now also be placed on nodes 192.168.10.*13* and 192.168.10.*14*. +Create an index on node `svc-dqi-node-001` with replicas on nodes `svc-dqi-node-002` and `svc-dqi-node-003`, then increase the number of replicas to 4, and specify that replicas may now also be placed on nodes `svc-dqi-node-004` and `svc-dqi-node-005`. [source,sqlpp] ---- CREATE INDEX country_idx ON airport(country, city) -WITH {"nodes": ["192.168.10.10:8091", "192.168.10.11:8091", "192.168.10.12:8091"]}; +WITH {"nodes": ["svc-dqi-node-001:18091", + "svc-dqi-node-002:18091", + "svc-dqi-node-003:18091"]}; ALTER INDEX country_idx ON airport WITH {"action": "replica_count", "num_replica": 4, - "nodes": ["192.168.10.10:8091", - "192.168.10.11:8091", - "192.168.10.12:8091", - "192.168.10.13:8091", - "192.168.10.14:8091"]}; + "nodes": ["svc-dqi-node-001:18091", + "svc-dqi-node-002:18091", + "svc-dqi-node-003:18091", + "svc-dqi-node-004:18091", + "svc-dqi-node-005:18091"]}; ---- ==== .Decreasing the number of replicas ==== -Create an index on node 192.168.10.10 with replicas on nodes 192.168.10.*11* and 192.168.10.*12*, then decrease the number of replicas to 1. +Create an index on node `svc-dqi-node-001` with replicas on nodes `svc-dqi-node-002` and `svc-dqi-node-003`, then decrease the number of replicas to 1. [source,sqlpp] ---- CREATE INDEX country_idx ON airport(country, city) -WITH {"nodes": ["192.168.10.10:8091", "192.168.10.11:8091", "192.168.10.12:8091"]}; +WITH {"nodes": ["svc-dqi-node-001:18091", + "svc-dqi-node-002:18091", + "svc-dqi-node-003:18091"]}; ALTER INDEX country_idx ON airport WITH {"action": "replica_count", "num_replica": 1}; @@ -553,7 +563,7 @@ WITH {"action": "replica_count", "num_replica": 1}; .Dropping a specific replica ==== -Create an index with 2 replicas, and specify that nodes 192.168.10.10, 192.168.10.11, 192.168.10.12, and 192.168.10.13 should be available for index and replica placement. +Create an index with 2 replicas, and specify that nodes `svc-dqi-node-001`, `svc-dqi-node-002`, `svc-dqi-node-003`, and `svc-dqi-node-004` should be available for index and replica placement. Then delete replica 2. [source,sqlpp] @@ -561,10 +571,10 @@ Then delete replica 2. 
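+-- With "num_replica": 2, the index and its 2 replicas are spread across
+-- 3 of the 4 nodes listed in "nodes" below.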
CREATE INDEX country_idx ON airport(country, city) USING GSI WITH {"num_replica": 2, - "nodes": ["192.168.10.10:8091", - "192.168.10.11:8091", - "192.168.10.12:8091", - "192.168.10.13:8091"]}; + "nodes": ["svc-dqi-node-001:18091", + "svc-dqi-node-002:18091", + "svc-dqi-node-003:18091", + "svc-dqi-node-004:18091"]}; ALTER INDEX country_idx ON airport WITH {"action": "drop_replica", "replicaId": 2}; diff --git a/modules/n1ql/pages/n1ql-language-reference/altervectorindex.adoc b/modules/n1ql/pages/n1ql-language-reference/altervectorindex.adoc index 2aecc2c00..3e88a6132 100644 --- a/modules/n1ql/pages/n1ql-language-reference/altervectorindex.adoc +++ b/modules/n1ql/pages/n1ql-language-reference/altervectorindex.adoc @@ -1,6 +1,5 @@ = ALTER VECTOR INDEX :description: The ALTER VECTOR INDEX statement moves the placement of an existing index or replica among different GSI nodes. -:page-edition: Enterprise Edition :page-status: Couchbase Server 8.0 :page-toclevels: 2 :imagesdir: ../../assets/images @@ -56,7 +55,7 @@ include::alterindex.adoc[tags=return-value] To try the examples in this section, you must do the following: . Create a cluster of 3 nodes. -The examples in this section assume that the 3 nodes have the IP addresses 172.19.0.2, 172.19.0.3, and 172.19.0.4. +The examples in this section assume that the 3 nodes have the names `svc-dqi-node-001`, `svc-dqi-node-002`, and `svc-dqi-node-003`. The nodes in your cluster may have different names or IP addresses. . Install the vector sample data as described in {prerequisites}[Prerequisites]. @@ -66,7 +65,7 @@ For more information, see xref:n1ql:n1ql-intro/queriesandresults.adoc#query-cont .Create and move an index from one node to another ==== -Create a Hyperscale Vector index on node 172.19.0.2. +Create a Hyperscale Vector index on node `svc-dqi-node-001`. [source,sqlpp] ---- @@ -75,15 +74,15 @@ CREATE VECTOR INDEX hyperscale_idx_move WITH {"dimension": 1536, "similarity": "L2", "description": "IVF8,SQ4", - "nodes": "172.19.0.2:8091"} + "nodes": "svc-dqi-node-001:18091"} ---- -Then move the index from its original node (172.19.0.*2* in this example) to a new node (172.19.0.*3* in this example). +Then move the index from its original node (`svc-dqi-node-001` in this example) to a new node (`svc-dqi-node-002` in this example). [source,sqlpp] ---- ALTER VECTOR INDEX hyperscale_idx_move ON rgb -WITH {"action": "move", "nodes": ["172.19.0.3:8091"]}; +WITH {"action": "move", "nodes": ["svc-dqi-node-002:18091"]}; ---- To check the node where the index is located, see xref:manage:manage-indexes/manage-indexes.adoc[]. @@ -91,7 +90,7 @@ To check the node where the index is located, see xref:manage:manage-indexes/man .Create and move an index replica from one node to another ==== -Create a Hyperscale Vector index on node 172.19.0.2 with a replica on node 172.19.0.3, then move its replica from node 172.19.0.*3* to 172.19.0.*4*. +Create a Hyperscale Vector index on node `svc-dqi-node-001` with a replica on node `svc-dqi-node-002`, then move its replica from node `svc-dqi-node-002` to `svc-dqi-node-003`. 
[source,sqlpp]
----
@@ -100,16 +99,19 @@ CREATE VECTOR INDEX hyperscale_rep_move
    WITH {"dimension": 1536,
          "similarity": "L2",
          "description": "IVF8,SQ4",
-          "nodes": ["172.19.0.2:8091", "172.19.0.3:8091"]};
+          "nodes": ["svc-dqi-node-001:18091",
+                    "svc-dqi-node-002:18091"]};
 
ALTER VECTOR INDEX hyperscale_rep_move ON rgb
-WITH {"action": "move", "nodes": ["172.19.0.2:8091", "172.19.0.4:8091"]};
+WITH {"action": "move",
+      "nodes": ["svc-dqi-node-001:18091",
+                "svc-dqi-node-003:18091"]};
----
====

.Increase the number of replicas
====
-Create a Hyperscale Vector index on node 172.19.0.2 with a replica on nodes 172.19.0.*3*, then increase the number of replicas to 2 and specify that new replicas may be placed on any available index nodes in the cluster.
+Create a Hyperscale Vector index on node `svc-dqi-node-001` with a replica on node `svc-dqi-node-002`, then increase the number of replicas to 2 and specify that new replicas may be placed on any available index nodes in the cluster.

[source,sqlpp]
----
@@ -118,7 +120,8 @@ CREATE VECTOR INDEX hyperscale_rep_multi
    WITH {"dimension": 1536,
          "similarity": "L2",
          "description": "IVF8,SQ4",
-          "nodes": ["172.19.0.2:8091", "172.19.0.3:8091"]};
+          "nodes": ["svc-dqi-node-001:18091",
+                    "svc-dqi-node-002:18091"]};
 
ALTER VECTOR INDEX hyperscale_rep_multi ON rgb
WITH {"action": "replica_count", "num_replica": 2};
@@ -127,7 +130,7 @@ WITH {"action": "replica_count", "num_replica": 2};

.Increase the number of replicas and specify the nodes
====
-Create a Hyperscale Vector index on node 172.19.0.2 with a replica on node 172.19.0.3, then increase the number of replicas to 2, and specify that replicas may be placed on nodes 172.19.0.*3* and 172.19.0.*4*.
+Create a Hyperscale Vector index on node `svc-dqi-node-001` with a replica on node `svc-dqi-node-002`, then increase the number of replicas to 2, and specify that replicas may be placed on nodes `svc-dqi-node-002` and `svc-dqi-node-003`.

[source,sqlpp]
----
@@ -136,20 +139,21 @@ CREATE VECTOR INDEX hyperscale_rep_increase
    WITH {"dimension": 1536,
          "similarity": "L2",
          "description": "IVF8,SQ4",
-          "nodes": ["172.19.0.2:8091", "172.19.0.3:8091"]};
+          "nodes": ["svc-dqi-node-001:18091",
+                    "svc-dqi-node-002:18091"]};
 
ALTER VECTOR INDEX hyperscale_rep_increase ON rgb
WITH {"action": "replica_count",
      "num_replica": 2,
-      "nodes": ["172.19.0.2:8091",
-                "172.19.0.3:8091",
-                "172.19.0.4:8091"]};
+      "nodes": ["svc-dqi-node-001:18091",
+                "svc-dqi-node-002:18091",
+                "svc-dqi-node-003:18091"]};
----
====

.Decrease the number of replicas
====
-Create a Hyperscale Vector index on node 172.19.0.2 with replicas on nodes 172.19.0.*3* and 172.19.0.*4*, then decrease the number of replicas to 1.
+Create a Hyperscale Vector index on node `svc-dqi-node-001` with replicas on nodes `svc-dqi-node-002` and `svc-dqi-node-003`, then decrease the number of replicas to 1.
[source,sqlpp] ---- @@ -158,7 +162,9 @@ CREATE VECTOR INDEX hyperscale_rep_decrease WITH {"dimension": 1536, "similarity": "L2", "description": "IVF8,SQ4", - "nodes": ["172.19.0.2:8091", "172.19.0.3:8091", "172.19.0.4:8091"]}; + "nodes": ["svc-dqi-node-001:18091", + "svc-dqi-node-002:18091", + "svc-dqi-node-003:18091"]}; ALTER VECTOR INDEX hyperscale_rep_decrease ON rgb WITH {"action": "replica_count", "num_replica": 1}; @@ -167,7 +173,7 @@ WITH {"action": "replica_count", "num_replica": 1}; .Drop a specific replica ==== -Create a Hyperscale Vector index with 2 replicas, and specify that nodes 172.19.0.2, 172.19.0.3, and 172.19.0.4 should be available for index and replica placement. +Create a Hyperscale Vector index with 2 replicas, and specify that nodes `svc-dqi-node-001`, `svc-dqi-node-002`, and `svc-dqi-node-003` should be available for index and replica placement. Then delete replica 2. [source,sqlpp] @@ -178,9 +184,9 @@ CREATE VECTOR INDEX hyperscale_rep_drop "similarity": "L2", "description": "IVF8,SQ4", "num_replica": 2, - "nodes": ["172.19.0.2:8091", - "172.19.0.3:8091", - "172.19.0.4:8091"]}; + "nodes": ["svc-dqi-node-001:18091", + "svc-dqi-node-002:18091", + "svc-dqi-node-003:18091"]}; ALTER VECTOR INDEX hyperscale_rep_drop ON rgb WITH {"action": "drop_replica", "replicaId": 2}; diff --git a/modules/n1ql/pages/n1ql-language-reference/auto-update-statistics.adoc b/modules/n1ql/pages/n1ql-language-reference/auto-update-statistics.adoc index ee718b729..fb40c0386 100644 --- a/modules/n1ql/pages/n1ql-language-reference/auto-update-statistics.adoc +++ b/modules/n1ql/pages/n1ql-language-reference/auto-update-statistics.adoc @@ -1,6 +1,5 @@ = Auto Update Statistics :page-status: Couchbase Server 8.0 -:page-edition: Enterprise Edition :page-toclevels: 2 :description: Auto Update Statistics (AUS) automatically refreshes optimizer statistics, ensuring accurate and cost-effective query plans. @@ -9,43 +8,43 @@ == Overview -Auto Update Statistics (AUS) is a feature that keeps the optimizer statistics up to date by automatically identifying and refreshing outdated statistics. +Auto Update Statistics (AUS) is a feature that keeps the optimizer statistics up to date by automatically identifying and refreshing outdated statistics. Optimizer statistics are crucial as they help the xref:n1ql:n1ql-language-reference/cost-based-optimizer.adoc[Cost Based Optimizer] generate optimal query plans. -These statistics are initially created when you run the xref:n1ql:n1ql-language-reference/updatestatistics.adoc[] statement or build an index (available from 7.6.0 onwards). +These statistics are initially created when you run the xref:n1ql:n1ql-language-reference/updatestatistics.adoc[] statement or build an index (available from 7.6.0 onward). However, as data changes over time, the statistics can become stale, leading to sub-optimal query plans and reduced query performance. -To handle this, AUS executes a scheduled task on each query node in the cluster. +To handle this, AUS executes a scheduled task on each query node in the cluster. This task evaluates statistics based on expiration policies to identify outdated ones and then refreshes them by running the xref:n1ql:n1ql-language-reference/updatestatistics.adoc[] statement. -AUS can also optionally generate statistics for indexed expressions that do not already have them. +AUS can also optionally generate statistics for indexed expressions that do not already have them. 
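+
+For example, AUS saves you from having to re-run statements like the following by hand as the data changes over time; a minimal sketch, using an illustrative keyspace and index expressions:
+
+[source,sqlpp]
+----
+UPDATE STATISTICS FOR airport(city, country);
+----
+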
NOTE: AUS maintains statistics only for expressions on index keys, and only for those indexed using the Plasma storage engine. -It does not support Memory-Optimized indexes. -For more information about these index storage types, see xref:indexes:storage-modes.adoc[]. +It does not support Memory-Optimized indexes. +For more information about these index storage types, see xref:indexes:storage-modes.adoc[]. [#availability] == Availability -AUS is available only in the Couchbase Enterprise Edition and on query nodes running version 8.0 or later. +AUS is available only in clusters running Couchbase Server version 8.0 or later. -* You can enable AUS in a cluster that has been fully migrated to 8.0, or in a cluster that includes both 7.6.x and 8.0 query nodes. -In such mixed clusters, the 7.6.x query nodes will not perform any AUS tasks. +* You can enable AUS in a cluster that has been fully migrated to 8.0, or in a cluster that includes both 7.6.x and 8.0 query nodes. +In such mixed clusters, the 7.6.x query nodes will not perform any AUS tasks. -* For clusters migrating from pre-7.6.x versions (to a configuration described above), the AUS task can only be enabled once the automatic migration of optimizer statistics to the `_query` collection in the `_system` scope of the buckets has been completed. +* For clusters migrating from pre-7.6.x versions (to a configuration described above), the AUS task can only be enabled once the automatic migration of optimizer statistics to the `_query` collection in the `_system` scope of the buckets has been completed. == How AUS Works -AUS is an opt-in feature that you must explicitly enable and schedule. -Once it is enabled and a schedule is set, all query nodes in the cluster participate in AUS, according to the same schedule. +AUS is an opt-in feature that you must explicitly enable and schedule. +Once it's enabled and a schedule is set, all query nodes in the cluster participate in AUS, according to the same schedule. === AUS Task Execution Each node receives its own AUS task, which performs the following actions during its scheduled window: -* The query node first selects specific collections for AUS processing, ensuring that no other query node updates the same collection during this period. +* The query node first selects specific collections for AUS processing, ensuring that no other query node updates the same collection during this period. -* Each selected collection then goes through two phases: <> and <>. -These phases process statistics gathered from expressions based on fields within that collection. +* Each selected collection then goes through two phases: <> and <>. +These phases process statistics gathered from expressions based on fields within that collection. * After AUS completes processing all statistics in all buckets, the query node schedules the next AUS run. @@ -75,9 +74,9 @@ database "Optimizer\nStatistics" as OptimizerStatistics start ..r..> AusEnabled : "Start \nAUS Process" AusEnabled -> ScheduledTask : "Yes" ScheduledTask -> CollectionSelection -CollectionSelection -> EvaluationPhase +CollectionSelection -> EvaluationPhase EvaluationPhase -> UpdatePhase -EvaluationPhase --> OptimizerStatistics +EvaluationPhase --> OptimizerStatistics UpdatePhase --> OptimizerStatistics UpdatePhase ..r..> end @@ -86,29 +85,29 @@ UpdatePhase ..r..> end === Evaluation Phase -In this phase, AUS evaluates whether existing statistics are stale based on the <>. +In this phase, AUS evaluates whether existing statistics are stale based on the <>. 
For each index, AUS assesses how much data has changed since the last update of the optimizer statistics for the index's key expressions.
If the percentage of change exceeds the defined threshold in the <>, the statistics are marked as stale.

-Additionally, if configured to do so, this phase also identifies any indexed expressions that currently lack statistics and flags them for creation. 
+Additionally, if configured to do so, this phase also identifies any indexed expressions that currently lack statistics and flags them for creation.
You can control this setting using the `create_missing_statistics` attribute in the <> catalog.

=== Update Phase

-After the evaluation, AUS executes xref:n1ql:n1ql-language-reference/updatestatistics.adoc[] statements to refresh the statistics identified as stale. 
+After the evaluation, AUS executes xref:n1ql:n1ql-language-reference/updatestatistics.adoc[] statements to refresh the statistics identified as stale.
When updating the existing statistics, AUS ensures that the refreshed statistics maintain the original xref:n1ql:n1ql-language-reference/cost-based-optimizer.adoc#resolution[resolution] at which they were collected.

Also, if the `create_missing_statistics` option is set to `true`, AUS creates new optimizer statistics for indexed expressions that were flagged as missing during the evaluation phase.
-The new statistics are created with the default xref:n1ql:n1ql-language-reference/cost-based-optimizer.adoc#resolution[resolution]. 
+The new statistics are created with the default xref:n1ql:n1ql-language-reference/cost-based-optimizer.adoc#resolution[resolution].

IMPORTANT: When AUS is first enabled, the initial task run might update all existing optimizer statistics, regardless of the expiration policy evaluation.
-This is because the index change information might not have been recorded prior to this first run. 
+This is because the index change information might not have been recorded prior to this first run.

[#expiration_policy]
=== Expiration Policy

AUS uses expiration policies to determine when statistics are outdated and require an update.
-The policy is based on the percentage of changes to data within an index. 
+The policy is based on the percentage of changes to data within an index.
You can configure this value using the `change_percentage` attribute in the <> or <> catalogs.
It defines how much data in an index must change before the statistics are considered outdated.

@@ -120,7 +119,7 @@ The subsequent AUS operation then updates these statistics.

To start using AUS for your cluster, you need to enable it and configure a schedule.
You can configure AUS to run during off-peak hours or at specific times that align with your workload patterns.

-AUS maintains its global configurations in the <> catalog. 
+AUS maintains its global configurations in the <> catalog.
You can enable AUS and set its schedule by modifying the relevant configurations within this catalog.
If you need more granular control, use the <> catalog to customize certain AUS configurations at the bucket, scope, and collection levels.

@@ -132,17 +131,17 @@ For more information, see <>.

=== system:aus

The `system:aus` catalog contains a single document that holds all the global configurations of AUS.
-You can update this document to modify the settings. 
+You can update this document to modify the settings.

[NOTE]
====
* Only SELECT and UPDATE DMLs are allowed on this keyspace.
* To execute SELECT on `system:aus`, you need the `query_system_catalog` role.
-* To execute UPDATE on `system:aus`, you need the `query_manage_system_catalog` role. +* To execute UPDATE on `system:aus`, you need the `query_manage_system_catalog` role. ==== -Each attribute in the document represents a particular global configuration. -The following are the attribute names and the configurations they represent: +Each attribute in the document represents a particular global configuration. +The following are the attribute names and the configurations they represent: [cols="1a,4a,1a"] |=== @@ -171,7 +170,7 @@ This attribute is required only if `enable` is set to `true`. | **change_percentage** + __required__ -| The percentage of change to items within an index that must be exceeded for the statistics to be refreshed. +| The percentage of change to items within an index that must be exceeded for the statistics to be refreshed. This is the threshold for determining whether the statistics are stale or not. The value must be an integer between `0` and `100`. @@ -180,9 +179,9 @@ For example, a value of `30` means that if 30% or more of the items in an index *Default:* `10` -| Integer +| Integer -| **all_buckets** + +| **all_buckets** + __required__ | Indicates whether AUS should be performed on all buckets or only those buckets whose metadata information is loaded on the query node. @@ -194,7 +193,7 @@ __required__ | **create_missing_statistics** + __required__ -| Indicates whether AUS should create statistics that are missing. +| Indicates whether AUS should create statistics that are missing. If set to `true`, AUS creates statistics for indexed expressions that do not have any existing statistics. The statistics will be created using the default value for the xref:n1ql:n1ql-language-reference/cost-based-optimizer.adoc#resolution[resolution] property. @@ -207,7 +206,7 @@ The statistics will be created using the default value for the xref:n1ql:n1ql-la [[aus_schedule]] -==== Schedule +==== Schedule [cols="1a,4a,1a"] |=== | Name | Description | Schema @@ -225,7 +224,7 @@ The `start_time` must be at least 30 minutes earlier than the `end_time`. | **end_time** + __required__ -| The end time of the AUS schedule in "HH:MM" format. +| The end time of the AUS schedule in "HH:MM" format. The `end_time` must be at least 30 minutes later than the `start_time`. @@ -235,9 +234,9 @@ The `end_time` must be at least 30 minutes later than the `start_time`. | **days** + __required__ -| An array of strings specifying the days on which the AUS schedule runs. +| An array of strings specifying the days on which the AUS schedule runs. -Valid values include: `Monday`, `Tuesday`, `Wednesday`, `Thursday`, `Friday`, `Saturday`, `Sunday`. +Valid values include: `Monday`, `Tuesday`, `Wednesday`, `Thursday`, `Friday`, `Saturday`, `Sunday`. *Example:* `["Saturday", "Sunday"]` @@ -246,7 +245,7 @@ Valid values include: `Monday`, `Tuesday`, `Wednesday`, `Thursday`, `Friday`, `S | **timezone** + __optional__ -| The timezone that applies to the schedule's start and end times. +| The timezone that applies to the schedule's start and end times. The value must be a valid IANA timezone string. *Default:* `"UTC"` @@ -257,7 +256,7 @@ The value must be a valid IANA timezone string. |=== -When changing the global configurations, it is important to consider the following: +When changing the global configurations, it is important to consider the following: * *Enabling AUS*: If AUS was previously disabled and is now enabled, the next AUS task will be scheduled immediately. 
* *Rescheduling AUS*: The currently scheduled AUS task will be cancelled, and a new AUS task will be scheduled according to the updated schedule.
@@ -272,10 +271,10 @@ A sample UPDATE statement to enable AUS and set a schedule with some customizati

[source,sqlpp]
----
UPDATE system:aus SET enable = true, change_percentage = 20,
-schedule = { "start_time": "01:30", 
-             "end_time": "04:30", 
-             "timezone": "Asia/Calcutta", 
-             "days": ["Monday", "Friday"] 
+schedule = { "start_time": "01:30",
+             "end_time": "04:30",
+             "timezone": "Asia/Calcutta",
+             "days": ["Monday", "Friday"]
           };
----
====

[#system_aus_settings]
=== system:aus_settings

-The `system:aus_settings` catalog stores granular configuration settings for AUS. 
+The `system:aus_settings` catalog stores granular configuration settings for AUS.
These settings can be applied at the bucket, scope, and collection levels.

-By default, this catalog has no documents, and the AUS settings for all keyspaces inherit the configurations defined at the global level. 
+By default, this catalog has no documents, and the AUS settings for all keyspaces inherit the configurations defined at the global level.
In other words, unless you explicitly configure AUS for a specific keyspace, it will use the global AUS settings defined in <>.

To customize AUS for a specific keyspace, you must insert a settings document into the `system:aus_settings` catalog.
-The document ID of a document in this keyspace must be the full path of the bucket, scope, and collection. 
+The document ID of a document in this keyspace must be the full path of the bucket, scope, and collection.

-Each attribute in the document represents a particular granular configuration. 
-The following are the attribute names and the configurations they represent: 
+Each attribute in the document represents a particular granular configuration.
+The following are the attribute names and the configurations they represent:

[cols="1a,4a,1a"]
|===
@@ -302,15 +301,15 @@ The following are the attribute names and the configurations they represent:
| **enable** +
__optional__
| Indicates whether AUS is enabled for the bucket, scope, or collection. +
-Set it to `true` to enable AUS for the keyspace. 
+Set it to `true` to enable AUS for the keyspace.

AUS settings are hierarchical and follow the order: cluster > bucket > scope > collection. +
-If AUS is disabled at higher level, it cannot be enabled at a more granular level. 
+If AUS is disabled at a higher level, it cannot be enabled at a more granular level.
However, if AUS is enabled at a higher level, it can be disabled at a more granular level.

For example,
--
-* If AUS is disabled for a bucket, it is automatically disabled for all scopes and collections within it. 
+* If AUS is disabled for a bucket, it is automatically disabled for all scopes and collections within it.
The setting cannot be overridden at the scope or collection level.
* If AUS is enabled for a bucket, it can be overridden at the scope and collection level.
--
@@ -320,11 +319,11 @@ The setting cannot be overridden at the scope or collection level.

| **change_percentage** +
__optional__
-| The percentage of change to items within an index that must be exceeded for the statistics to be refreshed. 
+| The percentage of change to items within an index that must be exceeded for the statistics to be refreshed.

The value must be an integer between `0` and `100`.
-If set at a bucket level, this value applies to all scopes and collections within the bucket, unless overridden at a lower level. +If set at a bucket level, this value applies to all scopes and collections within the bucket, unless overridden at a lower level. If set at a scope level, this value applies to all collections within the scope, unless overridden at a lower level. @@ -335,13 +334,13 @@ If set at a scope level, this value applies to all collections within the scope, | **update_statistics_timeout** + __optional__ -| The timeout period for the xref:n1ql:n1ql-language-reference/updatestatistics.adoc[] command. -It is a number representing a duration in seconds. +| The timeout period for the xref:n1ql:n1ql-language-reference/updatestatistics.adoc[] command. +It's a number representing a duration in seconds. -If the command does not complete within this duration, it times out. +If the command does not complete within this duration, it times out. If omitted, a default timeout value is calculated based on the number of samples used. -If set for a keyspace, this timeout applies to every xref:n1ql:n1ql-language-reference/updatestatistics.adoc[] statement that AUS executes for that keyspace. +If set for a keyspace, this timeout applies to every xref:n1ql:n1ql-language-reference/updatestatistics.adoc[] statement that AUS executes for that keyspace. If set at a bucket level, this timeout applies to all scopes and collections within the bucket, unless a different value is set at a lower level. @@ -353,14 +352,14 @@ If set at a scope level, this timeout applies to all collections within the scop [NOTE] ==== -* All SQL++ DMLs are allowed on this keyspace. +* All SQL++ DMLs are allowed on this keyspace. * To execute SELECT on `system:aus_settings`, you need the `query_system_catalog` role. * To execute UPDATE, DELETE, INSERT, and UPSERT on `system:aus_settings`, you need the `query_manage_system_catalog` role. ==== ==== Example -==== -A sample query to add a scope level setting that applies to all collections within the scope. +==== +A sample query to add a scope level setting that applies to all collections within the scope. .Query [source,sqlpp] @@ -387,7 +386,7 @@ To view all recent AUS tasks, use the following query: SELECT * FROM system:tasks_cache WHERE class = "auto_update_statistics"; ---- -This query returns all AUS entries regardless of their state (scheduled, running, completed, etc.). +This query returns all AUS entries regardless of their state (scheduled, running, completed, etc.). To get the details of completed tasks, see <>. === Find Scheduled AUS Tasks @@ -400,13 +399,13 @@ SELECT * FROM system:tasks_cache WHERE class = "auto_update_statistics" AND stat ---- === View AUS Tasks on a Particular Node - -To view recent AUS tasks on a particular node, filter by the `node` attribute. + +To view recent AUS tasks on a particular node, filter by the `node` attribute. [source,sqlpp] ---- -SELECT * FROM system:tasks_cache WHERE class = "auto_update_statistics" - AND state = "scheduled" +SELECT * FROM system:tasks_cache WHERE class = "auto_update_statistics" + AND state = "scheduled" AND node = "127.0.0.1:8091"; // Replace with the actual node address ---- @@ -469,14 +468,14 @@ You can cancel AUS tasks that are currently running or scheduled to run. * <> * <> -CAUTION: When cancelling AUS tasks, it is important to include appropriate WHERE clauses to specify exactly which tasks you want to cancel. 
+CAUTION: When cancelling AUS tasks, it's important to include appropriate WHERE clauses to specify exactly which tasks you want to cancel. Make sure your filters target only the intended tasks, otherwise they might inadvertently cancel other tasks or delete task history. [#cancel_running_aus_tasks] === Cancel Running AUS Tasks To cancel a running AUS task, delete its entry from the `system:tasks_cache` catalog. -When you delete a task that is in the `scheduled` or `running` state, AUS cancels the task and schedules the next one automatically. +When you delete a task that's in the `scheduled` or `running` state, AUS cancels the task and schedules the next one automatically. To cancel all running AUS tasks, use the following DELETE statement: @@ -489,16 +488,16 @@ To cancel a running AUS task on a specific node, include the node's address in t [source,sqlpp] ---- -DELETE FROM system:tasks_cache - WHERE class = "auto_update_statistics" - AND state = "running" +DELETE FROM system:tasks_cache + WHERE class = "auto_update_statistics" + AND state = "running" AND node = "127.0.0.1:8091"; // Replace with the actual node address ---- [#cancel_next_scheduled_aus_tasks] === Cancel Next Scheduled AUS Tasks -To cancel an upcoming scheduled AUS task, you need to temporarily modify its schedule in the `system:aus` catalog. +To cancel an upcoming scheduled AUS task, you need to temporarily modify its schedule in the `system:aus` catalog. After the scheduled time has passed, you can revert it to its original schedule. ==== Temporarily Update the Schedule @@ -506,7 +505,7 @@ After the scheduled time has passed, you can revert it to its original schedule. First, identify the specific AUS task you want to skip or cancel. Then, use an UPDATE statement to exclude the day or time from its schedule. -For example, if your AUS tasks run on Monday, Wednesday, and Friday, and you want to cancel the upcoming Monday run: +For example, if your AUS tasks run on Monday, Wednesday, and Friday, and you want to cancel the upcoming Monday run: [source,sqlpp] ---- @@ -515,10 +514,10 @@ UPDATE system:aus SET schedule.days = ["Wednesday", "Friday"]; ==== Revert the Schedule -After the day and time for the cancelled task have passed, you can revert the schedule to its original settings. +After the day and time for the cancelled task have passed, you can revert the schedule to its original settings. This allows your AUS tasks to resume their regular schedule for all subsequent runs. -For example, to restore the Monday, Wednesday, and Friday schedule after skipping the Monday run: +For example, to restore the Monday, Wednesday, and Friday schedule after skipping the Monday run: [source,sqlpp] ---- @@ -528,10 +527,10 @@ UPDATE system:aus SET schedule.days = ["Monday", "Wednesday", "Friday"]; == Manage AUS Load When an AUS task runs, it can increase the load on the query node as it evaluates and updates statistics. -Therefore, to minimize performance impact, it is important to schedule AUS to best suit the workloads of your cluster. +Therefore, to minimize performance impact, it's important to schedule AUS to best suit the workloads of your cluster. To prevent excessive load, the AUS task will not start if the query node's load is too high during the scheduled window. -In such cases, the task is skipped, and the next AUS task is scheduled. +In such cases, the task is skipped, and the next AUS task is scheduled. 
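+
+To check whether a scheduled run actually took place or was skipped, you can review the recent task entries on each node; a minimal sketch, reusing the `class`, `state`, and `node` attributes shown in the earlier examples:
+
+[source,sqlpp]
+----
+SELECT t.node, t.state
+FROM system:tasks_cache AS t
+WHERE t.class = "auto_update_statistics";
+----
+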
== Related Links

diff --git a/modules/n1ql/pages/n1ql-language-reference/createindex.adoc b/modules/n1ql/pages/n1ql-language-reference/createindex.adoc
index 890ba404a..9a073ddc1 100644
--- a/modules/n1ql/pages/n1ql-language-reference/createindex.adoc
+++ b/modules/n1ql/pages/n1ql-language-reference/createindex.adoc
@@ -8,7 +8,7 @@
 
:authorization-overview: xref:server:learn:security/authorization-overview.adoc
:index-replication: xref:indexes:index-replication.adoc#index-replication
-:console-indexes: xref:server:manage:manage-ui/manage-ui.adoc#console-indexes
+:console-indexes: xref:clusters:index-service/manage-indexes.adoc
:query-context: xref:n1ql:n1ql-intro/queriesandresults.adoc#query-context
:build-index: xref:n1ql-language-reference/build-index.adoc
:identifiers: xref:n1ql-language-reference/identifiers.adoc
@@ -578,11 +578,9 @@ For details and examples, see {operator-pushdowns}[Operator Pushdowns].
[[index-replicas]]
=== Index Replicas

-In the {console-indexes}[Indexes screen in the Couchbase Web Console], index replicas are marked with their replica ID.
+In the {console-indexes}[Indexes page in the Couchbase Capella UI], index replicas are marked with their replica ID.

-image::create-index-replica-id.png["The Indexes screen showing an index and index replica with replica ID"]
-
-If you select `view by server node` from the drop-down menu, you can see the server node where each index and index replica is placed.
+To see the nodes where an index and its replicas are placed, click the name of an index or index replica to display the index definition.

You can also query the {querying-indexes}[system:indexes] catalog to find the ID of an index replica and see which node it's placed on.

diff --git a/modules/n1ql/pages/n1ql-language-reference/createprimaryindex.adoc b/modules/n1ql/pages/n1ql-language-reference/createprimaryindex.adoc
index 80e10e769..cad85ae6f 100644
--- a/modules/n1ql/pages/n1ql-language-reference/createprimaryindex.adoc
+++ b/modules/n1ql/pages/n1ql-language-reference/createprimaryindex.adoc
@@ -276,8 +276,11 @@ It's possible that the indexer may throw scan timeout without returning any prim
For example, if the indexer cannot find a snapshot that satisfies the consistency guarantee of the query within the timeout limit, it will time out without returning any primary keys.
For secondary index scans, the query engine does not handle scan timeout, and returns an index scan timeout error to the client.

-You can handle scan timeout on a secondary index by increasing the indexer timeout setting (see
-{query-settings}[Query Settings]) or preferably by defining and using a more selective index.
+You can handle scan timeout on a secondary index by
+ifdef::flag-devex-rest-api[]
+increasing the indexer timeout setting (see {query-settings}[Query Settings]), or preferably by
+endif::flag-devex-rest-api[]
+defining and using a more selective index.

== Examples

diff --git a/modules/n1ql/pages/n1ql-language-reference/index-partitioning.adoc b/modules/n1ql/pages/n1ql-language-reference/index-partitioning.adoc
index 8d2fdb0a3..991c3b414 100644
--- a/modules/n1ql/pages/n1ql-language-reference/index-partitioning.adoc
+++ b/modules/n1ql/pages/n1ql-language-reference/index-partitioning.adoc
@@ -598,8 +598,10 @@ For example:
 
* A cluster has a mix of non-partitioned indexes and partitioned indexes.
* There is data skew in the partitions.

-In Couchbase Capella, the [def]_index redistribution_ setting enables you to specify how Couchbase Capella redistributes indexes on rebalance.
-For further details, refer to {rebalancing-the-index-service}[Rebalancing the Index Service]. +ifdef::flag-devex-rest-api[] +The index redistribution setting enables you to specify how Couchbase Capella redistributes indexes automatically on rebalance. +endif::flag-devex-rest-api[] +For more information, see {rebalancing-the-index-service}[Rebalance]. == Repairing Failed Partitions