diff --git a/src/current/_includes/v23.2/metric-names.md b/src/current/_includes/v23.2/metric-names.md index fa45371b4fa..5826fbd9319 100644 --- a/src/current/_includes/v23.2/metric-names.md +++ b/src/current/_includes/v23.2/metric-names.md @@ -269,6 +269,8 @@ Name | Description `sql.txn.begin.count` | Number of SQL transaction BEGIN statements `sql.txn.commit.count` | Number of SQL transaction COMMIT statements `sql.txn.contended.count` | Number of SQL transactions that experienced contention +`sql.txn.isolation.executed_at.read_committed` | Number of times a [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) transaction was executed. +`sql.txn.isolation.upgraded_from.read_committed` | Number of times a [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) transaction was automatically upgraded to a stronger isolation level. `sql.txn.rollback.count` | Number of SQL transaction ROLLBACK statements `sql.update.count` | Number of SQL UPDATE statements `storage.l0-level-score` | Compaction score of level 0 diff --git a/src/current/_includes/v24.1/app/retry-errors.md b/src/current/_includes/v24.1/app/retry-errors.md index aa5c336e30a..67945e27681 100644 --- a/src/current/_includes/v24.1/app/retry-errors.md +++ b/src/current/_includes/v24.1/app/retry-errors.md @@ -1,3 +1,3 @@ {{site.data.alerts.callout_info}} -Your application should [use a retry loop to handle transaction errors]({% link {{ page.version.version }}/query-behavior-troubleshooting.md %}#transaction-retry-errors) that can occur under [contention]({{ link_prefix }}performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention). +When running under the default [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) isolation level, your application should [use a retry loop to handle transaction errors]({% link {{ page.version.version }}/query-behavior-troubleshooting.md %}#transaction-retry-errors) that can occur under [contention]({{ link_prefix }}performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention). Client-side retry handling is **not** necessary under [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) isolation. {{site.data.alerts.end}} diff --git a/src/current/_includes/v24.1/faq/clock-synchronization-effects.md b/src/current/_includes/v24.1/faq/clock-synchronization-effects.md index daa32468d85..e335a97fc3e 100644 --- a/src/current/_includes/v24.1/faq/clock-synchronization-effects.md +++ b/src/current/_includes/v24.1/faq/clock-synchronization-effects.md @@ -1,6 +1,6 @@ CockroachDB requires moderate levels of clock synchronization to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed, it spontaneously shuts down. This offset defaults to 500ms but can be changed via the [`--max-offset`]({% link {{ page.version.version }}/cockroach-start.md %}#flags-max-offset) flag when starting each node. -While [serializable consistency](https://wikipedia.org/wiki/Serializability) is maintained regardless of clock skew, skew outside the configured clock offset bounds can result in violations of single-key linearizability between causally dependent transactions. It's therefore important to prevent clocks from drifting too far by running [NTP](http://www.ntp.org/) or other clock synchronization software on each node. 
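For reference, one way to spot-check the new isolation metrics documented above is to query `crdb_internal.node_metrics` from a SQL shell. This is a sketch for illustration only; the query is not part of this change, and `crdb_internal` tables are not a stable API and may change between versions.

~~~ sql
-- Inspect the READ COMMITTED isolation metrics on the current node.
-- (Illustrative only; crdb_internal is subject to change.)
SELECT name, value
FROM crdb_internal.node_metrics
WHERE name LIKE 'sql.txn.isolation.%read_committed';
~~~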
+Regardless of clock skew, [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) and [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) transactions both serve globally consistent ("non-stale") reads and [commit atomically]({% link {{ page.version.version }}/developer-basics.md %}#how-transactions-work-in-cockroachdb). However, skew outside the configured clock offset bounds can result in violations of single-key linearizability between causally dependent transactions. It's therefore important to prevent clocks from drifting too far by running [NTP](http://www.ntp.org/) or other clock synchronization software on each node. In very rare cases, CockroachDB can momentarily run with a stale clock. This can happen when using vMotion, which can suspend a VM running CockroachDB, migrate it to different hardware, and resume it. This will cause CockroachDB to be out of sync for a short period before it jumps to the correct time. During this window, it would be possible for a client to read stale data and write data derived from stale reads. By enabling the `server.clock.forward_jump_check_enabled` [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}), you can be alerted when the CockroachDB clock jumps forward, indicating it had been running with a stale clock. To protect against this on vMotion, however, use the [`--clock-device`](cockroach-start.html#general) flag to specify a [PTP hardware clock](https://www.kernel.org/doc/html/latest/driver-api/ptp.html) for CockroachDB to use when querying the current time. When doing so, you should not enable `server.clock.forward_jump_check_enabled` because forward jumps will be expected and harmless. For more information on how `--clock-device` interacts with vMotion, see [this blog post](https://core.vmware.com/blog/cockroachdb-vmotion-support-vsphere-7-using-precise-timekeeping). diff --git a/src/current/_includes/v24.1/metric-names.md b/src/current/_includes/v24.1/metric-names.md index fa45371b4fa..5826fbd9319 100644 --- a/src/current/_includes/v24.1/metric-names.md +++ b/src/current/_includes/v24.1/metric-names.md @@ -269,6 +269,8 @@ Name | Description `sql.txn.begin.count` | Number of SQL transaction BEGIN statements `sql.txn.commit.count` | Number of SQL transaction COMMIT statements `sql.txn.contended.count` | Number of SQL transactions that experienced contention +`sql.txn.isolation.executed_at.read_committed` | Number of times a [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) transaction was executed. +`sql.txn.isolation.upgraded_from.read_committed` | Number of times a [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) transaction was automatically upgraded to a stronger isolation level. 
`sql.txn.rollback.count` | Number of SQL transaction ROLLBACK statements `sql.update.count` | Number of SQL UPDATE statements `storage.l0-level-score` | Compaction score of level 0 diff --git a/src/current/_includes/v24.1/misc/database-terms.md b/src/current/_includes/v24.1/misc/database-terms.md index d302b46ab7a..78663985607 100644 --- a/src/current/_includes/v24.1/misc/database-terms.md +++ b/src/current/_includes/v24.1/misc/database-terms.md @@ -19,7 +19,7 @@ A set of operations performed on a database that satisfy the requirements of [AC A [state of conflict]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention) that occurs when: - A [transaction]({% link {{ page.version.version }}/transactions.md %}) is unable to complete due to another concurrent or recent transaction attempting to write to the same data. This is also called *lock contention*. -- A transaction is [automatically retried]({% link {{ page.version.version }}/transactions.md %}#automatic-retries) because it could not be placed into a [serializable ordering]({% link {{ page.version.version }}/demo-serializable.md %}) among all of the currently executing transactions. This is also called a *serializability conflict*. If the automatic retry is not possible or fails, a [*transaction retry error*](../transaction-retry-error-reference.html) is emitted to the client, requiring the client application to [retry the transaction](../transaction-retry-error-reference.html#client-side-retry-handling). +- A transaction is [automatically retried]({% link {{ page.version.version }}/transactions.md %}#automatic-retries) because it could not be placed into a [serializable ordering]({% link {{ page.version.version }}/demo-serializable.md %}) among all of the currently executing transactions. This is also called a *serialization conflict*. If the automatic retry is not possible or fails, a [*transaction retry error*](../transaction-retry-error-reference.html) is emitted to the client, requiring a client application running under `SERIALIZABLE` isolation to [retry the transaction](../transaction-retry-error-reference.html#client-side-retry-handling). Steps should be taken to [reduce transaction contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#reduce-transaction-contention) in the first place. diff --git a/src/current/_includes/v24.1/misc/enterprise-features.md b/src/current/_includes/v24.1/misc/enterprise-features.md index 3d0f06ca582..258370890be 100644 --- a/src/current/_includes/v24.1/misc/enterprise-features.md +++ b/src/current/_includes/v24.1/misc/enterprise-features.md @@ -2,6 +2,7 @@ Feature | Description --------+------------------------- +[Read Committed isolation]({% link {{ page.version.version }}/read-committed.md %}) | Achieve predictable query performance at high workload concurrencies, but without guaranteed transaction serializability. [Follower Reads]({% link {{ page.version.version }}/follower-reads.md %}) | Reduce read latency in multi-region deployments by using the closest replica at the expense of reading slightly historical data. [Multi-Region Capabilities]({% link {{ page.version.version }}/multiregion-overview.md %}) | Row-level control over where your data is stored to help you reduce read and write latency and meet regulatory requirements. 
[PL/pgSQL]({% link {{ page.version.version }}/plpgsql.md %}) | Use a procedural language in [user-defined functions]({% link {{ page.version.version }}/user-defined-functions.md %}) and [stored procedures]({% link {{ page.version.version }}/stored-procedures.md %}) to improve performance and enable more complex queries. diff --git a/src/current/_includes/v24.1/performance/increase-server-side-retries.md b/src/current/_includes/v24.1/performance/increase-server-side-retries.md index a461aac5804..f43b8a9abf8 100644 --- a/src/current/_includes/v24.1/performance/increase-server-side-retries.md +++ b/src/current/_includes/v24.1/performance/increase-server-side-retries.md @@ -1,3 +1,5 @@ - [Send statements in transactions as a single batch]({% link {{ page.version.version }}/transactions.md %}#batched-statements). Batching allows CockroachDB to [automatically retry]({% link {{ page.version.version }}/transactions.md %}#automatic-retries) a transaction when [previous reads are invalidated]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#read-refreshing) at a [pushed timestamp]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#timestamp-cache). When a multi-statement transaction is not batched, and takes more than a single round trip, CockroachDB cannot automatically retry the transaction. For an example showing how to break up large transactions in an application, see [Break up large transactions into smaller units of work](build-a-python-app-with-cockroachdb-sqlalchemy.html#break-up-large-transactions-into-smaller-units-of-work). + + - Limit the size of the result sets of your transactions to under 16KB, so that CockroachDB is more likely to [automatically retry]({% link {{ page.version.version }}/transactions.md %}#automatic-retries) when [previous reads are invalidated]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#read-refreshing) at a [pushed timestamp]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#timestamp-cache). When a transaction returns a result set over 16KB, even if that transaction has been sent as a single batch, CockroachDB cannot automatically retry the transaction. You can change the results buffer size for all new sessions using the `sql.defaults.results_buffer.size` [cluster setting](cluster-settings.html), or for a specific session using the `results_buffer_size` [session variable](set-vars.html). \ No newline at end of file diff --git a/src/current/_includes/v24.1/performance/transaction-retry-error-actions.md b/src/current/_includes/v24.1/performance/transaction-retry-error-actions.md index d78282c174c..b528f7b4f84 100644 --- a/src/current/_includes/v24.1/performance/transaction-retry-error-actions.md +++ b/src/current/_includes/v24.1/performance/transaction-retry-error-actions.md @@ -1,5 +1,5 @@ In most cases, the correct actions to take when encountering transaction retry errors are: -1. Update your application to support [client-side retry handling]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#client-side-retry-handling) when transaction retry errors are encountered. Follow the guidance for the [specific error type]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#transaction-retry-error-reference). +1. 
Under `SERIALIZABLE` isolation, update your application to support [client-side retry handling]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#client-side-retry-handling) when transaction retry errors are encountered. Follow the guidance for the [specific error type]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#transaction-retry-error-reference). 1. Take steps to [minimize transaction retry errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#minimize-transaction-retry-errors) in the first place. This means reducing transaction contention overall, and increasing the likelihood that CockroachDB can [automatically retry]({% link {{ page.version.version }}/transactions.md %}#automatic-retries) a failed transaction. \ No newline at end of file diff --git a/src/current/_includes/v24.1/sql/isolation-levels.md b/src/current/_includes/v24.1/sql/isolation-levels.md index df440bda49a..f6203ad37c2 100644 --- a/src/current/_includes/v24.1/sql/isolation-levels.md +++ b/src/current/_includes/v24.1/sql/isolation-levels.md @@ -1,5 +1,5 @@ Isolation is an element of [ACID transactions](https://en.wikipedia.org/wiki/ACID) that determines how concurrency is controlled, and ultimately guarantees consistency. CockroachDB offers two transaction isolation levels: [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) and [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}). -By default, CockroachDB executes all transactions at the strongest ANSI transaction isolation level: `SERIALIZABLE`, which permits no concurrency anomalies. To place all transactions in a serializable ordering, `SERIALIZABLE` isolation may require [transaction restarts]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}). For a demonstration of how `SERIALIZABLE` prevents write skew anomalies, see [Serializable Transactions]({% link {{ page.version.version }}/demo-serializable.md %}). +By default, CockroachDB executes all transactions at the strongest ANSI transaction isolation level: `SERIALIZABLE`, which permits no concurrency anomalies. To place all transactions in a serializable ordering, `SERIALIZABLE` isolation may require [transaction restarts]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) and [client-side retry handling]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#client-side-retry-handling). For a demonstration of how `SERIALIZABLE` prevents anomalies such as write skew, see [Serializable Transactions]({% link {{ page.version.version }}/demo-serializable.md %}). -CockroachDB can be configured to execute transactions at [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) instead of `SERIALIZABLE` isolation. If [enabled]({% link {{ page.version.version }}/read-committed.md %}#enable-read-committed-isolation), `READ COMMITTED` is no longer an alias for `SERIALIZABLE` . `READ COMMITTED` permits some concurrency anomalies in exchange for minimizing transaction aborts and [retries]({% link {{ page.version.version }}/developer-basics.md %}#transaction-retries). Depending on your workload requirements, this may be desirable. For more information, see [Read Committed Transactions]({% link {{ page.version.version }}/read-committed.md %}). 
\ No newline at end of file +CockroachDB can be configured to execute transactions at [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) instead of `SERIALIZABLE` isolation. If [enabled]({% link {{ page.version.version }}/read-committed.md %}#enable-read-committed-isolation), `READ COMMITTED` is no longer an alias for `SERIALIZABLE` . `READ COMMITTED` permits some concurrency anomalies in exchange for minimizing transaction aborts and removing the need for client-side retries. Depending on your workload requirements, this may be desirable. For more information, see [Read Committed Transactions]({% link {{ page.version.version }}/read-committed.md %}). \ No newline at end of file diff --git a/src/current/v23.2/read-committed.md b/src/current/v23.2/read-committed.md index 7f3a5b5b9a4..ec7a23d1426 100644 --- a/src/current/v23.2/read-committed.md +++ b/src/current/v23.2/read-committed.md @@ -34,12 +34,14 @@ To make `READ COMMITTED` isolation available to use on a cluster, enable the fol SET CLUSTER SETTING sql.txn.read_committed_isolation.enabled = 'true'; ~~~ -After you enable the cluster setting, you can set `READ COMMITTED` as the [default isolation level](#set-the-default-isolation-level-to-read-committed) or [begin a transaction](#set-the-current-transaction-to-read-committed) as `READ COMMITTED`. +In v23.2, `sql.txn.read_committed_isolation.enabled` is `false` by default. As a result, `READ COMMITTED` transactions are [automatically upgraded to `SERIALIZABLE`]({% link {{ page.version.version }}/transactions.md %}#aliases) unless this setting is enabled. **This differs in v24.1 and later**, where `sql.txn.read_committed_isolation.enabled` is `true` by default. -{{site.data.alerts.callout_info}} -If the cluster setting is not enabled, `READ COMMITTED` transactions will run as `SERIALIZABLE`. +{{site.data.alerts.callout_success}} +Because of this change, upgrading to a later CockroachDB version may affect your application behavior. Check the [**Upgrades of SQL Transaction Isolation Level**]({% link {{ page.version.version }}/ui-sql-dashboard.md %}#upgrades-of-sql-transaction-isolation-level) graph in the DB Console to see whether any transactions are being upgraded to `SERIALIZABLE`. On v24.1 and later, `READ COMMITTED` transactions will run as `READ COMMITTED` unless you set `sql.txn.read_committed_isolation.enabled` explicitly to `false`. {{site.data.alerts.end}} +After you enable the cluster setting, you can set `READ COMMITTED` as the [default isolation level](#set-the-default-isolation-level-to-read-committed) or [begin a transaction](#set-the-current-transaction-to-read-committed) as `READ COMMITTED`. + ### Set the default isolation level to `READ COMMITTED` To set all future transactions to run at `READ COMMITTED` isolation, use one of the following options: diff --git a/src/current/v23.2/ui-sql-dashboard.md b/src/current/v23.2/ui-sql-dashboard.md index 6a28d5a9acd..95c355479a2 100644 --- a/src/current/v23.2/ui-sql-dashboard.md +++ b/src/current/v23.2/ui-sql-dashboard.md @@ -33,6 +33,14 @@ The **SQL Connection Rate** is an average of the number of connection attempts p - In the cluster view, the graph shows the rate of SQL connection attempts to all nodes, with lines for each node. +## Upgrades of SQL Transaction Isolation Level + +- In the node view, the graph shows the total number of times a SQL transaction was upgraded to a stronger isolation level on the selected node. 
+ +- In the cluster view, the graph shows the total number of times a SQL transaction was upgraded to a stronger isolation level across all nodes. + +If this metric is non-zero, then transactions at weaker isolation levels (such as [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %})) are being upgraded to [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) instead. To ensure that `READ COMMITTED` transactions run as `READ COMMITTED`, see [Enable `READ COMMITTED` isolation]({% link {{ page.version.version }}/read-committed.md %}#enable-read-committed-isolation). + ## Open SQL Transactions - In the node view, the graph shows the total number of open SQL transactions on the node. diff --git a/src/current/v24.1/advanced-client-side-transaction-retries.md b/src/current/v24.1/advanced-client-side-transaction-retries.md index fd7b9ee4087..be4bc115727 100644 --- a/src/current/v24.1/advanced-client-side-transaction-retries.md +++ b/src/current/v24.1/advanced-client-side-transaction-retries.md @@ -5,6 +5,10 @@ toc: true docs_area: develop --- +{{site.data.alerts.callout_info}} +Client-side retry handling is **not** necessary under [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) isolation. +{{site.data.alerts.end}} + This page has instructions for authors of [database drivers and ORMs]({% link {{ page.version.version }}/install-client-drivers.md %}) who would like to implement client-side retries in their database driver or ORM for maximum efficiency and ease of use by application developers. {{site.data.alerts.callout_info}} diff --git a/src/current/v24.1/architecture/transaction-layer.md b/src/current/v24.1/architecture/transaction-layer.md index 8a887a54710..7418918a259 100644 --- a/src/current/v24.1/architecture/transaction-layer.md +++ b/src/current/v24.1/architecture/transaction-layer.md @@ -95,7 +95,7 @@ This then lets the node primarily responsible for the range (i.e., the leasehold CockroachDB requires moderate levels of clock synchronization to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the [maximum offset allowed]({% link {{ page.version.version }}/cockroach-start.md %}#flags-max-offset), **it crashes immediately**. -While [serializable consistency](https://wikipedia.org/wiki/Serializability) is maintained regardless of clock skew, skew outside the configured clock offset bounds can result in violations of single-key linearizability between causally dependent transactions. It's therefore important to prevent clocks from drifting too far by running [NTP](http://www.ntp.org/) or other clock synchronization software on each node. +While [serializable consistency](https://wikipedia.org/wiki/Serializability) is maintained under `SERIALIZABLE` isolation regardless of clock skew, skew outside the configured clock offset bounds can result in violations of single-key linearizability between causally dependent transactions. It's therefore important to prevent clocks from drifting too far by running [NTP](http://www.ntp.org/) or other clock synchronization software on each node. 
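As a companion to the clock-synchronization guidance above, the following sketch shows how the `server.clock.forward_jump_check_enabled` cluster setting mentioned earlier might be enabled and verified from a SQL shell. It assumes you are not passing `--clock-device` to use a PTP hardware clock; in that case the setting should stay disabled, because forward jumps are expected and harmless.

~~~ sql
-- Alert when the CockroachDB clock jumps forward, indicating it had been
-- running with a stale clock. Leave disabled when using --clock-device.
SET CLUSTER SETTING server.clock.forward_jump_check_enabled = true;

-- Confirm the current value.
SHOW CLUSTER SETTING server.clock.forward_jump_check_enabled;
~~~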
For more detail about the risks that large clock offsets can cause, see [What happens when node clocks are not properly synchronized?]({% link {{ page.version.version }}/operational-faqs.md %}#what-happens-when-node-clocks-are-not-properly-synchronized) @@ -105,7 +105,7 @@ Whenever an operation reads a value, CockroachDB stores the operation's timestam The timestamp cache is a data structure used to store information about the reads performed by [leaseholders]({% link {{ page.version.version }}/architecture/replication-layer.md %}#leases). This is used to ensure that once some transaction *t1* reads a row, another transaction *t2* that comes along and tries to write to that row will be ordered after *t1*, thus ensuring a serial order of transactions, aka serializability. -Whenever a write occurs, its timestamp is checked against the timestamp cache. If the timestamp is earlier than the timestamp cache's latest value, CockroachDB will attempt to push the timestamp for its transaction forward to a later time. Pushing the timestamp might cause the transaction to restart [during the commit time](#commits-phase-2) of the transaction (see [read refreshing](#read-refreshing)). +Whenever a write occurs, its timestamp is checked against the timestamp cache. If the timestamp is earlier than the timestamp cache's latest value, CockroachDB will attempt to push the timestamp for its transaction forward to a later time. Pushing the timestamp might cause the transaction to restart [during the commit time](#commits-phase-2) of the transaction under `SERIALIZABLE` isolation (see [read refreshing](#read-refreshing)). #### Read snapshots @@ -121,7 +121,7 @@ Per-statement read snapshots enable `READ COMMITTED` transactions to resolve [se ### Closed timestamps -Each CockroachDB range tracks a property called its _closed timestamp_, which means that no new writes can ever be introduced at or below that timestamp. The closed timestamp is advanced continuously on the leaseholder, and lags the current time by some target interval. As the closed timestamp is advanced, notifications are sent to each follower. If a range receives a write at a timestamp less than or equal to its closed timestamp, the write is forced to change its timestamp, which might result in a [transaction retry error]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) (see [read refreshing](#read-refreshing)). +Each CockroachDB range tracks a property called its _closed timestamp_, which means that no new writes can ever be introduced at or below that timestamp. The closed timestamp is advanced continuously on the leaseholder, and lags the current time by some target interval. As the closed timestamp is advanced, notifications are sent to each follower. If a range receives a write at a timestamp less than or equal to its closed timestamp, the write is forced to change its timestamp, which might result in a [transaction retry error]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) under `SERIALIZABLE` isolation (see [read refreshing](#read-refreshing)). In other words, a closed timestamp is a promise by the range's [leaseholder]({% link {{ page.version.version }}/architecture/replication-layer.md %}#leases) to its follower replicas that it will not accept writes below that timestamp. Generally speaking, the leaseholder continuously closes timestamps a few seconds in the past. 
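One practical consequence of closed timestamps, sketched below, is that a historical read at or below the closed timestamp can be served by replicas other than the leaseholder (the Follower Reads feature listed earlier). The query is illustrative only; `movr.rides` is a placeholder table name, not part of this change.

~~~ sql
-- Read slightly historical data at a timestamp that is already closed,
-- allowing a follower replica (not just the leaseholder) to serve the read.
-- Table name is a placeholder for illustration.
SELECT count(*)
FROM movr.rides
AS OF SYSTEM TIME follower_read_timestamp();
~~~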
diff --git a/src/current/v24.1/build-a-nodejs-app-with-cockroachdb.md b/src/current/v24.1/build-a-nodejs-app-with-cockroachdb.md index b12b32fac63..e0f3e3f88a2 100644 --- a/src/current/v24.1/build-a-nodejs-app-with-cockroachdb.md +++ b/src/current/v24.1/build-a-nodejs-app-with-cockroachdb.md @@ -11,6 +11,10 @@ docs_area: get_started This tutorial shows you how build a simple Node.js application with CockroachDB and the [node-postgres driver](https://node-postgres.com/). +{{site.data.alerts.callout_info}} +This tutorial assumes you are running under [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) isolation. Client-side retry handling is **not** necessary under [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) isolation. +{{site.data.alerts.end}} + ## Step 1. Start CockroachDB {% include {{ page.version.version }}/setup/sample-setup.md %} diff --git a/src/current/v24.1/build-a-spring-app-with-cockroachdb-jdbc.md b/src/current/v24.1/build-a-spring-app-with-cockroachdb-jdbc.md index c824d0de668..f7c969f9d81 100644 --- a/src/current/v24.1/build-a-spring-app-with-cockroachdb-jdbc.md +++ b/src/current/v24.1/build-a-spring-app-with-cockroachdb-jdbc.md @@ -11,6 +11,10 @@ docs_area: develop This tutorial shows you how to build a [Spring Boot](https://spring.io/projects/spring-boot) web application with CockroachDB, using the [Spring Data JDBC](https://spring.io/projects/spring-data-jdbc) module for data access. The code for the example application is available for download from [GitHub](https://github.com/cockroachlabs/roach-data/tree/master), along with identical examples that use [JPA](https://github.com/cockroachlabs/roach-data/tree/master/roach-data-jpa), [jOOQ](https://github.com/cockroachlabs/roach-data/tree/master/roach-data-jooq), and [MyBatis](https://github.com/cockroachlabs/roach-data/tree/master/roach-data-mybatis) for data access. +{{site.data.alerts.callout_info}} +This tutorial assumes you are running under [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) isolation. Client-side retry handling is **not** necessary under [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) isolation. +{{site.data.alerts.end}} + ## Step 1. Start CockroachDB Choose whether to run a local cluster or a free CockroachDB {{ site.data.products.cloud }} cluster. @@ -784,6 +788,10 @@ On verifying that the transaction is active (using `TransactionSynchronizationMa Transactions may require retries if they experience deadlock or [transaction contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention) that cannot be resolved without allowing [serialization]({% link {{ page.version.version }}/demo-serializable.md %}) anomalies. To handle transactions that are aborted due to transient serialization errors, we highly recommend writing [client-side transaction retry logic]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#client-side-retry-handling) into applications written on CockroachDB. +{{site.data.alerts.callout_info}} +Client-side retry handling is **not** necessary under [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) isolation. +{{site.data.alerts.end}} + In this application, transaction retry logic is written into the methods of the `RetryableTransactionAspect` class. This class is declared an aspect with the `@Aspect` annotation. 
The `@Order` annotation on this aspect class is passed `Ordered.LOWEST_PRECEDENCE-2`, a level of precedence above the primary transaction advisor. This indicates that the transaction retry advisor must run outside the context of a transaction. Here are the contents of [`RetryableTransactionAspect.java`](https://github.com/cockroachlabs/roach-data/blob/master/roach-data-jdbc/src/main/java/io/roach/data/jdbc/RetryableTransactionAspect.java): {% include_cached copy-clipboard.html %} diff --git a/src/current/v24.1/demo-serializable.md b/src/current/v24.1/demo-serializable.md index e5a9ad306e9..8bcbc44ca82 100644 --- a/src/current/v24.1/demo-serializable.md +++ b/src/current/v24.1/demo-serializable.md @@ -545,5 +545,5 @@ Explore other core CockroachDB benefits and features: You might also want to learn more about how transactions work in CockroachDB and in general: - [Transactions Overview]({% link {{ page.version.version }}/transactions.md %}) -- [Real Transactions are Serializable](https://www.cockroachlabs.com/blog/acid-rain/) - [What Write Skew Looks Like](https://www.cockroachlabs.com/blog/what-write-skew-looks-like/) +- [Read Committed Transactions]({% link {{ page.version.version }}/read-committed.md %}) \ No newline at end of file diff --git a/src/current/v24.1/developer-basics.md b/src/current/v24.1/developer-basics.md index 3c48e82fccf..795e8e3a1d5 100644 --- a/src/current/v24.1/developer-basics.md +++ b/src/current/v24.1/developer-basics.md @@ -22,17 +22,23 @@ Managing transactions is an important part of CockroachDB application developmen #### Serializability and transaction contention -By default, CockroachDB uses [`SERIALIZABLE`](https://wikipedia.org/wiki/Serializability) transaction [isolation](https://wikipedia.org/wiki/Isolation_(database_systems)) (the "I" of ACID semantics). If transactions are executed concurrently, the final state of the database will appear as if the transactions were executed serially. `SERIALIZABLE` isolation, the strictest level of isolation, provides the highest level of data consistency and protects against concurrency-based attacks and bugs. +By default, CockroachDB uses [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) transaction [isolation](https://wikipedia.org/wiki/Isolation_(database_systems)) (the "I" of ACID semantics). If transactions are executed concurrently, the final state of the database will appear as if the transactions were executed serially. `SERIALIZABLE` isolation, the strictest level of isolation, provides the highest level of data consistency and protects against concurrency anomalies. -To guarantee `SERIALIZABLE` isolation, CockroachDB [locks]({% link {{ page.version.version }}/crdb-internal.md %}#cluster_locks) the data targeted by an open transaction. If a separate transaction attempts to modify data that are locked by an open transaction, the newest transaction will not succeed, as committing it could result in a violation of the `SERIALIZABLE` isolation level. This scenario is called *transaction contention*, and should be avoided when possible. For a more detailed explanation of transaction contention, and tips on how to avoid it, see [Transaction contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention). +To guarantee data consistency, CockroachDB [locks]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#concurrency-control) the data targeted by an open transaction. 
If a separate transaction attempts to modify data that are locked by an open transaction, the newest transaction will not succeed, as committing it could result in incorrect data. This scenario is called *transaction contention*, and should be avoided when possible. For a more detailed explanation of transaction contention, and tips on how to avoid it, see [Transaction contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention). #### Transaction retries -In some cases, [transaction contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention) is unavoidable. If a transaction fails due to contention, CockroachDB will automatically retry the transaction, or will return a [transaction retry error]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) to the client. Most [official CockroachDB client libraries]({% link {{ page.version.version }}/install-client-drivers.md %}) include a transaction-retrying wrapper function to make writing your persistence layer easier. If your framework's client library does not include a retry wrapper, you will need to write transaction retry logic in your application. We go into more detail about transaction retries later in the guide, in [Retry Transactions]({% link {{ page.version.version }}/advanced-client-side-transaction-retries.md %}). +In some cases, [transaction contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention) is unavoidable. If a transaction fails due to contention, CockroachDB will automatically retry the transaction, or will return a [transaction retry error]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) to the client. + +Most [official CockroachDB client libraries]({% link {{ page.version.version }}/install-client-drivers.md %}) include a transaction-retrying wrapper function to make writing your persistence layer easier. If your framework's client library does not include a retry wrapper, you will need to write transaction retry logic in your application. We go into more detail about transaction retries later in the guide, in [Retry Transactions]({% link {{ page.version.version }}/advanced-client-side-transaction-retries.md %}). + +{{site.data.alerts.callout_info}} +Client-side retry handling is **not** necessary under [`READ COMMITTED`](#read-committed-isolation) isolation. +{{site.data.alerts.end}} #### Read Committed isolation -CockroachDB can be configured to execute transactions at [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) instead of `SERIALIZABLE` isolation. `READ COMMITTED` permits some concurrency anomalies in exchange for minimizing transaction aborts and [retries]({% link {{ page.version.version }}/developer-basics.md %}#transaction-retries). Depending on your workload requirements, this may be desirable. For more information, see [Read Committed Transactions]({% link {{ page.version.version }}/read-committed.md %}). +CockroachDB can be configured to execute transactions at [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) instead of `SERIALIZABLE` isolation. `READ COMMITTED` permits some concurrency anomalies in exchange for minimizing transaction aborts and removing the need for client-side retries. Depending on your workload requirements, this may be desirable. 
For more information, see [Read Committed Transactions]({% link {{ page.version.version }}/read-committed.md %}). ### How applications interact with CockroachDB diff --git a/src/current/v24.1/frequently-asked-questions.md b/src/current/v24.1/frequently-asked-questions.md index 8e373452d0c..d4da65508cf 100644 --- a/src/current/v24.1/frequently-asked-questions.md +++ b/src/current/v24.1/frequently-asked-questions.md @@ -76,17 +76,15 @@ In a CockroachDB cluster spread across multiple geographic regions, the round-tr For short-term failures, such as a server restart, CockroachDB uses Raft to continue seamlessly as long as a majority of replicas remain available. Raft makes sure that a new "leader" for each group of replicas is elected if the former leader fails, so that transactions can continue and affected replicas can rejoin their group once they’re back online. For longer-term failures, such as a server/rack going down for an extended period of time or a datacenter outage, CockroachDB automatically rebalances replicas from the missing nodes, using the unaffected replicas as sources. Using capacity information from the gossip network, new locations in the cluster are identified and the missing replicas are re-replicated in a distributed fashion using all available nodes and the aggregate disk and network bandwidth of the cluster. -### How is CockroachDB strongly-consistent? +### How is CockroachDB strongly consistent? -CockroachDB guarantees [serializable SQL transactions]({% link {{ page.version.version }}/demo-serializable.md %}), the highest isolation level defined by the SQL standard. It does so by combining the Raft consensus algorithm for writes and a custom time-based synchronization algorithms for reads. - -- Stored data is versioned with MVCC, so [reads simply limit their scope to the data visible at the time the read transaction started]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#time-and-hybrid-logical-clocks). +By default, CockroachDB guarantees [`SERIALIZABLE` SQL transactions]({% link {{ page.version.version }}/demo-serializable.md %}), the highest isolation level defined by the SQL standard. It does so by combining the Raft consensus algorithm for writes and a custom time-based synchronization algorithm for reads. - Writes are serviced using the [Raft consensus algorithm](https://raft.github.io/), a popular alternative to Paxos. A consensus algorithm guarantees that any majority of replicas together always agree on whether an update was committed successfully. Updates (writes) must reach a majority of replicas (2 out of 3 by default) before they are considered committed. - To ensure that a write transaction does not interfere with read transactions that start after it, CockroachDB also uses a [timestamp cache]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#timestamp-cache) which remembers when data was last read by ongoing transactions. +- Stored data is versioned with [MVCC]({% link {{ page.version.version }}/architecture/storage-layer.md %}#mvcc), so under `SERIALIZABLE` isolation, [reads simply limit their scope to the data visible at the time the read transaction started]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#time-and-hybrid-logical-clocks). - This ensures that clients always observe serializable consistency with regards to other concurrent transactions. 
+To ensure that a write transaction does not interfere with read transactions that start after it, CockroachDB also uses a [timestamp cache]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#timestamp-cache) which remembers when data was last read by an ongoing transaction. This ensures that clients can always observe `SERIALIZABLE` consistency while issuing multiple concurrent transactions. ### How is CockroachDB both highly available and strongly consistent? diff --git a/src/current/v24.1/migrate-from-oracle.md b/src/current/v24.1/migrate-from-oracle.md index 7606c9d9521..fe0dc71455d 100644 --- a/src/current/v24.1/migrate-from-oracle.md +++ b/src/current/v24.1/migrate-from-oracle.md @@ -325,11 +325,11 @@ The last phase of the migration process is to change the [transactional behavior ### Transactions, locking, and concurrency control -Both Oracle and CockroachDB support [multi-statement transactions]({% link {{ page.version.version }}/transactions.md %}), which are atomic and guarantee ACID semantics. However, CockroachDB operates in a serializable isolation mode while Oracle defaults to read committed, which can create both non-repeatable reads and phantom reads when a transaction reads data twice. It is typical that Oracle developers will use `SELECT FOR UPDATE` to work around read committed issues. The [`SELECT FOR UPDATE`]({% link {{ page.version.version }}/select-for-update.md %}) statement is also supported in CockroachDB. +Both Oracle and CockroachDB support [multi-statement transactions]({% link {{ page.version.version }}/transactions.md %}), which are atomic and guarantee ACID semantics. However, CockroachDB operates under [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) isolation by default, while Oracle defaults to [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}), which can create both [non-repeatable reads and phantom reads]({% link {{ page.version.version }}/read-committed.md %}#non-repeatable-reads-and-phantom-reads) when a transaction reads data twice. It is typical that Oracle developers will use `SELECT FOR UPDATE` to work around `READ COMMITTED` concurrency anomalies. Both the [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) isolation level and the [`SELECT FOR UPDATE`]({% link {{ page.version.version }}/select-for-update.md %}) statement are supported in CockroachDB. Regarding locks, Cockroach utilizes a [lightweight latch]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#latch-manager) to serialize access to common keys across concurrent transactions. Oracle and CockroachDB transaction control flows only have a few minor differences; for more details, refer to [Transactions - SQL statements]({% link {{ page.version.version }}/transactions.md %}#sql-statements). -As CockroachDB does not allow serializable anomalies, [transactions]({% link {{ page.version.version }}/begin-transaction.md %}) may experience deadlocks or [read/write contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention). This is expected during concurrency on the same keys. These can be addressed with either [automatic retries]({% link {{ page.version.version }}/transactions.md %}#automatic-retries) or [client-side transaction retry handling]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#client-side-retry-handling). 
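To make the Oracle comparison above concrete, here is a minimal sketch of the pattern an Oracle developer might carry over: a `READ COMMITTED` transaction that takes an exclusive lock with `SELECT ... FOR UPDATE` before writing. The `accounts` table and its columns are hypothetical, used only for illustration.

~~~ sql
-- Hypothetical schema for illustration.
BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Lock the row so the read-then-write below is not interleaved with a
-- concurrent writer.
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;

UPDATE accounts SET balance = balance - 100 WHERE id = 1;

COMMIT;
~~~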
+Because CockroachDB does not allow serializable anomalies under `SERIALIZABLE` isolation, [transactions]({% link {{ page.version.version }}/begin-transaction.md %}) may experience deadlocks or [read/write contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention). This is expected during concurrency on the same keys. These can be addressed with either [automatic retries]({% link {{ page.version.version }}/transactions.md %}#automatic-retries) or [client-side transaction retry handling]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#client-side-retry-handling). ### SQL dialect diff --git a/src/current/v24.1/movr-flask-application.md b/src/current/v24.1/movr-flask-application.md index 34ed44f87a0..1cc432ceae8 100644 --- a/src/current/v24.1/movr-flask-application.md +++ b/src/current/v24.1/movr-flask-application.md @@ -9,6 +9,10 @@ This page guides you through developing a globally-available web application. It {% comment %} {% include {{ page.version.version }}/misc/movr-live-demo.md %} {% endcomment %} +{{site.data.alerts.callout_info}} +This tutorial assumes you are running under [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) isolation. Client-side retry handling is **not** necessary under [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) isolation. +{{site.data.alerts.end}} + ## Before you begin Before you begin this section, complete the previous section of the tutorial, [Set Up a Virtual Environment for Developing Global Applications]({% link {{ page.version.version }}/movr-flask-setup.md %}). diff --git a/src/current/v24.1/performance-best-practices-overview.md b/src/current/v24.1/performance-best-practices-overview.md index adefe963de7..531200da56c 100644 --- a/src/current/v24.1/performance-best-practices-overview.md +++ b/src/current/v24.1/performance-best-practices-overview.md @@ -338,15 +338,15 @@ When the `transaction_rows_read_err` [session setting]({% link {{ page.version.v - They operate on table rows with the same index key values (either on [primary keys]({% link {{ page.version.version }}/primary-key.md %}) or secondary [indexes]({% link {{ page.version.version }}/indexes.md %})). - At least one of the transactions holds a [write intent]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#write-intents) or exclusive [locking read]({% link {{ page.version.version }}/select-for-update.md %}#lock-strengths) on the data. -By default under [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) isolation, transactions that operate on the same index key values (specifically, that operate on the same [column family]({% link {{ page.version.version }}/column-families.md %}) for a given index key) are strictly serialized to obey transaction isolation semantics. To maintain this isolation, writing transactions ["lock" rows]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#writing) to prevent interactions with concurrent transactions. +Writing transactions ["lock" rows]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#writing) to prevent interactions with concurrent transactions. Locking reads issued with [`SELECT ... FOR UPDATE`]({% link {{ page.version.version }}/select-for-update.md %}) perform a similar function by placing an [*exclusive lock*]({% link {{ page.version.version }}/select-for-update.md %}#lock-strengths) on rows, which can cause contention. 
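When diagnosing the lock contention described above, one way to see which keys currently have waiting transactions is to query the `crdb_internal.cluster_locks` table referenced earlier in this change. The query below is a sketch; `crdb_internal` is not a stable interface, and column names may vary by version.

~~~ sql
-- Show keys that currently have waiters, i.e., active lock contention.
-- (Illustrative only; crdb_internal is subject to change.)
SELECT database_name, table_name, lock_key_pretty, txn_id, granted
FROM crdb_internal.cluster_locks
WHERE contended = true;
~~~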
-Locking reads issued with [`SELECT ... FOR UPDATE`]({% link {{ page.version.version }}/select-for-update.md %}) perform a similar function by placing an [*exclusive lock*]({% link {{ page.version.version }}/select-for-update.md %}#lock-strengths) on rows, which can cause contention for both `SERIALIZABLE` and [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) transactions. +By default under [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) isolation, transactions that operate on the same index key values (specifically, that operate on the same [column family]({% link {{ page.version.version }}/column-families.md %}) for a given index key) are strictly serialized. To maintain this isolation, `SERIALIZABLE` transactions [refresh their reads]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#read-refreshing) at commit time to verify that the values they read were not subsequently updated by other, concurrent transactions. If read refreshing is unsuccessful, then the transaction must be retried. [When transactions are experiencing contention]({% link {{ page.version.version }}/performance-recipes.md %}#indicators-that-your-application-is-experiencing-transaction-contention), you may observe: - [Delays in query completion]({% link {{ page.version.version }}/query-behavior-troubleshooting.md %}#hanging-or-stuck-queries). This occurs when multiple transactions are trying to write to the same "locked" data at the same time, making a transaction unable to complete. This is also known as *lock contention*. -- [Transaction retries]({% link {{ page.version.version }}/transactions.md %}#automatic-retries) performed automatically by CockroachDB. This occurs if a transaction cannot be placed into a [serializable ordering]({% link {{ page.version.version }}/demo-serializable.md %}) among all of the currently-executing transactions. This is also called a *serializability conflict*. -- [Transaction retry errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}), which are emitted to your client when an automatic retry is not possible or fails. Your application must address transaction retry errors with [client-side retry handling]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#client-side-retry-handling). +- [Transaction retries]({% link {{ page.version.version }}/transactions.md %}#automatic-retries) performed automatically by CockroachDB. This occurs if a transaction cannot be placed into a serializable ordering among all of the currently-executing transactions. This is also called a *serialization conflict*. +- [Transaction retry errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}), which are emitted to your client when an automatic retry is not possible or fails. Under `SERIALIZABLE` isolation, your application must address transaction retry errors with [client-side retry handling]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#client-side-retry-handling). - [Cluster hot spots](#hot-spots). To mitigate these effects, [reduce the causes of transaction contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#reduce-transaction-contention) and [reduce hot spots](#reduce-hot-spots). For further background on transaction contention, see [What is Database Contention, and Why Should You Care?](https://www.cockroachlabs.com/blog/what-is-database-contention/). 
diff --git a/src/current/v24.1/performance-recipes.md b/src/current/v24.1/performance-recipes.md index 277a5cf2427..34364d6924f 100644 --- a/src/current/v24.1/performance-recipes.md +++ b/src/current/v24.1/performance-recipes.md @@ -87,7 +87,7 @@ This section provides solutions for common performance issues in your applicatio [Transaction contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention) is a state of conflict that occurs when: - A [transaction]({% link {{ page.version.version }}/transactions.md %}) is unable to complete due to another concurrent or recent transaction attempting to write to the same data. This is also called *lock contention*. -- A transaction is [automatically retried]({% link {{ page.version.version }}/transactions.md %}#automatic-retries) because it could not be placed into a [serializable ordering]({% link {{ page.version.version }}/demo-serializable.md %}) among all of the currently-executing transactions. If the automatic retry is not possible or fails, a [*transaction retry error*]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) is emitted to the client, requiring the client application to [retry the transaction]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#client-side-retry-handling). This is also called a *serialization conflict*, or an *isolation conflict*. +- A transaction is [automatically retried]({% link {{ page.version.version }}/transactions.md %}#automatic-retries) because it could not be placed into a [serializable ordering]({% link {{ page.version.version }}/demo-serializable.md %}) among all of the currently-executing transactions. If the automatic retry is not possible or fails, a [*transaction retry error*]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) is emitted to the client, requiring a client application running under [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) isolation to [retry the transaction]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#client-side-retry-handling). This is also called a *serialization conflict*, or an *isolation conflict*. #### Indicators that your application is experiencing transaction contention @@ -122,7 +122,7 @@ If lock contention occurred in the past, you can [identify the transactions and These are indicators that a transaction has failed due to [contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention). -- A [transaction retry error]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) with `SQLSTATE: 40001`, the string [`restart transaction`]({% link {{ page.version.version }}/common-errors.md %}#restart-transaction), and an error code such as [`RETRY_WRITE_TOO_OLD`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#retry_write_too_old) or [`RETRY_SERIALIZABLE`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#retry_serializable), is emitted to the client. 
+- A [transaction retry error]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) with `SQLSTATE: 40001`, the string [`restart transaction`]({% link {{ page.version.version }}/common-errors.md %}#restart-transaction), and an error code such as [`RETRY_WRITE_TOO_OLD`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#retry_write_too_old) or [`RETRY_SERIALIZABLE`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#retry_serializable), is emitted to the client. These errors are typically seen under [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) and not [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) isolation. - Querying the [`crdb_internal.transaction_contention_events`]({% link {{ page.version.version }}/crdb-internal.md %}#transaction_contention_events) table `WHERE contention_type='SERIALIZATION_CONFLICT'` indicates that your transactions have experienced serialization conflicts. - This is also shown in the **Transaction Executions** view on the **Insights** page ([CockroachDB {{ site.data.products.cloud }} Console](https://www.cockroachlabs.com/docs/cockroachcloud/insights-page#transaction-executions-view) and [DB Console]({% link {{ page.version.version }}/ui-insights-page.md %}#transaction-executions-view)). Transaction executions will display the [**Failed Execution** insight due to a serialization conflict]({% link {{ page.version.version }}/ui-insights-page.md %}#serialization-conflict-due-to-transaction-contention). @@ -136,7 +136,7 @@ These are indicators that transaction retries occurred in the past: Identify the transactions that are in conflict, and unblock them if possible. In general, take steps to [reduce transaction contention](#reduce-transaction-contention). -In addition, implement [client-side retry handling]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#client-side-retry-handling) so that your application can respond to [transaction retry errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) that are emitted when CockroachDB cannot [automatically retry]({% link {{ page.version.version }}/transactions.md %}#automatic-retries) a transaction. +When running under `SERIALIZABLE` isolation, implement [client-side retry handling]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#client-side-retry-handling) so that your application can respond to [transaction retry errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) that are emitted when CockroachDB cannot [automatically retry]({% link {{ page.version.version }}/transactions.md %}#automatic-retries) a transaction. ##### Identify conflicting transactions diff --git a/src/current/v24.1/read-committed.md b/src/current/v24.1/read-committed.md index c7ef15c200b..c90dd3c0ba0 100644 --- a/src/current/v24.1/read-committed.md +++ b/src/current/v24.1/read-committed.md @@ -5,9 +5,7 @@ toc: true docs_area: deploy --- -{{site.data.alerts.callout_info}} -{% include feature-phases/preview.md %} -{{site.data.alerts.end}} +{% include enterprise-feature.md %} `READ COMMITTED` is one of two [transaction isolation levels](https://wikipedia.org/wiki/Isolation_(database_systems)) supported on CockroachDB. 
By default, CockroachDB uses the [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) isolation level, which is the strongest [ANSI transaction isolation level](https://wikipedia.org/wiki/Isolation_(database_systems)#Isolation_levels). @@ -27,17 +25,10 @@ If your workload is already running well under `SERIALIZABLE` isolation, Cockroa ## Enable `READ COMMITTED` isolation -To make `READ COMMITTED` isolation available to use on a cluster, enable the following [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}) in the SQL shell: +By default, the `sql.txn.read_committed_isolation.enabled` [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}) is `true`, enabling `READ COMMITTED` transactions. If the cluster setting is `false`, `READ COMMITTED` transactions will run as `SERIALIZABLE`. -{% include_cached copy-clipboard.html %} -~~~ sql -SET CLUSTER SETTING sql.txn.read_committed_isolation.enabled = 'true'; -~~~ - -After you enable the cluster setting, you can set `READ COMMITTED` as the [default isolation level](#set-the-default-isolation-level-to-read-committed) or [begin a transaction](#set-the-current-transaction-to-read-committed) as `READ COMMITTED`. - -{{site.data.alerts.callout_info}} -If the cluster setting is not enabled, `READ COMMITTED` transactions will run as `SERIALIZABLE`. +{{site.data.alerts.callout_success}} +To check whether any transactions are being upgraded to `SERIALIZABLE`, see the [**Upgrades of SQL Transaction Isolation Level**]({% link {{ page.version.version }}/ui-sql-dashboard.md %}#upgrades-of-sql-transaction-isolation-level) graph in the DB Console. {{site.data.alerts.end}} ### Set the default isolation level to `READ COMMITTED` @@ -154,17 +145,16 @@ Starting a transaction as `READ COMMITTED` does not affect the [default isolatio - You can mitigate concurrency anomalies by issuing [locking reads](#locking-reads) in `READ COMMITTED` transactions. These statements can block concurrent transactions that are issuing writes or other locking reads on the same rows. -{% comment %} -- When using `READ COMMITTED` isolation, you do not need to implement [client-side retries]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#client-side-retry-handling) to handle `40001` errors. This is because under [transaction contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention), `READ COMMITTED` transactions will **not** return [serialization errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) to applications. -{% endcomment %} +- When using `READ COMMITTED` isolation, you do **not** need to implement [client-side retries]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#client-side-retry-handling) to handle [serialization errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) under [transaction contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention). `READ COMMITTED` transactions never return [`RETRY_SERIALIZABLE`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#retry_serializable) errors, and will only return `40001` errors in limited cases, as described in the following points. 
-`READ COMMITTED` transactions can abort in the following scenarios: + +`READ COMMITTED` transactions can abort in certain scenarios: - Transactions at all isolation levels are subject to [*lock contention*]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention), where a transaction attempts to lock a row that is already locked by a [write]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#write-intents) or [locking read](#locking-reads). In such cases, the later transaction is blocked until the earlier transaction commits or rolls back, thus releasing its lock on the row. Lock contention that produces a *deadlock* between two transactions will result in a transaction abort and a `40001` error ([`ABORT_REASON_ABORTED_RECORD_FOUND`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#abort_reason_aborted_record_found) or [`ABORT_REASON_PUSHER_ABORTED`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#abort_reason_pusher_aborted)) returned to the client. - [Constraint]({% link {{ page.version.version }}/constraints.md %}) violations will abort transactions at all isolation levels. -- In rare cases under `READ COMMITTED` isolation, a [`RETRY_WRITE_TOO_OLD`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#retry_write_too_old) or [`ReadWithinUncertaintyIntervalError`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#readwithinuncertaintyintervalerror) error will be returned to the client if a statement has already begun streaming a partial result set back to the client and cannot retry transparently. By default, the result set is buffered up to 16 KiB before overflowing and being streamed to the client. You can configure the result buffer size using the [`sql.defaults.results_buffer.size`]({% link {{ page.version.version }}/cluster-settings.md %}#setting-sql-defaults-results-buffer-size) cluster setting or the [`results_buffer_size`]({% link {{ page.version.version }}/session-variables.md %}#results-buffer-size) session variable. +- In rare cases under `READ COMMITTED` isolation, a [`RETRY_WRITE_TOO_OLD`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#retry_write_too_old) or [`ReadWithinUncertaintyIntervalError`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#readwithinuncertaintyintervalerror) error can be returned to the client if a statement has already begun streaming a partial result set back to the client and cannot retry transparently. By default, the result set is buffered up to 16 KiB before overflowing and being streamed to the client. You can configure the result buffer size using the [`sql.defaults.results_buffer.size`]({% link {{ page.version.version }}/cluster-settings.md %}#setting-sql-defaults-results-buffer-size) cluster setting or the [`results_buffer_size`]({% link {{ page.version.version }}/session-variables.md %}#results-buffer-size) session variable. ### Concurrency anomalies @@ -408,7 +398,7 @@ To use locking reads: - If you need to read and later update a row within a transaction, use `SELECT ... FOR UPDATE` to acquire an exclusive lock on the row. This guarantees data integrity between the transaction's read and write operations. -- If you need to read the latest version of a row, but not update the row, use `SELECT ... FOR SHARE` to block all concurrent writes on the row without unnecessarily blocking concurrent reads. 
+- If you need to read the latest version of a row, and later update a **different** row within a transaction, use `SELECT ... FOR SHARE` to acquire a shared lock on the row. This blocks all concurrent writes on the row without unnecessarily blocking concurrent reads or other `SELECT ... FOR SHARE` queries. {{site.data.alerts.callout_success}} This allows an application to build cross-row consistency constraints by ensuring that rows that are read in a `READ COMMITTED` transaction will not change before the writes in the same transaction have been committed. diff --git a/src/current/v24.1/select-for-update.md b/src/current/v24.1/select-for-update.md index 61e2ff8e2ab..c7cf320d1eb 100644 --- a/src/current/v24.1/select-for-update.md +++ b/src/current/v24.1/select-for-update.md @@ -61,7 +61,15 @@ Under `READ COMMITTED` isolation, CockroachDB uses the `SELECT ... FOR SHARE` lo Shared locks are not enabled by default for `SERIALIZABLE` transactions. To enable shared locks for `SERIALIZABLE` transactions, configure the [`enable_shared_locking_for_serializable` session setting]({% link {{ page.version.version }}/session-variables.md %}). To perform [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) checks under `SERIALIZABLE` isolation with shared locks, configure the [`enable_implicit_fk_locking_for_serializable` session setting]({% link {{ page.version.version }}/session-variables.md %}). This matches the default `READ COMMITTED` behavior. {{site.data.alerts.end}} -#### Lock behavior under `SERIALIZABLE` isolation +### Lock promotion + +A shared lock can be "promoted" to an exclusive lock. + +If a transaction that holds a shared lock on a row subsequently requests an exclusive lock on the same row, the transaction reacquires the lock, effectively "promoting" the shared lock to an exclusive lock. + +A shared lock cannot be promoted until all other shared locks on the row are released. If two concurrent transactions both attempt to promote their shared locks on the same row, a *deadlock* occurs between the transactions, and one of them aborts with a `40001` error ([`ABORT_REASON_ABORTED_RECORD_FOUND`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#abort_reason_aborted_record_found) or [`ABORT_REASON_PUSHER_ABORTED`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#abort_reason_pusher_aborted)) returned to the client. The remaining open transaction then promotes its lock. + +### Lock behavior under `SERIALIZABLE` isolation {% include {{page.version.version}}/known-limitations/select-for-update-limitations.md %} diff --git a/src/current/v24.1/transaction-retry-error-example.md b/src/current/v24.1/transaction-retry-error-example.md index ddbf114665c..52ebe582a88 100644 --- a/src/current/v24.1/transaction-retry-error-example.md +++ b/src/current/v24.1/transaction-retry-error-example.md @@ -9,6 +9,10 @@ When a [transaction]({% link {{ page.version.version }}/transactions.md %}) is u This page presents an [example of an application's transaction retry logic](#client-side-retry-handling-example), as well as a manner by which that logic can be [tested and verified](#test-transaction-retry-logic) against your application's needs. +{{site.data.alerts.callout_info}} +Client-side retry handling is **not** necessary under [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) isolation.
+{{site.data.alerts.end}} + ## Client-side retry handling example The Python-like pseudocode below shows how to implement an application-level retry loop; it does not require your driver or ORM to implement [advanced retry handling logic]({% link {{ page.version.version }}/advanced-client-side-transaction-retries.md %}), so it can be used from any programming language or environment. In particular, your retry loop must: diff --git a/src/current/v24.1/transaction-retry-error-reference.md b/src/current/v24.1/transaction-retry-error-reference.md index 90682a5a897..f7abfe1925e 100644 --- a/src/current/v24.1/transaction-retry-error-reference.md +++ b/src/current/v24.1/transaction-retry-error-reference.md @@ -9,13 +9,12 @@ When a [transaction]({% link {{ page.version.version }}/transactions.md %}) is u Transaction retry errors fall into two categories: -- **Serialization Errors** indicate that a transaction failed because it could not be placed into a [serializable ordering]({% link {{ page.version.version }}/demo-serializable.md %}) among all of the currently-executing transactions, as required under the default `SERIALIZABLE` isolation level. These errors are generally addressed with client-side intervention, where the client [initiates a restart of the transaction](#client-side-retry-handling), and [adjusts application logic and tunes queries](#minimize-transaction-retry-errors) for greater performance. +- *Serialization errors* indicate that a transaction failed because it could not be placed into a serializable ordering among all of the currently-executing transactions. - {{site.data.alerts.callout_info}} - Under [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) isolation, serializable errors will usually **not** be returned to the client. If a `READ COMMITTED` transaction conflicts with another transaction such that a serialization error ([`RETRY_WRITE_TOO_OLD`](#retry_write_too_old) or [`ReadWithinUncertaintyInterval`](#readwithinuncertaintyintervalerror)) is generated, the transaction attempts to resolve the error by internally retrying the latest conflicting statement at a new [read timestamp]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#read-snapshots). - {{site.data.alerts.end}} + - Under [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) isolation, these errors are generally addressed with client-side intervention, where the client [initiates a restart of the transaction](#client-side-retry-handling), and [adjusts application logic and tunes queries](#minimize-transaction-retry-errors) for greater performance. + - Under [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) isolation, serialization errors occur in [limited cases]({% link {{ page.version.version }}/read-committed.md %}#read-committed-abort) and can be avoided through workload changes or by increasing the [result buffer size](#result-buffer-size). -- **Internal State Errors** indicate that the cluster itself is experiencing an issue, such as being [overloaded]({% link {{ page.version.version }}/ui-overload-dashboard.md %}), which prevents the transaction from completing. These errors generally require both cluster-side and client-side intervention, where an operator addresses an issue with the cluster before the client then [initiates a restart of the transaction](#client-side-retry-handling). 
+- *Internal state errors* indicate that the cluster itself is experiencing an issue, such as being [overloaded]({% link {{ page.version.version }}/ui-overload-dashboard.md %}), which prevents the transaction from completing. These errors generally require both cluster-side and client-side intervention, where an operator addresses an issue with the cluster before the client then [initiates a restart of the transaction](#client-side-retry-handling). All transaction retry errors use the `SQLSTATE` error code `40001`, and emit error messages with the string [`restart transaction`]({% link {{ page.version.version }}/common-errors.md %}#restart-transaction). Further, each error includes a [specific error code](#transaction-retry-error-reference) to assist with targeted troubleshooting. @@ -27,7 +26,7 @@ At the default `SERIALIZABLE` isolation level, CockroachDB always attempts to fi Whenever possible, CockroachDB will [auto-retry a transaction internally]({% link {{ page.version.version }}/transactions.md %}#automatic-retries) without notifying the client. CockroachDB will only send a serialization error to the client when it cannot resolve the error automatically without client-side intervention. -The main reason why CockroachDB cannot auto-retry every serialization error without sending an error to the client is that the SQL language is "conversational" by design. The client can send arbitrary statements to the server during a transaction, receive some results, and then decide to issue other arbitrary statements inside the same transaction based on the server's response. +[`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) transactions can transparently resolve serialization errors by [retrying individual statements]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#read-snapshots) rather than entire transactions. [Client-side retry handling](#client-side-retry-handling) is therefore **not** necessary under `READ COMMITTED` isolation. ## Actions to take @@ -35,7 +34,11 @@ The main reason why CockroachDB cannot auto-retry every serialization error with ### Client-side retry handling -Your application should include client-side retry handling when the statements are sent individually, such as: +{{site.data.alerts.callout_info}} +Client-side retry handling is **not** necessary under [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) isolation. +{{site.data.alerts.end}} + +When running under [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) isolation, your application should include client-side retry handling when the statements are sent individually, such as: ~~~ > BEGIN; @@ -47,7 +50,7 @@ Your application should include client-side retry handling when the statements a > COMMIT; ~~~ -To indicate that a transaction must be retried, CockroachDB signals an error with the `SQLSTATE` error code `40001` (serialization error) and an error message that begins with the string `"restart transaction"`. +To indicate that a transaction must be retried, CockroachDB signals a serialization error with the `SQLSTATE` error code `40001` and an error message that begins with the string `"restart transaction"`. To handle these types of errors, you have the following options: @@ -105,17 +108,21 @@ TransactionRetryWithProtoRefreshError: ... RETRY_WRITE_TOO_OLD ... 
**Description:** -The `RETRY_WRITE_TOO_OLD` error occurs when a transaction _A_ tries to write to a row _R_, but another transaction _B_ that was supposed to be serialized after _A_ (i.e., had been assigned a higher timestamp), has already written to that row _R_, and has already committed. This is a common error when you have too much contention in your workload. +The `RETRY_WRITE_TOO_OLD` error occurs when a transaction _A_ tries to write to a row _R_, but another transaction _B_ that was supposed to be serialized after _A_ (i.e., had been assigned a higher timestamp), has already written to that row _R_, and has already committed. Under `SERIALIZABLE` isolation, this is a common error when you have too much contention in your workload. **Action:** +Under [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) isolation: + 1. Retry transaction _A_ as described in [client-side retry handling](#client-side-retry-handling). 1. Adjust your application logic as described in [minimize transaction retry errors](#minimize-transaction-retry-errors). In particular, try to: 1. Send all of the statements in your transaction in a [single batch]({% link {{ page.version.version }}/transactions.md %}#batched-statements). 1. Use [`SELECT FOR UPDATE`]({% link {{ page.version.version }}/select-for-update.md %}) to aggressively lock rows that will later be updated in the transaction. -See [Minimize transaction retry errors](#minimize-transaction-retry-errors) for the full list of recommended remediations. +Under [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) isolation: + +1. `RETRY_WRITE_TOO_OLD` errors are only returned in rare cases that can be avoided by adjusting the [result buffer size](#result-buffer-size). ### RETRY_SERIALIZABLE @@ -135,7 +142,11 @@ HINT: See: https://www.cockroachlabs.com/docs/v23.2/transaction-retry-error-refe **Description:** -At a high level, the `RETRY_SERIALIZABLE` error occurs when a transaction's timestamp is moved forward, but the transaction performed reads at the old timestamp that are no longer valid at its new timestamp. More specifically, the `RETRY_SERIALIZABLE` error occurs in the following three cases: +{{site.data.alerts.callout_success}} +[`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) transactions do **not** produce `RETRY_SERIALIZABLE` errors. +{{site.data.alerts.end}} + +At a high level, the `RETRY_SERIALIZABLE` error occurs when a transaction's timestamp is moved forward, but the transaction performed reads at the old timestamp that are no longer valid at its new timestamp. More specifically, the `RETRY_SERIALIZABLE` error occurs in the following three cases under [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) isolation: 1. When a transaction _A_ has its timestamp moved forward (also known as _A_ being "pushed") as CockroachDB attempts to find a serializable transaction ordering. Specifically, transaction _A_ tried to write a key that transaction _B_ had already read, and _B_ was supposed to be serialized after _A_ (i.e., _B_ had a higher timestamp than _A_). CockroachDB will try to serialize _A_ after _B_ by changing _A_'s timestamp, but it cannot do that when another transaction has subsequently written to some of the keys that _A_ has read and returned to the client. When that happens, the `RETRY_SERIALIZATION` error is signalled. 
For more information about how timestamp pushes work in our transaction model, see the [architecture docs on the transaction layer's timestamp cache]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#timestamp-cache). @@ -146,21 +157,21 @@ At a high level, the `RETRY_SERIALIZABLE` error occurs when a transaction's time *Failed preemptive refresh* -In the three above cases, CockroachDB will try to validate whether the read-set of the transaction that had its timestamp (`timestamp1`) pushed is still valid at the new timestamp (`timestamp3`) at commit time. This mechanism is called "performing a [read refresh]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#read-refreshing)". If the read-set is still valid, the transaction can commit. If it is not valid, the transaction will get a `RETRY_SERIALIZABLE - failed preemptive refresh` error. The refresh can fail for two reasons: +In the three preceding cases, CockroachDB will try to validate whether the read-set of the transaction that had its timestamp (`timestamp1`) pushed is still valid at the new timestamp (`timestamp3`) at commit time. This mechanism is called "performing a [read refresh]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#read-refreshing)". If the read-set is still valid, the transaction can commit. If it is not valid, the transaction will get a `RETRY_SERIALIZABLE - failed preemptive refresh` error. The refresh can fail for two reasons: 1. There is a committed value on a key that was read by the transaction at `timestamp2` (where `timestamp2` occurs between `timestamp1` and `timestamp3`). The error message will contain `due to encountered recently written committed value`. CockroachDB does not have any information about which conflicting transaction wrote to this key. 1. There is an intent on a key that was read by the transaction at `timestamp2` (where `timestamp2` occurs between `timestamp1` and `timestamp3`). The error message will contain `due to conflicting locks`. CockroachDB does have information about the conflicting transaction to which the intent belongs. The information about the [conflicting transaction]({% link {{ page.version.version }}/ui-insights-page.md %}#serialization-conflict-due-to-transaction-contention) can be seen on the [DB Console Insights page]({% link {{ page.version.version }}/ui-insights-page.md %}). **Action:** +Under [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) isolation: + 1. Retry transaction _A_ as described in [client-side retry handling](#client-side-retry-handling). 1. Adjust your application logic as described in [minimize transaction retry errors](#minimize-transaction-retry-errors). In particular, try to: 1. Send all of the statements in your transaction in a [single batch]({% link {{ page.version.version }}/transactions.md %}#batched-statements). 1. Use historical reads with [`SELECT ... AS OF SYSTEM TIME`]({% link {{ page.version.version }}/as-of-system-time.md %}). 1. Use [`SELECT FOR UPDATE`]({% link {{ page.version.version }}/select-for-update.md %}) to aggressively lock rows for the keys that were read and could not be refreshed. -See [Minimize transaction retry errors](#minimize-transaction-retry-errors) for the full list of recommended remediations. - ### RETRY_ASYNC_WRITE_FAILURE ``` @@ -178,7 +189,7 @@ The `RETRY_ASYNC_WRITE_FAILURE` error occurs when some kind of problem with your 1. 
Retry the transaction as described in [client-side retry handling](#client-side-retry-handling). This is worth doing because the problem with the cluster is likely to be transient. 1. Investigate the problems with your cluster. For cluster troubleshooting information, see [Troubleshoot Cluster Setup]({% link {{ page.version.version }}/cluster-setup-troubleshooting.md %}). -See [Minimize transaction retry errors](#minimize-transaction-retry-errors) for the full list of recommended remediations. +See [Minimize transaction retry errors](#minimize-transaction-retry-errors) for the full list of recommendations. ### ReadWithinUncertaintyIntervalError @@ -202,7 +213,7 @@ This behavior is non-deterministic: it depends on which node is the [leaseholder **Action:** -The solution is to do one of the following: +Under [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) isolation: 1. Be prepared to retry on uncertainty (and other) errors, as described in [client-side retry handling](#client-side-retry-handling). 1. Adjust your application logic as described in [minimize transaction retry errors](#minimize-transaction-retry-errors). In particular, try to: @@ -210,12 +221,14 @@ The solution is to do one of the following: 1. Use historical reads with [`SELECT ... AS OF SYSTEM TIME`]({% link {{ page.version.version }}/as-of-system-time.md %}). 1. If you [trust your clocks]({% link {{ page.version.version }}/operational-faqs.md %}#what-happens-when-node-clocks-are-not-properly-synchronized), you can try lowering the [`--max-offset` option to `cockroach start`]({% link {{ page.version.version }}/cockroach-start.md %}#flags), which provides an upper limit on how long a transaction can continue to restart due to uncertainty. +Under [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) isolation: + +1. `ReadWithinUncertaintyIntervalError` errors are only returned in rare cases that can be avoided by adjusting the [result buffer size](#result-buffer-size). + {{site.data.alerts.callout_info}} Uncertainty errors are a sign of transaction conflict. For more information about transaction conflicts, see [Transaction conflicts]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#transaction-conflicts). {{site.data.alerts.end}} -See [Minimize transaction retry errors](#minimize-transaction-retry-errors) for the full list of recommended remediations. - ### RETRY_COMMIT_DEADLINE_EXCEEDED ``` @@ -249,7 +262,7 @@ This error occurs in the cases described below. If you increase the `kv.closed_timestamp.target_duration` setting, it means that you are increasing the amount of time by which the data available in [Follower Reads]({% link {{ page.version.version }}/follower-reads.md %}) and [CDC changefeeds]({% link {{ page.version.version }}/change-data-capture-overview.md %}) lags behind the current state of the cluster. In other words, there is a trade-off here: if you absolutely must execute long-running transactions that execute concurrently with other transactions that are writing to the same data, you may have to settle for longer delays on Follower Reads and/or CDC to avoid frequent serialization errors. The anomaly that would be exhibited if these transactions were not retried is called [write skew](https://www.cockroachlabs.com/blog/what-write-skew-looks-like/). {{site.data.alerts.end}} -See [Minimize transaction retry errors](#minimize-transaction-retry-errors) for the full list of recommended remediations. 
+See [Minimize transaction retry errors](#minimize-transaction-retry-errors) for the full list of recommendations. ### ABORT_REASON_ABORTED_RECORD_FOUND @@ -282,7 +295,7 @@ If you are using [high- or low-priority transactions]({% link {{ page.version.ve 1. Retry the transaction as described in [client-side retry handling](#client-side-retry-handling) 1. Adjust your application logic as described in [minimize transaction retry errors](#minimize-transaction-retry-errors). -See [Minimize transaction retry errors](#minimize-transaction-retry-errors) for the full list of recommended remediations. +See [Minimize transaction retry errors](#minimize-transaction-retry-errors) for the full list of recommendations. ### ABORT_REASON_CLIENT_REJECT @@ -296,7 +309,7 @@ TransactionRetryWithProtoRefreshError:TransactionAbortedError(ABORT_REASON_CLIEN The `ABORT_REASON_CLIENT_REJECT` error is caused by the same conditions as the [`ABORT_REASON_ABORTED_RECORD_FOUND`](#abort_reason_aborted_record_found), and requires the same actions. The errors are fundamentally the same, except that they are discovered at different points in the process. -See [Minimize transaction retry errors](#minimize-transaction-retry-errors) for the full list of recommended remediations. +See [Minimize transaction retry errors](#minimize-transaction-retry-errors) for the full list of recommendations. ### ABORT_REASON_PUSHER_ABORTED @@ -310,7 +323,7 @@ TransactionRetryWithProtoRefreshError:TransactionAbortedError(ABORT_REASON_PUSHE The `ABORT_REASON_PUSHER_ABORTED` error is caused by the same conditions as the [`ABORT_REASON_ABORTED_RECORD_FOUND`](#abort_reason_aborted_record_found), and requires the same actions. The errors are fundamentally the same, except that they are discovered at different points in the process. -See [Minimize transaction retry errors](#minimize-transaction-retry-errors) for the full list of recommended remediations. +See [Minimize transaction retry errors](#minimize-transaction-retry-errors) for the full list of recommendations. ### ABORT_REASON_ABORT_SPAN @@ -324,7 +337,7 @@ TransactionRetryWithProtoRefreshError:TransactionAbortedError(ABORT_REASON_ABORT The `ABORT_REASON_ABORT_SPAN` error is caused by the same conditions as the [`ABORT_REASON_ABORTED_RECORD_FOUND`](#abort_reason_aborted_record_found), and requires the same actions. The errors are fundamentally the same, except that they are discovered at different points in the process. -See [Minimize transaction retry errors](#minimize-transaction-retry-errors) for the full list of recommended remediations. +See [Minimize transaction retry errors](#minimize-transaction-retry-errors) for the full list of recommendations. ### ABORT_REASON_NEW_LEASE_PREVENTS_TXN diff --git a/src/current/v24.1/transactions.md b/src/current/v24.1/transactions.md index 5d382b6e705..81dd2e35914 100644 --- a/src/current/v24.1/transactions.md +++ b/src/current/v24.1/transactions.md @@ -57,7 +57,7 @@ To handle errors in transactions, you should check for the following types of se Type | Description -----|------------ -**Transaction Retry Errors** | Errors with the code `40001` and string `restart transaction`, which indicate that a transaction failed because it could not be placed in a [serializable ordering]({% link {{ page.version.version }}/demo-serializable.md %}) of transactions by CockroachDB. 
For details on transaction retry errors and how to resolve them, see the [Transaction Retry Error Reference]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#actions-to-take). +**Transaction Retry Errors** | Errors with the code `40001` and string `restart transaction`, which indicate that a transaction failed because it could not be placed in a [serializable ordering]({% link {{ page.version.version }}/demo-serializable.md %}) of transactions by CockroachDB. This occurs under [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) isolation and only rarely under [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) isolation. For details on transaction retry errors and how to resolve them, see the [Transaction Retry Error Reference]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#actions-to-take). **Ambiguous Errors** | Errors with the code `40003` which indicate that the state of the transaction is ambiguous, i.e., you cannot assume it either committed or failed. How you handle these errors depends on how you want to resolve the ambiguity. For information about how to handle ambiguous errors, see [here]({% link {{ page.version.version }}/common-errors.md %}#result-is-ambiguous). **SQL Errors** | All other errors, which indicate that a statement in the transaction failed. For example, violating the `UNIQUE` constraint generates a `23505` error. After encountering these errors, you can either issue a [`COMMIT`]({% link {{ page.version.version }}/commit-transaction.md %}) or [`ROLLBACK`]({% link {{ page.version.version }}/rollback-transaction.md %}) to abort the transaction and revert the database to its state before the transaction began.

If you want to attempt the same set of statements again, you must begin a completely new transaction. @@ -68,7 +68,7 @@ Transactions may require retries due to [contention]({% link {{ page.version.ver There are two cases in which transaction retries can occur: - [Automatic retries](#automatic-retries), which CockroachDB silently processes for you. -- [Client-side retries]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#client-side-retry-handling), which your application must handle after receiving a [*transaction retry error*]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}). +- [Client-side retries]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#client-side-retry-handling), which your application must handle after receiving a [*transaction retry error*]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) under `SERIALIZABLE` isolation. Client-side retry handling is not necessary for [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) transactions. To reduce the need for transaction retries, see [Reduce transaction contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#reduce-transaction-contention). diff --git a/src/current/v24.1/ui-sql-dashboard.md b/src/current/v24.1/ui-sql-dashboard.md index da0f6f30fd7..8f258c9b342 100644 --- a/src/current/v24.1/ui-sql-dashboard.md +++ b/src/current/v24.1/ui-sql-dashboard.md @@ -33,6 +33,14 @@ The **SQL Connection Rate** is an average of the number of connection attempts p - In the cluster view, the graph shows the rate of SQL connection attempts to all nodes, with lines for each node. +## Upgrades of SQL Transaction Isolation Level + +- In the node view, the graph shows the total number of times a SQL transaction was upgraded to a stronger isolation level on the selected node. + +- In the cluster view, the graph shows the total number of times a SQL transaction was upgraded to a stronger isolation level across all nodes. + +If this metric is non-zero, then transactions at weaker isolation levels (such as [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %})) are being upgraded to [`SERIALIZABLE`]({% link {{ page.version.version }}/demo-serializable.md %}) instead. To ensure that `READ COMMITTED` transactions run as `READ COMMITTED`, see [Enable `READ COMMITTED` isolation]({% link {{ page.version.version }}/read-committed.md %}#enable-read-committed-isolation). + ## Open SQL Transactions - In the node view, the graph shows the total number of open SQL transactions on the node.
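The serialization-conflict check described in the performance-recipes.md change above can be run directly from a SQL shell. The following is a minimal sketch of that query; it selects all columns rather than assuming a particular schema for `crdb_internal.transaction_contention_events`:

~~~ sql
-- List recorded serialization conflicts, if any. Useful when investigating
-- 40001 errors emitted under SERIALIZABLE isolation.
SELECT *
FROM crdb_internal.transaction_contention_events
WHERE contention_type = 'SERIALIZATION_CONFLICT'
LIMIT 10;
~~~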
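The read-committed.md change above notes that `READ COMMITTED` transactions run as `SERIALIZABLE` when the cluster setting is `false`. The following is a minimal sketch of verifying the setting and running a single transaction at `READ COMMITTED`, using the cluster setting and isolation syntax referenced in the diff:

~~~ sql
-- Confirm that READ COMMITTED transactions are enabled (defaults to true).
SHOW CLUSTER SETTING sql.txn.read_committed_isolation.enabled;

-- If the setting is false, re-enable it so READ COMMITTED transactions are
-- not upgraded to SERIALIZABLE.
SET CLUSTER SETTING sql.txn.read_committed_isolation.enabled = true;

-- Run a transaction at READ COMMITTED and confirm its isolation level.
BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
SHOW transaction_isolation;
COMMIT;
~~~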
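The lock-promotion behavior added in the select-for-update.md change above can be illustrated with a short transaction. This is a sketch only; the `accounts` table and its columns are hypothetical:

~~~ sql
BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Shared lock: blocks concurrent writes to the row, but not other
-- SELECT ... FOR SHARE reads.
SELECT balance FROM accounts WHERE id = 1 FOR SHARE;

-- An exclusive lock on the same row "promotes" the shared lock. If another
-- open transaction also holds a shared lock on this row, this statement
-- waits; if both transactions attempt the promotion, one aborts with a
-- 40001 error and the other proceeds.
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;

UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;
~~~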