read committed txns don't need client-side retry handling
taroface committed May 6, 2024
1 parent 34cf277 commit b446e83
Showing 7 changed files with 50 additions and 32 deletions.
Original file line number Diff line number Diff line change
@@ -1,3 +1,5 @@
 - [Send statements in transactions as a single batch]({% link {{ page.version.version }}/transactions.md %}#batched-statements). Batching allows CockroachDB to [automatically retry]({% link {{ page.version.version }}/transactions.md %}#automatic-retries) a transaction when [previous reads are invalidated]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#read-refreshing) at a [pushed timestamp]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#timestamp-cache). When a multi-statement transaction is not batched, and takes more than a single round trip, CockroachDB cannot automatically retry the transaction. For an example showing how to break up large transactions in an application, see [Break up large transactions into smaller units of work](build-a-python-app-with-cockroachdb-sqlalchemy.html#break-up-large-transactions-into-smaller-units-of-work).
 
+<a id="result-buffer-size"></a>
+
 - Limit the size of the result sets of your transactions to under 16KB, so that CockroachDB is more likely to [automatically retry]({% link {{ page.version.version }}/transactions.md %}#automatic-retries) when [previous reads are invalidated]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#read-refreshing) at a [pushed timestamp]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#timestamp-cache). When a transaction returns a result set over 16KB, even if that transaction has been sent as a single batch, CockroachDB cannot automatically retry the transaction. You can change the results buffer size for all new sessions using the `sql.defaults.results_buffer.size` [cluster setting](cluster-settings.html), or for a specific session using the `results_buffer_size` [session variable](set-vars.html).
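Taken together, the two bullets above give the conditions under which CockroachDB can retry a transaction server-side. A purely illustrative sketch of that eligibility check (`can_auto_retry` and the constant are hypothetical names, not part of any CockroachDB client library):

```python
# Illustrative only: a transaction remains eligible for CockroachDB's
# automatic (server-side) retry while it is sent as a single batch and
# its result set still fits in the results buffer.
DEFAULT_RESULTS_BUFFER_BYTES = 16 * 1024  # default results_buffer_size

def can_auto_retry(sent_as_single_batch: bool, result_bytes: int,
                   buffer_bytes: int = DEFAULT_RESULTS_BUFFER_BYTES) -> bool:
    """Return True if CockroachDB could retry this transaction internally."""
    return sent_as_single_batch and result_bytes <= buffer_bytes
```

Raising `buffer_bytes` (via the cluster setting or session variable above) widens the window in which automatic retries are possible.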
@@ -1,5 +1,5 @@
 In most cases, the correct actions to take when encountering transaction retry errors are:
 
-1. Update your application to support [client-side retry handling]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#client-side-retry-handling) when transaction retry errors are encountered. Follow the guidance for the [specific error type]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#transaction-retry-error-reference).
+1. Under `SERIALIZABLE` isolation, update your application to support [client-side retry handling]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#client-side-retry-handling) when transaction retry errors are encountered. Follow the guidance for the [specific error type]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#transaction-retry-error-reference).
 
 1. Take steps to [minimize transaction retry errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#minimize-transaction-retry-errors) in the first place. This means reducing transaction contention overall, and increasing the likelihood that CockroachDB can [automatically retry]({% link {{ page.version.version }}/transactions.md %}#automatic-retries) a failed transaction.
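Step 1 above amounts to wrapping each transaction in an application-level retry loop. A minimal sketch, assuming a hypothetical `SerializationError` that stands in for whatever exception your driver raises with SQLSTATE `40001` (the backoff parameters are illustrative):

```python
import random
import time

class SerializationError(Exception):
    """Stand-in for a driver exception carrying SQLSTATE 40001."""
    pgcode = "40001"

def run_with_retries(txn_func, max_retries=5):
    """Run txn_func, retrying serialization failures with backoff."""
    for attempt in range(1, max_retries + 1):
        try:
            return txn_func()
        except SerializationError:
            if attempt == max_retries:
                raise  # give up: surface the retry error to the caller
            # Exponential backoff with jitter before the next attempt.
            time.sleep(min(0.1 * (2 ** attempt) * random.random(), 2.0))
```

With a real driver, `txn_func` should open a fresh transaction on each call, so every retry starts from a clean transaction rather than an aborted one.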
2 changes: 1 addition & 1 deletion src/current/v24.1/performance-best-practices-overview.md
@@ -346,7 +346,7 @@ Locking reads issued with [`SELECT ... FOR UPDATE`]({% link {{ page.version.vers
 
 - [Delays in query completion]({% link {{ page.version.version }}/query-behavior-troubleshooting.md %}#hanging-or-stuck-queries). This occurs when multiple transactions are trying to write to the same "locked" data at the same time, making a transaction unable to complete. This is also known as *lock contention*.
 - [Transaction retries]({% link {{ page.version.version }}/transactions.md %}#automatic-retries) performed automatically by CockroachDB. This occurs if a transaction cannot be placed into a [serializable ordering]({% link {{ page.version.version }}/demo-serializable.md %}) among all of the currently-executing transactions. This is also called a *serializability conflict*.
-- [Transaction retry errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}), which are emitted to your client when an automatic retry is not possible or fails. Your application must address transaction retry errors with [client-side retry handling]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#client-side-retry-handling).
+- [Transaction retry errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}), which are emitted to your client when an automatic retry is not possible or fails. Under `SERIALIZABLE` isolation, your application must address transaction retry errors with [client-side retry handling]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#client-side-retry-handling).
 - [Cluster hot spots](#hot-spots).
 
 To mitigate these effects, [reduce the causes of transaction contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#reduce-transaction-contention) and [reduce hot spots](#reduce-hot-spots). For further background on transaction contention, see [What is Database Contention, and Why Should You Care?](https://www.cockroachlabs.com/blog/what-is-database-contention/).
9 changes: 4 additions & 5 deletions src/current/v24.1/read-committed.md
@@ -152,17 +152,16 @@ Starting a transaction as `READ COMMITTED` does not affect the [default isolatio
 
 - You can mitigate concurrency anomalies by issuing [locking reads](#locking-reads) in `READ COMMITTED` transactions. These statements can block concurrent transactions that are issuing writes or other locking reads on the same rows.
 
-{% comment %}
-- When using `READ COMMITTED` isolation, you do not need to implement [client-side retries]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#client-side-retry-handling) to handle `40001` errors. This is because under [transaction contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention), `READ COMMITTED` transactions will **not** return [serialization errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) to applications.
-{% endcomment %}
+- When using `READ COMMITTED` isolation, you do **not** need to implement [client-side retries]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#client-side-retry-handling) to handle [serialization errors]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}) under [transaction contention]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention). `READ COMMITTED` transactions never return [`RETRY_SERIALIZABLE`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#retry_serializable) errors, and will only return `40001` errors in limited cases, as described in the following points.
 
-`READ COMMITTED` transactions can abort in the following scenarios:
+<a id="read-committed-abort"></a>
+`READ COMMITTED` transactions can abort in certain scenarios:
 
 - Transactions at all isolation levels are subject to [*lock contention*]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#transaction-contention), where a transaction attempts to lock a row that is already locked by a [write]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#write-intents) or [locking read](#locking-reads). In such cases, the later transaction is blocked until the earlier transaction commits or rolls back, thus releasing its lock on the row. Lock contention that produces a *deadlock* between two transactions will result in a transaction abort and a `40001` error ([`ABORT_REASON_ABORTED_RECORD_FOUND`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#abort_reason_aborted_record_found) or [`ABORT_REASON_PUSHER_ABORTED`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#abort_reason_pusher_aborted)) returned to the client.
 
 - [Constraint]({% link {{ page.version.version }}/constraints.md %}) violations will abort transactions at all isolation levels.
 
-- In rare cases under `READ COMMITTED` isolation, a [`RETRY_WRITE_TOO_OLD`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#retry_write_too_old) or [`ReadWithinUncertaintyIntervalError`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#readwithinuncertaintyintervalerror) error will be returned to the client if a statement has already begun streaming a partial result set back to the client and cannot retry transparently. By default, the result set is buffered up to 16 KiB before overflowing and being streamed to the client. You can configure the result buffer size using the [`sql.defaults.results_buffer.size`]({% link {{ page.version.version }}/cluster-settings.md %}#setting-sql-defaults-results-buffer-size) cluster setting or the [`results_buffer_size`]({% link {{ page.version.version }}/session-variables.md %}#results-buffer-size) session variable.
+- In rare cases under `READ COMMITTED` isolation, a [`RETRY_WRITE_TOO_OLD`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#retry_write_too_old) or [`ReadWithinUncertaintyIntervalError`]({% link {{ page.version.version }}/transaction-retry-error-reference.md %}#readwithinuncertaintyintervalerror) error can be returned to the client if a statement has already begun streaming a partial result set back to the client and cannot retry transparently. By default, the result set is buffered up to 16 KiB before overflowing and being streamed to the client. You can configure the result buffer size using the [`sql.defaults.results_buffer.size`]({% link {{ page.version.version }}/cluster-settings.md %}#setting-sql-defaults-results-buffer-size) cluster setting or the [`results_buffer_size`]({% link {{ page.version.version }}/session-variables.md %}#results-buffer-size) session variable.
 
 ### Concurrency anomalies
 
4 changes: 4 additions & 0 deletions src/current/v24.1/transaction-retry-error-example.md
@@ -9,6 +9,10 @@ When a [transaction]({% link {{ page.version.version }}/transactions.md %}) is u
 
 This page presents an [example of an application's transaction retry logic](#client-side-retry-handling-example), as well as a manner by which that logic can be [tested and verified](#test-transaction-retry-logic) against your application's needs.
 
+{{site.data.alerts.callout_info}}
+Client-side retry handling is **not** necessary under [`READ COMMITTED`]({% link {{ page.version.version }}/read-committed.md %}) isolation.
+{{site.data.alerts.end}}
+
 ## Client-side retry handling example
 
 The Python-like pseudocode below shows how to implement an application-level retry loop; it does not require your driver or ORM to implement [advanced retry handling logic]({% link {{ page.version.version }}/advanced-client-side-transaction-retries.md %}), so it can be used from any programming language or environment. In particular, your retry loop must: