diff --git a/src/current/_includes/v26.2/known-limitations/import-into-limitations.md b/src/current/_includes/v26.2/known-limitations/import-into-limitations.md
index 6a7cce6f727..6f2c04b65f1 100644
--- a/src/current/_includes/v26.2/known-limitations/import-into-limitations.md
+++ b/src/current/_includes/v26.2/known-limitations/import-into-limitations.md
@@ -4,8 +4,9 @@
 - After importing into an existing table, [constraints]({% link {{ page.version.version }}/constraints.md %}) will be un-validated and need to be [re-validated]({% link {{ page.version.version }}/alter-table.md %}#validate-constraint).
 - Imported rows must not conflict with existing rows in the table or any unique secondary indexes.
 - `IMPORT INTO` works for only a single existing table.
+- When `IMPORT INTO` uses distributed merge, it stores intermediate SST files on the local storage of participating SQL instances. If one of those SQL instances becomes unavailable during the merge phase, the job waits for it to become available again; if it does not, the job fails with a permanent error. [#167491](https://github.com/cockroachdb/cockroach/issues/167491)
 - `IMPORT INTO` can sometimes fail with a "context canceled" error, or can restart itself many times without ever finishing. If this is happening, it is likely due to a high amount of disk contention. This can be mitigated by setting the `kv.bulk_io_write.max_rate` [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}) to a value below your max disk write speed. For example, to set it to 10MB/s, execute:
     {% include_cached copy-clipboard.html %}
     ~~~ sql
     SET CLUSTER SETTING kv.bulk_io_write.max_rate = '10MB';
-    ~~~
\ No newline at end of file
+    ~~~
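
A note on the first limitation above: after the import finishes, each un-validated constraint must be re-validated explicitly with `ALTER TABLE ... VALIDATE CONSTRAINT`. A minimal sketch, using hypothetical table and constraint names (`orders`, `orders_customer_fkey`):

{% include_cached copy-clipboard.html %}
~~~ sql
-- Hypothetical names: re-validate a foreign-key constraint that
-- IMPORT INTO left in an un-validated state.
ALTER TABLE orders VALIDATE CONSTRAINT orders_customer_fkey;
~~~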
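
For the new distributed-merge limitation, one way to tell whether an import job is still waiting on an unavailable SQL instance or has already failed permanently is to inspect the job's status. This sketch assumes the standard `SHOW JOBS` filtering pattern:

{% include_cached copy-clipboard.html %}
~~~ sql
-- Surface the status and error (if any) of IMPORT jobs; a job stuck
-- in the merge phase shows as running until it fails permanently.
WITH x AS (SHOW JOBS) SELECT job_id, status, error FROM x WHERE job_type = 'IMPORT';
~~~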