- After importing into an existing table, [constraints]({% link {{ page.version.version }}/constraints.md %}) will be unvalidated and need to be [re-validated]({% link {{ page.version.version }}/alter-table.md %}#validate-constraint).
- Imported rows must not conflict with existing rows in the table or any unique secondary indexes.
- `IMPORT INTO` works for only a single existing table.
- When `IMPORT INTO` uses distributed merge, it stores intermediate SST files on participating SQL instances' local storage. If one of those SQL instances becomes unavailable during the merge phase, the job waits for that SQL instance to become available again. If the SQL instance does not become available again, the job fails with a permanent error. [#167491](https://github.com/cockroachdb/cockroach/issues/167491)
- `IMPORT INTO` can sometimes fail with a "context canceled" error, or can restart itself many times without ever finishing. This is usually caused by high disk contention, and can be mitigated by setting the `kv.bulk_io_write.max_rate` [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}) to a value below your maximum disk write speed. For example, to set it to 10MB/s, execute:
{% include_cached copy-clipboard.html %}
~~~ sql
SET CLUSTER SETTING kv.bulk_io_write.max_rate = '10MB';
~~~
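
After the import completes, constraints on the target table must be re-validated before they are enforced for existing rows. A minimal sketch, using a hypothetical `customers` table with a foreign-key constraint named `fk_orders`:

{% include_cached copy-clipboard.html %}
~~~ sql
ALTER TABLE customers VALIDATE CONSTRAINT fk_orders;
~~~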