Add known limitation: bulkmerge: distributed merge fails when a participating SQL instance goes down#23212
Conversation
…when a participating SQL instance goes down

Fixes DOC-16673
Files changed:
- After importing into an existing table, [constraints]({% link {{ page.version.version }}/constraints.md %}) will be un-validated and need to be [re-validated]({% link {{ page.version.version }}/alter-table.md %}#validate-constraint).
- Imported rows must not conflict with existing rows in the table or any unique secondary indexes.
- `IMPORT INTO` works for only a single existing table.
- When `IMPORT INTO` uses distributed merge, it stores intermediate SST files on participating SQL instances' local storage. If one of those SQL instances becomes unavailable during the merge phase, the job fails with a permanent error and must be restarted after that SQL instance becomes available again.
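The first two limitations above can be illustrated with a minimal sketch. This is not from the PR itself; the table name, columns, constraint name, and storage URL are all hypothetical:

```sql
-- Hypothetical example: import CSV data into a single existing table.
IMPORT INTO users (id, name, email)
    CSV DATA ('s3://bucket/users.csv?AUTH=implicit');

-- After the import, existing constraints are un-validated;
-- re-validate them explicitly (constraint name is illustrative):
ALTER TABLE users VALIDATE CONSTRAINT check_email;
```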
I believe the error is only permanent if the instance doesn't become available again, otherwise the job waits for the instance to come back up.
thanks @mw5h - updated to
> When `IMPORT INTO` uses distributed merge, it stores intermediate SST files on participating SQL instances' local storage. If one of those SQL instances becomes unavailable during the merge phase, the job waits for that SQL instance to become available again. If the SQL instance does not become available again, the job fails with a permanent error.
PTAL!
ok, merging this so we have it in place for Monday's release. If the wording update still needs tweaks, happy to do more going forward!