Incremental migrations #891
Conversation
Force-pushed from c4e28fa to 1588cdc
If I understand the migration instructions correctly, we'd need to run the migrations manually rather than automatically? Would that mean disabling the k8s post-upgrade hook temporarily?
Suggestion: instead of starting SpiceDB with different flags, should that be different SpiceDB versions? I.e. something like:
- Upgrade to v1.14.0
  - Migrations run by the post-upgrade hook (migrations 9 and 10, but without the backfilling in 10)
  - v1.14.0 always does "write both read old"
  - Manually run `spicedb run-job backfill-xid` (`run-job` is something I made up; effectively a migration to be run manually)
- Upgrade to v1.15.0
  - Migration 11 run by the post-upgrade hook
- Upgrade to v1.16.0
  - v1.16.0 always does "write both read new"
  - Check that the v1.15.0 migrations have been run; refuse to start otherwise
- Upgrade to v1.17.0
  - Run migrations 12/13
  - v1.17.0 always does "write new read new" ("old" doesn't exist anymore)

Maybe it can be consolidated into 3 phases rather than 4; I haven't thought it through in depth.
```go
	"github.com/rs/zerolog/log"
)

const batchSize = 1000
```
That sounds very low :) There's a trade-off between execution speed and long-held locks on a large table, though; I don't think there's one right answer.
I'm just going to make it a command-line parameter to the `migrate` command, defaulting to 1000.
```sql
SET xid = id::text::xid8, snapshot = CONCAT(id, ':', id, ':')::pg_snapshot
WHERE id IN (
	SELECT id FROM relation_tuple_transaction
	WHERE snapshot IS NULL
```
This WHERE clause (and the 2 other ones) might get very slow and resource-intensive, as there's no index on `snapshot`.

You could create an index on `snapshot`/`created_xid` for the purpose of the migration, then clean it up at the end of the migration (or in a follow-up migration)?
```go
var addXIDIndices = []string{
	// Replace the indices that are inherent from having a primary key constraint
	"CREATE UNIQUE INDEX CONCURRENTLY ix_rttx_oldpk ON relation_tuple_transaction (id)",
```
If I'm understanding correctly, this migration isn't run in a transaction. There will be issues if this migration is interrupted while creating indices: it won't be possible to re-run it, as PG will complain about the indices already existing.

Could just add an `IF NOT EXISTS` here.
Force-pushed from beddbac to 9aff3ba
@williamdclt Updated, PTAL
Force-pushed from 9aff3ba to ca8c50a
```go
// MigrationVariable contains constants that can be used as context keys that might
// be relevant in a number of different migration scenarios.
type MigrationVariable int
```
Do we want to make these ints vs. strings? I'm concerned there might be accidental overlap with a key we don't inject.
Not possible, the equality operator the context uses considers the type:
https://go.dev/play/p/16BfmcC0d9X
LGTM
Migration instructions:
- Run `spicedb migrate add-xid-columns`
- Restart all SpiceDB instances with `--datastore-migration-phase write-both-read-old`
- Run `spicedb migrate add-xid-constraints` (performs a backfill and may take a long time)
- Restart all SpiceDB instances with `--datastore-migration-phase write-both-read-new`
- Run `spicedb migrate drop-id-constraints`
- Restart all SpiceDB instances without the `--datastore-migration-phase` flag
- Run `spicedb migrate head`