
Hosted Branching CLI rejects schema_migrations state that local CLI 2.95.2 accepts after migration repair --status applied #5127

@melihzafer

Description


Summary

There is a behavioural divergence between local supabase db push --linked (CLI 2.95.2) and the hosted Supabase Branching CLI run by the Supabase GitHub App (slug supabase, client ID Iv1.b91a6d8eaa272168) on push to a production branch.

After fully canonicalizing supabase_migrations.schema_migrations on the remote project using supabase migration repair --linked --status applied <version> for every local migration version:

  • supabase migration list --linked shows every row with Local + Remote matched.
  • supabase db push --linked --dry-run prints Remote database is up to date.
  • The schema_migrations table holds 136 rows, each with version, name, and a non-null statements array (5–36 statements per row depending on migration).

…the hosted check still fails with:

ERROR: duplicate key value violates unique constraint "schema_migrations_pkey" (SQLSTATE 23505)
Key (version)=(001) already exists.
At statement: 0
INSERT INTO supabase_migrations.schema_migrations(version, name, statements) VALUES($1, $2, $3)

i.e. the hosted CLI is attempting to INSERT a tracking row for migration version 001 despite the canonical row already existing — it is not skipping already-applied migrations the way the local CLI does.
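For comparison, an idempotent form of the failing statement (hypothetical — we don't know what either CLI actually executes) would simply skip rows that migration repair already wrote:

```sql
-- Hypothetical idempotent variant of the failing INSERT: an ON CONFLICT
-- clause on the primary key (version) skips already-tracked migrations
-- instead of aborting with SQLSTATE 23505.
INSERT INTO supabase_migrations.schema_migrations(version, name, statements)
VALUES ($1, $2, $3)
ON CONFLICT (version) DO NOTHING;
```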

Reproduction

  1. Repo has 136 migration files in supabase/migrations/:
    • 81 in legacy format: 001_*.sql through 081_*.sql
    • 55 in canonical timestamp format: 20260301000000_*.sql through 20260364000000_*.sql
  2. On the remote project, run supabase migration repair --linked --status applied <every version>.
  3. Confirm locally:
    $ supabase migration list --linked
    # All rows show Local | Remote populated.
    $ supabase db push --linked --dry-run
    Remote database is up to date.
  4. Push any commit to the production branch on GitHub.
  5. The Supabase GitHub App's "Supabase Preview" check fails with the INSERT collision above.
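Running step 2's repair for all 136 versions by hand is tedious; it can be scripted by extracting each file's version prefix (the part before the first underscore). A sketch that prints the commands rather than running them — pipe it to sh to execute:

```shell
# Print a repair command for every migration file's version prefix.
# Assumes the standard supabase/migrations layout from this repo.
for f in supabase/migrations/*.sql; do
  [ -e "$f" ] || continue            # skip if the glob matched nothing
  v="${f##*/}"                       # strip the directory
  v="${v%%_*}"                       # keep the version prefix before the first "_"
  echo supabase migration repair --linked --status applied "$v"
done
```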

Expected

Hosted CLI consults schema_migrations and skips already-applied versions exactly like local db push does. After migration repair --status applied, no further apply work should be needed.

Actual

Hosted CLI appears to either (a) bypass the "is this version already applied?" check, or (b) interpret the row's statements column or some other field differently than the local CLI. It then attempts a plain INSERT (no ON CONFLICT) for version = '001' and aborts on the duplicate-key error.

Evidence the local state is canonical

SELECT version, name, array_length(statements, 1) AS stmt_count
FROM supabase_migrations.schema_migrations
WHERE version = '001';
-- version: '001'
-- name:    'initial_schema'
-- stmt_count: 21

All 136 rows have name populated and statements as a non-null text[] of parsed SQL.
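A quick way to confirm this — any row the query returns would be a candidate for the hosted CLI treating it as "not applied":

```sql
-- Rows missing name or statements; returns zero rows on our project.
SELECT version
FROM supabase_migrations.schema_migrations
WHERE name IS NULL OR statements IS NULL;
```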

$ supabase migration list --linked
   Local          | Remote         | Time (UTC)
  ----------------|----------------|---------------------
   001            | 001            | 001
   …
   20260363000000 | 20260363000000 | 20260363000000
   20260364000000 | 20260364000000 | 20260364000000

$ supabase db push --linked --dry-run
DRY RUN: migrations will *not* be pushed to the database.
Remote database is up to date.

What we've tried

  1. Manual SQL canonicalization. DELETE FROM schema_migrations followed by INSERT (version, name, statements) VALUES ('001', 'initial_schema', NULL), … for every local file. Hosted CLI failed with the same INSERT collision. (We suspected statements=NULL triggered re-apply.)
  2. supabase migration repair --linked --status applied <version> for all 136 versions. This populated statements correctly. Local CLI reports in sync. Hosted CLI still fails with the same INSERT collision. ← current state
  3. Empty commit re-push to retrigger the check. No change.
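For reference, attempt 1's manual rebuild was of this shape (statements left NULL, which we now suspect is treated as "not applied"):

```sql
-- Attempt 1 (abandoned): hand-rebuilt tracking rows with statements = NULL.
DELETE FROM supabase_migrations.schema_migrations;
INSERT INTO supabase_migrations.schema_migrations (version, name, statements)
VALUES ('001', 'initial_schema', NULL);
-- ...one INSERT per local migration file.
```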

Prior to canonicalization, the same project failed with Remote migration versions not found in local migrations directory, a different error caused by the tracking table's legacy mixed-format rows (suffix-bearing version='019_rls_policies' instead of canonical version='019', name='rls_policies'). The repair fixed that error but uncovered this second one.
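For anyone hitting that first error: instead of per-version migration repair, the legacy rows could also be normalized directly in SQL by splitting version on the first underscore. An untested sketch (we used repair instead):

```sql
-- Split a legacy row like version='019_rls_policies' into
-- version='019', name='rls_policies'. In Postgres, SET expressions
-- evaluate against the row's old values, so both columns can reference
-- the pre-update version.
UPDATE supabase_migrations.schema_migrations
SET version = split_part(version, '_', 1),
    name    = substr(version, length(split_part(version, '_', 1)) + 2)
WHERE version LIKE '%\_%' ESCAPE '\';
```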

Ask

  1. Confirm the exact command sequence the GitHub App runs on push to a production branch (so we can reproduce locally).
  2. Identify what about our schema_migrations content the hosted CLI considers "needs re-apply" despite migration repair having marked everything applied.
  3. Provide a documented procedure for "I have manually rebuilt schema_migrations and need the hosted CLI to accept it as ground truth" — currently the local CLI honours migration repair while the hosted one does not.

If this is a known issue, a workaround we can apply repo-side (config flag, file content marker, etc.) would also unblock us.

Environment

  • Local OS: Windows 11 Pro 26200, bash via Git Bash
  • Node 20.x, pnpm
  • Supabase CLI: npx supabase@2.95.2
  • Not using Supabase Preview Branches (single-DB-per-project); failing check fires on direct production-branch push.
