
Hasura Migrate Error - cannot continue due to inconsistent metadata #5513

Closed
spencerpauly opened this issue Aug 3, 2020 · 4 comments · Fixed by #5548
Labels: c/cli (Related to CLI), p/high (candidate for being included in the upcoming sprint)

Comments

spencerpauly commented Aug 3, 2020

I'm trying to do a migration from data on Hasura Cloud to a DigitalOcean droplet.

I'm following the migration guide here exactly:
https://hasura.io/docs/1.0/graphql/manual/migrations/basics.html#step-3-initialize-the-migrations-as-per-your-current-state

When it comes time to execute hasura migrate, I run:

npx hasura migrate apply --envfile prod.env

with my prod.env file looking like:

HASURA_GRAPHQL_ENDPOINT='http://64.227.3.16'
HASURA_GRAPHQL_ENABLE_CONSOLE=true

and it gives me the following error...

FATA[0002] apply failed: [unexpected] cannot continue due to inconsistent metadata ($.args[2].args)
File: '1596433479181_newmigration/up.yaml'
...
Followed by my whole up.yaml file.

I've seen this issue posted repeatedly in the Discord channel over the past couple of weeks, but I don't see a GitHub issue open for it, so I thought I would post it here. I'll answer any questions you have. Thanks.

@spencerpauly (Author)

I just solved this for myself. I believe the issue came from having enum tables: when the migration and metadata files recreated those tables, they also had to be populated with data, or they would not be valid enum tables. The migration and metadata YAML files didn't actually insert any data into them, though, so creating them as enum tables failed and the whole migration failed with it.

My solution was simply to not use enum tables for the migration. I think this needs much better error messages, because it was a pain to debug in the CLI. I only noticed the underlying problem when I manually exported and imported the schemas and the console gave much better error messages.
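For anyone who wants to keep the enum tables, a rough sketch of what the up.yaml would need is below (the table name, columns and seed values are made up for illustration, not from my actual schema): the seed rows have to travel with the migration before the table can be flagged as an enum.

# Illustrative up.yaml fragment: create the enum table, seed it, track it,
# then mark it as an enum. Hasura requires an enum table to contain at
# least one row, so without the INSERT the last step fails and the whole
# migration is rolled back.
- type: run_sql
  args:
    sql: |
      CREATE TABLE user_status (
        value text PRIMARY KEY,
        comment text
      );
      INSERT INTO user_status (value, comment) VALUES
        ('active', 'user is active'),
        ('disabled', 'user is disabled');
- type: track_table
  args:
    schema: public
    name: user_status
- type: set_table_is_enum
  args:
    table:
      schema: public
      name: user_status
    is_enum: true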

@tirumaraiselvan (Contributor)

That's definitely not a good experience. We will fix the error message.

Also see #2817 for the underlying issue here.

@jflambert (Contributor)

Hi @tirumaraiselvan, I have the same issue overall, but with slightly different reproduction steps. Basically I'm using tables.yaml with v2 migrations, and whenever there's a slight problem, this output appears:

time="2020-08-04T16:45:17Z" level=fatal msg="failed to apply metadata: cannot apply metadata on the database: [unexpected] cannot continue due to inconsistent metadata ($.args[1].args)\r\noffending object:

And then I see the entire contents of cron_triggers, tables, functions, etc... kilometers of yaml that's definitely not helpful.

I think I'd just be satisfied with the error above, if fetching the actual error is too hard.

cindyloo commented Feb 12, 2021

I am getting something similar with metadata apply (I am using Hasura 1.4 alpha, though):

FATA[0002] failed to apply metadata: cannot apply metadata on the database: [unexpected] cannot continue due to inconsistent metadata ($.args)

reason: HTTP exception occurred while sending the request to http://host.docker.internal:3005/graphql
type: remote_schema
definition: 
{
  "comment": null,
  "definition": {
    "forward_client_headers": true,
    "timeout_seconds": 60,
    "url_from_env": "GRAPHQL_BACKEND_URL"
  },
  "name": "backend",
  "permissions": []
} 
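It looks like the remote schema behind GRAPHQL_BACKEND_URL just isn't reachable at the moment the metadata is applied. A quick sanity check before retrying (the URL and port come from my error output above; the container-to-host mapping is specific to my setup) is something like:

# host.docker.internal:3005 inside the Hasura container maps to port 3005
# on the host machine, so probe the backend from the host with a minimal
# GraphQL query before re-running hasura metadata apply.
curl -s -X POST http://localhost:3005/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query":"{ __typename }"}'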
