
Remote schema type merging #2494

Closed
michaelhayman opened this issue Jul 9, 2019 · 21 comments
Assignees
Labels
c/actions Related to actions c/remote-joins Related to remote joins k/enhancement New feature or improve an existing feature k/question support/needs-action support ticket that requires action by team

Comments

@michaelhayman

michaelhayman commented Jul 9, 2019

Hi!

I need to be able to return Hasura types from my remote schema, so that Apollo Client can automatically refresh its cache without having to do a second GraphQL call (one of the main advantages of Apollo Client!)

I'm building my remote schema with Apollo like this:

const fs = require("fs")
const path = require("path")
const { makeExecutableSchema, mergeSchemas } = require("graphql-tools")

// Schema defined locally in apollo-server
const localExecutableSchema = makeExecutableSchema({
  typeDefs: localSchema
})

// Hasura's schema, introspected ahead of time into schema.graphql
const hasuraTypeDefs = fs.readFileSync(path.join(__dirname, "schema.graphql"), "utf8")

const remoteExecutableSchema = makeExecutableSchema({
  typeDefs: hasuraTypeDefs
})

// Merge both schemas into the one served to clients
const newSchema = mergeSchemas({
  schemas: [
    localExecutableSchema,
    remoteExecutableSchema,
  ],
  resolvers: resolvers
})

I downloaded the types from Hasura like this:

gq http://localhost:8080/v1/graphql -H 'X-Hasura-Admin-Secret: *********' --introspect > schema.graphql


It's definitely loading all the types into apollo-server 🎉, but then I get this error from Hasura. What's happening here? Is there some way to disable the aggregate fields other than editing the file? Or some way to make apollo-server/Hasura deal with these types?

graphql-engine_1  | {
  "timestamp": "2019-07-09T12:30:50.027+0000",
  "level": "warn",
  "type": "metadata",
  "detail": {
    "message": "Inconsistent Metadata!",
    "info": {
      "objects": [
        {
          "definition": {
            "definition": {
              "url": null,
              "headers": [],
              "url_from_env": "NODE_SCHEMA_URL",
              "forward_client_headers": true
            },
            "name": "node-server",
            "comment": null
          },
          "reason": "types: [ profile_responses_aggregate_fields, slides_aggregate_fields, user_notification_preferences_aggregate_fields, users_public_aggregate_fields, ..., <and so on, all \"aggregate\" fields for every table> ] have mismatch with current graphql schema. HINT: Types must be same.",
          "type": "remote_schema"
        }
      ]
    }
  }
}
@lexi-lambda
Contributor

To clarify: are you running apollo-server behind Hasura, or are you running Hasura behind apollo-server? That is, is your client connecting to Hasura or to apollo-server?

  • The former situation is probably simpler: if you’re running apollo-server behind Hasura, set things up so apollo-server doesn’t know anything about Hasura’s schema at all. Just let Hasura do the schema merging on its own (it happens automatically by creating a remote schema), and connect directly to Hasura from your client. The client will see the merged schema, and apollo-client can do its caching as usual.

  • If it’s the latter, and you’re running Hasura behind apollo-server, you probably don’t want to bother with setting up a remote schema in Hasura, since you’ll end up indirectly merging Hasura’s schema with itself. Instead, you’ll want to use Apollo’s own support for remote schemas and do the merging explicitly from that end. But you’ll have to be careful, since Hasura’s schema is dynamically generated based on the current role, so statically fetching the Hasura remote schema won’t do the right thing.

Either way, set up the remote schema in Hasura or apollo-server, but not both. Doing the merging on both ends means you’re creating a schema cycle, and both servers will end up trying to re-merge the merged schema (which already contains their local schema) with their local schema. If either schema changes, the schema will become inconsistent with itself because it’s trying to merge the old schema with the new one.
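Concretely, doing the merge only on the Hasura side amounts to registering the remote server once via Hasura's metadata API (a sketch; the name node-server and the NODE_SCHEMA_URL env var are taken from the error log above, and the exact endpoint can vary by Hasura version):

```json
{
  "type": "add_remote_schema",
  "args": {
    "name": "node-server",
    "definition": {
      "url_from_env": "NODE_SCHEMA_URL",
      "headers": [],
      "forward_client_headers": true
    }
  }
}
```

Once registered, Hasura serves the merged schema itself, so apollo-server only needs to know about its own local types.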

@michaelhayman
Author

michaelhayman commented Jul 10, 2019

It's the former: remote schema behind Hasura.

How do I return a Hasura type to the client then on a custom resolver? Any resolver I define in my remote schema has to return some kind of type. And without returning the exact type that Hasura would return, no caching happens.

E.g. a resolver for updating a user's account with special logic (verifying an SMS token, for example) which can't be handled purely by a Hasura update. This should return the user record as defined by the user type, but as that type is defined in Hasura, I can't (as far as I know). So the client has to again manually refetch that data from Hasura to update its cache, partially defeating the purpose of using apollo-client in the first place...

TLDR Currently I'm returning booleans from Apollo resolvers when I should be returning records, because I lack the Hasura-defined types in my remote schema.
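The "return booleans" workaround can be sketched as a plain resolver (the resolver name and verification logic here are hypothetical):

```javascript
// Sketch of the current workaround: the mutation resolver returns a bare
// boolean instead of the Hasura-defined user type, so apollo-client has no
// record to merge into its cache and must refetch the user separately.
const resolvers = {
  Mutation: {
    verifySmsToken: (_parent, { token }, _context) => {
      // ...custom verification logic would live here (hypothetical)...
      const verified = typeof token === "string" && token.length === 6;
      // Returning only a boolean, not the user record:
      return verified;
    },
  },
};

console.log(resolvers.Mutation.verifySmsToken(null, { token: "123456" }, {})); // prints true
```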

@michaelhayman
Author

I had talked about this in the Hasura channel on Discord a few months ago, and there the team (or the honorary members) suggested I do this. It's just falling over on the aggregate piece; hopefully there's some way around that!

@lexi-lambda
Contributor

If you want to share types between your remote server and Hasura, you do, indeed, need to duplicate those types in your remote schema. I would just recommend doing it manually, and only for the types you need, to minimize type inconsistency errors. As long as the types in your remote schema are identical to the types generated by Hasura, it should work okay.

It’s a bit of boilerplate to maintain the duplicate types, but it’s not necessarily bad boilerplate, since if you only copy the types you need, then a schema inconsistency error is more informative: it probably means you have to update some logic in your remote schema to accommodate the change. If you just copy all the types over, you’ll likely get a lot of spurious inconsistency errors that your remote schema doesn’t care about, so the consistency checker won’t be as useful to you.

That said, it’s also possible that there’s something wrong with the way Hasura checks consistency of the *_aggregate_fields types, so maybe there’s a bug in there, too. Does the issue happen for any Postgres schema?

@tirumaraiselvan
Contributor

Yeah, it is very interesting that the error reports that there is a mismatch between only the aggregate_fields. Ideally, there shouldn't be any problems if the types are exactly the same.

@ecthiender
Member

@michaelhayman if you're using Hasura to merge your remote schema, I'm not sure why you would write mergeSchemas code in apollo-server. Can you clarify/elaborate a bit more on your setup?

Also, pasting the relevant portions of your localSchema would help. It would be awesome if you could give us a Heroku app link with the minimal schema required (and the problematic remote schema added) to reproduce the issue. That would help us debug faster.

@coco98
Contributor

coco98 commented Jul 11, 2019

@michaelhayman

I think this is something we should be able to solve with remote joins (that allow relationships with remote schemas):

  1. Have "independent" custom resolvers / types in your remote schema
  2. The return types (queries, mutations) can contain a field that references a Hasura type. Say user_id is a field in the verify_sms_token mutation; user_id is also a unique id for an entry in the user table in Postgres.
  3. You create a relationship from user_id to the user table in Hasura (in the remote schema configuration at Hasura)
  4. Frontend clients can traverse the user model (and anything else that the user is related to) in their GraphQL queries, even though the remote schema doesn't know how to fetch user data or whatever else that user is related to.

The current remote joins feature supports database to remote schema relationships so you can do this:

query {
  user_in_postgres {
    id
    name
    remote_schema {
      more_info
    }
  }
}

But once we also support remote schema to postgres relationships, you can do this:

query {
  remote_schema {
    user_id
    more_info
    user_in_postgres {
      id
      name
    }
  }
}

Your custom resolvers and types can return completely independent types, but you can still get the benefit of the graph and the resolver logic in a different part of the stack via a relationship that is resolved by Hasura.
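Under this approach, the remote schema can stay completely independent of Hasura's generated types. A sketch of what the SDL might look like, using the verify_sms_token example from above (the output type name is hypothetical):

```graphql
type Mutation {
  verify_sms_token(token: String!): VerifySmsTokenOutput!
}

# The output carries only a scalar user_id; a relationship configured in
# Hasura's remote schema settings would join it to the user table.
type VerifySmsTokenOutput {
  user_id: ID!
}
```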

Do let me know if that makes sense and if that would solve the problem?

@michaelhayman
Author

So from my remote schema I return an ID, and then I can use that to do a join in Hasura via remote schemas.

I think this would work perfectly, if it also works for mutations :)

@dsandip dsandip added k/enhancement New feature or improve an existing feature and removed k/question labels Jul 16, 2019
@michaelhayman
Author

Just to follow up, will this support mutations? That's the use case: I update one or more records (or perform some action) via a resolver, and want that resolver to return the updated record(s) in exactly the same shape as a query would; that way the client can automatically update its cache :)

Currently I just return 'true', so the Apollo client has to do an additional query to update its cache.

Thanks!

@coco98
Contributor

coco98 commented Jul 16, 2019

@michaelhayman Yep, that's exactly the idea!

The way it works is: your "resolver" returns a user_id or product_id or something like that, and Hasura will expose that as the full type with its relationships etc. That way, the client can control which specific fields of the updated object(s) it needs in the mutation response for the client-side cache update.
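In that setup, a client-side mutation could then look roughly like this (hypothetical field names; the user field would be resolved by Hasura through the configured relationship, not by the remote schema):

```graphql
mutation {
  verify_sms_token(token: "123456") {
    user_id
    user {      # resolved by Hasura via the relationship
      id
      name
    }
  }
}
```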

@shanecontinued

@coco98 when will remote joins be released? I saw the announcement several months ago.

@marionschleifer marionschleifer added the c/actions Related to actions label Oct 14, 2019
@marionschleifer marionschleifer added the c/remote-joins Related to remote joins label Nov 3, 2019
@marionschleifer marionschleifer added the support/needs-action support ticket that requires action by team label Nov 20, 2019
@0xGosu

0xGosu commented Nov 29, 2019

+1 for this feature to be implemented soon. Also, can you please allow this: "Nodes from different GraphQL servers cannot be used in the same query/mutation".

@marionschleifer
Contributor

This issue will be solved by remote joins, which will be released in the next few weeks. Closing this issue.

@tejasmanohar

Is there an update on when remote joins will be released? I also want to be able to "return rich Hasura-generated types" from my custom mutation.

@tirumaraiselvan
Contributor

@tejasmanohar The new Actions feature allows you to create custom mutations and connect them with the rest of the graph. See https://hasura.io/docs/1.0/graphql/manual/actions/action-connect.html
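With Actions, the custom mutation and its output type are declared in Hasura itself, and an action relationship from the output type to a table exposes the full row in the mutation response. A sketch (the names here are hypothetical; uuid is a Hasura custom scalar):

```graphql
type Mutation {
  updateUserAccount(user_id: uuid!, sms_token: String!): UpdateUserOutput
}

# An action relationship from UpdateUserOutput.user_id to users.id then lets
# clients select the full user row in the mutation response.
type UpdateUserOutput {
  user_id: uuid!
}
```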

@peitalin

peitalin commented Aug 18, 2020

> This issue will be solved by remote joins, which will be released in the next few weeks. Closing this issue.

@tirumaraiselvan @marionschleifer @lexi-lambda

I don't think this thread answers the original question. @michaelhayman asks why the schema generated by:

gq http://localhost:8080/v1/graphql -H 'X-Hasura-Admin-Secret: *********' --introspect > schema.graphql

cannot be merged into the Hasura GraphQL schema, and instead spits out a "Types must be same" error.
GraphQL introspection should generate identical schemas, no? So remote schema stitching should work, and this is clearly a bug.

If this is not possible, teams have to maintain two sets of GraphQL types that are identical in every way except for the __typename (e.g. Products vs products).
This also breaks Apollo caching and basically makes Hasura a non-option for teams thinking of migrating their existing GraphQL infrastructure over to Hasura.

Actions and remote joins do not really solve the issue for teams with semi-complex mutations that require libraries, etc., as that would require rewrites of upstream services.

@tirumaraiselvan
Contributor

Hi @peitalin

Certainly you can reuse the exact same types in your remote schema and Hasura will not complain. You do not need to define Products and products; just one of them will do.

The issue is how to automate this, and the answer is that it is not easy because of the cyclical dependency. The remote schema supposedly adds new types and fields to Hasura, so Hasura has those types and fields too. But the remote schema itself uses the introspected Hasura schema, so it's not clear what will happen at runtime.

Now, in your particular example: you are copying the types semi-automatically by generating a static schema.graphql file (prior to adding any remote schemas) and then loading this into your remote schema. This should ideally work, but because the schema generated by Hasura is fairly complex, there could be issues in the compatibility checks. If this is the pattern you want to pursue, please do create a new issue, since this one has a mix of discussions.

@ZelimDamian

> Hi @peitalin
>
> Certainly you can reuse the exact same types in your remote schema and Hasura will not complain. You do not need to define Products and products; just one of them will do.

Hi @tirumaraiselvan, based on this reply I would assume that acquiring Hasura's schema through introspection and using the types from there to expose some custom queries and mutations should work. But it doesn't for me, and apparently not for the OP either, as both of us get the same error message. And, by the way, I'm getting the error not just for the aggregate types but also for "data" types like users/accounts/etc.

> The issue is how to automate this, and the answer is that it is not easy because of the cyclical dependency. The remote schema supposedly adds new types and fields to Hasura, so Hasura has those types and fields too. But the remote schema itself uses the introspected Hasura schema, so it's not clear what will happen at runtime.

I'm also experiencing this cyclical dependency: I was able to merge the remote schema once (by removing all Hasura types from it and using primitive types instead) and then introspect Hasura again. This looks like it will remain a problem even once the "have mismatch with current graphql schema" problem is resolved.

> Now, in your particular example: you are copying the types semi-automatically by generating a static schema.graphql file (prior to adding any remote schemas) and then loading this into your remote schema. This should ideally work, but because the schema generated by Hasura is fairly complex, there could be issues in the compatibility checks. If this is the pattern you want to pursue, please do create a new issue, since this one has a mix of discussions.

This applies to me, as I'm attempting to do exactly that by extracting the "data" types and ignoring things like queries and mutations. It doesn't work:

        "reason": "types: [ users_select_column, users_order_by, users, subscription_root, query_root, users_bool_exp ] have mismatch with current graphql schema. HINT: Types must be same.",
        "type": "remote_schema"

@ZelimDamian

ZelimDamian commented Dec 12, 2020

An update:

I was able to make it work as described in the previous comment, but only when I removed all "backwards" relationships from the tables.

Given two tables, User and Patient, where the Patient table has a foreign key user_id pointing to the corresponding User: Hasura allows creating two relationships based on the foreign key, Patient.User (the "forward" relationship, as it follows the direction of the foreign key) and User.Patient (the "backwards" relationship).

The "forward" relationship works as expected; the "backwards" relationship breaks remote schema merging with the error described above.
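In SDL terms, the report above describes roughly the following generated types (a sketch; the field names follow the comment, and the exact shape of the "backwards" field, object vs. array, is an assumption):

```graphql
type Patient {
  id: uuid!
  user_id: uuid!
  User: User!          # "forward" relationship: merges cleanly
}

type User {
  id: uuid!
  Patient: [Patient!]! # "backwards" relationship: reportedly breaks the merge
}
```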

@JCMais

JCMais commented Jan 12, 2021

> @tejasmanohar The new Actions feature allows you to create custom mutations and connect them with the rest of the graph. See https://hasura.io/docs/1.0/graphql/manual/actions/action-connect.html

@tirumaraiselvan so this is not possible with remote schemas alone?

Are there plans to support remote joins for remote schemas, just like there are for actions?

Example: mutation UpdateUser in the remote schema returns an author_id: ID field; we would then be able to create an updatedUser field in Hasura using a remote join from UpdateUser.author_id to users.id.

If there is an issue for this, please do let me know, as this was the only one I found. If you think it's better to create a separate issue, let me know.

I find it counter-intuitive having to add an Action for this, as I'm already using a remote schema.

Edit: Looks like there is already an issue for this: #5801

@jgoux

jgoux commented Feb 15, 2021

I was so happy that I could reuse Hasura's types to build custom mutations with Nexus.

Then I tried to add my Nexus API as a remote schema. 😭


Simple example with the remote mutation: createTask(input: CreateTaskInput!): task!

My task type is strictly equivalent between Hasura and Nexus (it comes from Hasura and is code-generated for Nexus).

task on Nexus: (screenshot omitted)

task on Hasura: (screenshot omitted)

I would expect Hasura to assert that the types are strictly identical and infer the relationship automatically. 👌
