Unless there are last thoughts, this is done. Final edits are in. Published in graphql-js 0.12; the RC of 0.13 is basically ready. SDL is becoming the law of the land.
Q: Patents: how does this affect spec versions that are published/unpublished?
A: Unsure; the relicensing says that everything after that commit is held to the new license. The next major cut of the spec has the patent grant.
Aims to let you break up a GraphQL schema into multiple files. Released a version, and more and more people are sending in PRs. A lot of people have adopted the technique, and it fits well with the SDL spec.
Currently works inside a comment, but would be interested in seeing whether a keyword could be supported.
Q: Could we add directives at the top of a file?
A: General consensus for more directives.
Can be tricky with tools using the import statement in different ways. A top-level directive could provide the right syntax and let the tools add support.
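A rough sketch of the two styles: the comment-based convention is what graphql-import handles today, while the top-level @import directive is a hypothetical strawman (names and signature invented here).

```typescript
import { importSchema } from 'graphql-import' // Graphcool's library
import { buildSchema } from 'graphql'

// blog.graphql, using today's comment-based convention:
//
//   # import Post from "posts.graphql"
//   type Query {
//     posts: [Post]
//   }
//
// A hypothetical top-level directive form (not implemented anywhere) could be:
//
//   @import(types: ["Post"], from: "posts.graphql")

// importSchema inlines the imported definitions into a single SDL string,
// which graphql-js can then build as usual.
const typeDefs = importSchema('blog.graphql')
const schema = buildSchema(typeDefs)
```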
Action items:
- Spec/reference implementation: Graphcool crew looking at it
Sidenote: this makes you want type extension, as importing and extending is a pretty common pattern. GraphQL.js could use improvement here.
Aims to provide a spec based on existing behaviors.
No explicit error codes; responses should always return { data: any, errors: any[] }. The aim is to consolidate how APIs work with POST/GET and query batching.
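A minimal sketch of that envelope and the de facto transport conventions the spec would consolidate; none of this is normative yet, and the field types are deliberately loose.

```typescript
// The response shape every compliant server would return over HTTP:
interface GraphQLHTTPResponse {
  data?: unknown
  errors?: Array<{
    message: string
    locations?: Array<{ line: number; column: number }>
    path?: Array<string | number>
  }>
}

// De facto transports under discussion (the /graphql path is convention, not spec):
//   GET  /graphql?query=...&variables=...&operationName=...
//   POST /graphql  with a JSON body of { query, variables, operationName }
```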
Like that we all agree generally, but it’d be good to improve interoperability between different server tools.
Would provide a way to handle new cases like query batching consistently across more GraphQL implementations.
Would also like to find a way to get metadata about the GraphQL server, so tools need less configuration. Could extend HTTP 1.1.
C: This might be a bit strict/overbearing
C: If the goal is to describe public APIs that should inter-op, then this could be really useful. Could come out of a collaboration between the companies that already have a GraphQL API, so we can reach some kind of agreement. Want to be careful about where that line is. The aim is to spec as little as possible, and let code win arguments.
C: Request params / GET / POST things should be in there. On the other hand, batch APIs still don’t have a clear single winner, so we should avoid speccing that.
C: We still don’t even agree on what a successful or error response looks like. That makes it hard to do tooling.
Action items:
- Figure out what should be in the spec
- Figure out how to move MAYs to MUSTs
- Ask people who are working on public APIs to collaborate
Right now there are two types of scalars: higher-level concepts (type aliases adding semantic meaning to existing scalars) and new transport primitives (Long, BigInt, etc.).
SDL syntax uses the same keyword for both. The RFC proposes a wording change to make the two different in syntax.
The current proposal only allows deriving from a built-in scalar.
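A strawman of the distinction, assuming an `as`-style clause; the exact keyword and ordering are precisely what the questions below are about.

```typescript
import { buildSchema } from 'graphql'

// Today both kinds of scalar are declared identically:
const today = buildSchema(`
  scalar UUID  # really a type alias: a String with extra semantics
  scalar Long  # really a new transport primitive
  type Query { id: UUID, count: Long }
`)

// Strawman per the RFC discussion (not valid graphql-js today):
//
//   scalar UUID as String   # alias of a built-in scalar
//   scalar Long             # new primitive, no built-in base
```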
Needs to be addressed in a backwards-compatible way.
Want to differentiate SDL formats vs. GraphQL information while keeping backwards compatibility. Need to be able to go from a schema to an SDL.
Idea: parse the AST for the data types, then add a validation step for this kind of system. The spec does have a clause for this, but leaves it to user code.
Q: Syntax-wise, this could feel closer to TS/Flow-styled JS; maybe switch the order?
Q: Custom keyword feels a bit unspecific for what it does
Some of these points are discussed in the PR.
Ideally should be available for introspection.
Summary:
- That PR needs some more time/energy from our discussion
- Needs to support old schemas
- Most scalars are custom
Artsy use case: a live bidding platform for auctions. As a participant you see real-time updates with sub-second latency, and updates can happen to many of the hundreds of lots. Currently WebSocket-based. The socket makes assumptions about what the client is interested in.
Subscriptions could be what is needed for live queries. The root field of a subscription would be the live query; subsequent responses can then carry delta updates. Connecting could include a timestamp indicating from when to send live data, to handle reconnects.
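A hypothetical shape for that, with invented type and field names; the `since` argument is the reconnect timestamp mentioned above.

```typescript
import { buildSchema } from 'graphql'

const schema = buildSchema(`
  scalar Timestamp

  type Subscription {
    # the first payload would be the full sale state; later ones, deltas
    liveSale(saleId: ID!, since: Timestamp): Sale
  }

  type Sale { lots: [Lot] }
  type Lot { id: ID!, highestBid: Bid }
  type Bid { amount: Int, bidderId: ID }

  type Query { sale(id: ID!): Sale }
`)
```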
Needs a way to decouple the update from the shape of the query. A scalar/object could be handled with nullability, but how do you deal with arrays, or deep hierarchies?
C: FB has done similar things, and bailed. It’s too tough to go from one query and handle the live changes.
C: We do polling live queries and keep sending the full payload. That avoids array patching and missed deltas, but it’s not optimal; we saw serious issues with server resources, with connections to DBs, etc.
C: Event-based subscriptions are what we’re talking about publicly. They can have tiny sub-queries, so when a new bid comes in we can make the query for the highest bidder. This has been more robust.
C: No right answer.
C: Tried the same thing, hit the same roadblocks. Ended up with event subscriptions. Super hard.
C: A lot of this is implementation challenges; the complexity may be specific to a backend, so spec changes might not make sense.
C: For clients, change patches should ideally be invisible; this can move into the client-side transport.
C: There’s a need for a layer in between.
C: Could work like keyframes in MPEG, which send the full data every so often; video is similar in its problems.
C: What does it take to find some space to play with an idea like ^
C: Need more examples of it actually working, so we can talk about tradeoffs.
Action items:
- Keep thinking about this
Found it hard to encourage all developers to use connections and ID, because of their link to Relay.
Desire to move the Relay spec into a best practices section. Want to decouple connections from Relay specifically. Global ID and connections are a great pattern.
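For reference, the pattern in question in its usual Relay-spec shape; this is roughly what a client-agnostic best-practices write-up would describe.

```typescript
import { buildSchema } from 'graphql'

const schema = buildSchema(`
  interface Node {
    id: ID!            # the globally unique ID under discussion
  }

  type Post implements Node {
    id: ID!
    title: String
  }

  type PostEdge {
    cursor: String!    # the per-item cursor mentioned below
    node: Post
  }

  type PostConnection {
    edges: [PostEdge]
    pageInfo: PageInfo!
  }

  type PageInfo {
    hasNextPage: Boolean!
    hasPreviousPage: Boolean!
  }

  type Query {
    node(id: ID!): Node
    posts(first: Int, after: String): PostConnection
  }
`)
```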
C: Want to be cautious about this slowing adoption of GraphQL.
Questions:
- Does the ID field make sense?
- Does the Relay ID make sense?
- Does pagination make sense?
In favour of splitting into multiple sections:
- A cursor per item feels weird as a normative requirement
Leeb has a doc that is basically this, which has other alternatives to connections
C: Perhaps “pattern”, as a term, is the best option?
Relay Modern doesn’t use the global ID.
Q: Is there momentum toward a normative version of the Node interface?
A: Sashko is looking at ^
Only called Relay due to the order of release. Important to stress this.
Should stay away from GraphQL clients that impose a spec on servers
Action items:
- Artsy to take a look into sending some PRs
- Leeb to send us an older doc on pagination ideas from around the time of making connections
From GitHub, with public APIs: trying to talk about data describing the data. Building something which shows how to describe changes, e.g. a field being removed in 6 months. Want to be able to provide that kind of metadata.
Want to explore the solution space. Is the schema responsible for this kind of thing? E.g. deprecation as a special case.
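A strawman of field-level metadata in SDL: @deprecated is the existing special case, while @removal and @cacheControl are invented here for illustration.

```typescript
import { buildSchema } from 'graphql'

const schema = buildSchema(`
  # invented directives, declared so the SDL is self-contained
  directive @removal(date: String!) on FIELD_DEFINITION
  directive @cacheControl(maxAge: Int) on FIELD_DEFINITION | OBJECT

  type Query {
    # @deprecated is built in; the others carry the new metadata
    legacyFeed: [String] @deprecated(reason: "Use feed") @removal(date: "2018-08-01")
    feed: [String] @cacheControl(maxAge: 60)
  }
`)
```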
C: Could it be metadata not just at the whole-schema level, but also at type/field level?
C: Want to do this from Apollo too: cache-control directives in the SDL.
C: Mobile team want to add commit metadata to the schema
C: Could this fit with the top-level import directives?
C: How do I add client directives that don’t get generated into the SDL? Want to preserve non-client-visible directives and stay future-proof.
C: Is there an input SDL and an output SDL?
C: Want the server’s view to be the same as the client’s.
C: Could move deprecation to metadata?
C: Could be verbose with cache/security controls
Is metadata tied to introspection? Most people start with the SDL, then move to the representation.
Action items:
- Marc looking at this with Apollo
- Try to make it general enough
One place went with error types defined in the schema; a second went with errors as strings. The spec doesn’t really say much about it, but the two implementations sit pretty far apart. Some clients only look for the raw data and ignore the errors.
Ideally we want to be able to differentiate between user error and developer error, e.g. the user going to the wrong place vs. a post missing.
C: Most errors don’t need depth. Most errors are exceptions. Could consider a small set of options.
C: The equivalent of a 422 is really hard to represent in GraphQL, like validation vs. verification errors.
C: In many languages it’s fine to have exceptions for non-critical issues. GraphQL errors don’t feel good for user validation. Not sure treating all errors as exceptions needs to feel so harsh.
C: An error doesn’t always act like an exception, e.g. a query with many mutations could have failed on an individual one.
C: Maybe the spec is a bit unstructured here because of how FB handles errors; not sure if it’s the same for all usages.
C: There is an internal error, a critical error that stops all execution. All errors have error codes. We have a bool for whether the error could be transient (i.e. can you re-fetch?).
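A sketch of that error entry; the code and transient fields are assumptions based on the comment, carried in the spec’s reserved extensions slot.

```typescript
// One entry in the top-level errors list, extended per the comment above.
interface ExtendedGraphQLError {
  message: string
  path?: Array<string | number>
  extensions?: {
    code: string        // machine-readable error code, e.g. "INTERNAL"
    transient: boolean  // true if the client can simply re-fetch
  }
}
```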
Is there a way of re-thinking an example like isUsernameTaken to make it a plain queryable field?
The question here is: is this really an app-developer-level problem, or a GraphQL-spec-level one? People have pretty different needs.
Want some kind of standardization for an error spec that offers clients ways to handle errors.
“Errors are exceptional” is the current norm; product uses could fit as data? Error metadata maybe? A set of error codes that describe the error types.
Last WG: instead of static error codes, could talk about some kind of schema definition for additional fields in an error. Then you could have a code field, assign enums, etc.
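A strawman of that idea, with invented names: expected user errors modelled as schema types with an enum code, instead of top-level errors.

```typescript
import { buildSchema } from 'graphql'

const schema = buildSchema(`
  enum UserErrorCode { USERNAME_TAKEN, INVALID_EMAIL }

  type UserError {
    code: UserErrorCode!
    message: String!
    field: String        # which input field the error applies to, if any
  }

  type CreateUserPayload {
    user: User           # null when the mutation failed
    userErrors: [UserError!]!
  }

  type User { id: ID!, name: String }

  type Mutation { createUser(name: String!): CreateUserPayload }
  type Query { me: User }
`)
```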
Moved offline: Errors would need to go to the schema.
Current spec says: if it’s a client error, you send data; if you have an exceptional error, you don’t send data.
Action Items:
- Is “Errors are Exceptional” the right answer? Should that get added to the spec?
Want to care about offline caches, offline mutations, and syncing data back up afterwards. Ended up with general-purpose types like Entity, Changes, Objects.
[Discussion is hard to summarize I’m afraid]
Action Items:
- In the RFC/POCs, add queries that weren’t possible before, to show why they don’t work today and how they work after the change