
Supporting Input objects? #4

Open
jbourassa opened this issue Dec 8, 2023 · 1 comment

Comments

@jbourassa
Contributor

On input types, the spec says:

However, this is a natural, straightforward, and backwards-compatible extension which may be added in a future version.

Can you describe at a high level how input would be represented? It's unclear to me how to encode inputs in a stable way while keeping the ability to evolve the schema (i.e. adding optional fields).

Thanks and great work on Argo!

@msolomon
Owner

msolomon commented Dec 8, 2023

Thanks for the kind words!

Can you describe at a high level how input would be represented? It's unclear to me how to encode inputs in a stable way while keeping the ability to evolve the schema (i.e. adding optional fields).

I spent a little time thinking about it, and I think this particular schema evolution challenge you mention is the main difficulty. I'll first try to explain the problem clearly for posterity.

In particular, Argo relies on selection set information guaranteed to be present in the query in order to determine the fixed order of fields in response (i.e. non-Input) objects. This is not possible for Input objects, which are declared only in the schema. Furthermore, the schema is allowed to evolve, and even backwards-compatible changes can affect field order. (A similar schema evolution problem is why Argo represents Enums as STRING instead of a number.)

For example,

Imagine a mobile client is making a request. Its code is based on a version of the schema where an Input object named Example has this definition:
input Example { field1: Int }

After some time passes, the server is updated with a backwards-compatible change which adds a nullable field:
input Example { field0: Int, field1: Int }

If Argo tried to encode an Example object by relying on field order, this would make a normally backwards-compatible change a breaking change.
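
To make that concrete, here is a hypothetical sketch (not Argo's actual wire format or API) of a purely positional encoding: the client writes values in its schema's declaration order, but a server with the newer schema reads those positions against a different field list.

```typescript
// Hypothetical positional encoding (NOT Argo's real wire format): write Input
// object field values in schema declaration order, with no field names.

type Wire = (number | null)[];

function encodePositional(fieldOrder: string[], value: Record<string, number>): Wire {
  // One slot per schema field, in declaration order; null for missing fields.
  return fieldOrder.map((f) => value[f] ?? null);
}

function decodePositional(fieldOrder: string[], wire: Wire): Record<string, number | null> {
  return Object.fromEntries(fieldOrder.map((f, i) => [f, wire[i] ?? null]));
}

// Client's schema: input Example { field1: Int }
const clientOrder = ["field1"];
// Server's schema after a backwards-compatible addition: input Example { field0: Int, field1: Int }
const serverOrder = ["field0", "field1"];

const wire = encodePositional(clientOrder, { field1: 42 }); // [42]
console.log(decodePositional(serverOrder, wire));
// => { field0: 42, field1: null } -- the value silently lands on the wrong field
```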

Backing up a little, we can start with the easy parts:

  • Documents/operations/queries are unaffected by the choice of Argo vs. JSON vs. other formats (including field arguments of all kinds), so what we are really talking about is how to encode Variables
    • I could imagine wanting to use custom scalars as field defaults using e.g. an Argo BINARY encoding, but let's leave that aside for now
  • The JSON representation for variables is a JSON object whose field names are the variable names, with each value encoded as you would expect
  • In Argo, we can avoid including variable names in the payload by constructing a RECORD type deterministically: say, walking the document/query and creating a typed field for each variable in the order they appear in the document (a sketch follows this list)
    • Scalars, lists, enums, non-nulls and absent values can all use their usual Argo representations
    • Input objects can't use a representation analogous to that of regular objects, for the reasons described above

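A rough sketch of that deterministic RECORD construction follows; the type and helper names here are invented for illustration, not Argo's actual implementation:

```typescript
// Sketch: derive a RECORD for an operation's variables by walking the
// document's variable definitions in order. VariableDefinition and RecordField
// are illustrative stand-ins, not Argo's real types.

interface VariableDefinition {
  name: string;  // e.g. "first"
  type: string;  // the GraphQL type as written, e.g. "Int", "ID!", "[String]"
}

interface RecordField {
  name: string;
  of: string;          // the Argo wire type chosen for this variable's GraphQL type
  omittable: boolean;  // nullable variables may be absent from the payload
}

// Because both sides walk the same document, they derive the same field order,
// so variable names never need to appear in the payload itself.
function variablesRecord(defs: VariableDefinition[]): { fields: RecordField[] } {
  return {
    fields: defs.map((d) => ({
      name: d.name,
      of: d.type,                       // in practice: mapped to STRING, VARINT, RECORD, ...
      omittable: !d.type.endsWith("!"), // nullable => value may be omitted
    })),
  };
}

// For `query ($id: ID!, $first: Int)`, this yields the fields [id, first] in that order,
// whereas the JSON transport would send {"id": "...", "first": 10} with the names inline.
```
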
So, to solve for the Input object case, we can use an encoding similar to the self-describing format for objects. Say:

Write a Label in Core encoding the number of fields which follow. Each non-Absent field follows in order: write its name as a STRING, then recursively write its value using the normal Input encoding (the Absent label is permitted but discouraged). These field names and values alternate, all concatenated together, until every present field has been written.

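A hedged sketch of that encoding; Writer and its writeLabel/writeString/writeValue methods below are invented stand-ins for the corresponding Argo Core primitives, not its actual API:

```typescript
// Sketch of the proposed self-describing encoding for Input objects.
// Writer and its methods are hypothetical stand-ins for Argo's Core primitives.

interface Writer {
  writeLabel(n: number): void;   // Label in Core encoding
  writeString(s: string): void;  // STRING
  writeValue(v: unknown): void;  // scalars, lists, enums, etc. via their usual encodings
}

// undefined marks an Absent field; Absent fields are simply not written.
type InputValue =
  | { kind: "scalar"; value: unknown }
  | { kind: "object"; fields: Map<string, InputValue | undefined> };

function encodeInputObject(fields: Map<string, InputValue | undefined>, out: Writer): void {
  const present = [...fields.entries()].filter(
    (entry): entry is [string, InputValue] => entry[1] !== undefined
  );
  out.writeLabel(present.length);  // number of fields which follow
  for (const [name, value] of present) {
    out.writeString(name);         // field name as STRING
    writeInputValue(value, out);   // then the value, recursively, via the normal Input encoding
  }
}

function writeInputValue(v: InputValue, out: Writer): void {
  if (v.kind === "object") encodeInputObject(v.fields, out);
  else out.writeValue(v.value);
}
```
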
I could work this into the spec without much difficulty or adding too much complexity. That said, I feel compelled to say that the expected payload size reduction here will typically be very small, and it might not be worth it. But maybe some of your customers have enormous input payloads (that aren't dominated by large string data)!
