Supporting Input objects? #4
Thanks for the kind words!
I spent a little time thinking about it, and I think the schema evolution challenge you mention is the main difficulty. I'll first try to explain the problem clearly for posterity.

Argo relies on selection set information guaranteed to be present in the query in order to determine the fixed order of fields in response (i.e. non-Input) objects. This is not possible for Input objects, which are declared only in the schema. Furthermore, the schema is allowed to evolve in backwards-compatible ways, some of which can affect field order. (A similar schema evolution problem is why Argo represents Enums as STRING instead of a number.)

For example, imagine a mobile client is making a request. Its code is based on a version of the schema where an Input object named `Example` exists. After some time passes, the server is updated with a backwards-compatible change which adds a nullable field to `Example`. If Argo tried to encode an `Example` object by relying on field order, this would turn a normally backwards-compatible change into a breaking change.

Backing up a little, we can start with the easy parts:
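The schema snippets for this example appear to have been lost from this copy of the thread. A hypothetical reconstruction (the field names here are invented purely for illustration) might look like:

```graphql
# Version 1: the schema the mobile client was built against
input Example {
  id: ID!
}

# Version 2: a backwards-compatible server update adds a nullable field.
# A client encoding Example by field position alone has no way to know
# this field exists, so positional encodings would break.
input Example {
  id: ID!
  note: String
}
```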
So, to solve for the Input object case, we can use an encoding similar to the self-describing format for objects. Say:

- Write a Label in Core encoding the number of fields which follow.
- Each non-Absent field follows in order: write a STRING capturing the field name, then recursively write the field's value using the normal Input encoding (the Absent label is permitted but discouraged).
- These field names and values alternate until completion, all concatenated together.

I could work this into the spec without much difficulty or adding too much complexity. That said, I feel compelled to say that the expected payload size reduction here will typically be very small, and it might not be worth it. But maybe some of your customers have enormous input payloads (that aren't dominated by large string data)!
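As a rough illustration of the proposed encoding, here is a minimal Python sketch. It assumes Labels are zigzag varints and STRINGs are length-prefixed UTF-8, which matches the spirit of Argo's Core encoding but is not a faithful implementation of the spec; `ABSENT`, `encode_label`, and the other names are invented for this sketch.

```python
ABSENT = object()  # sentinel for fields the client omitted entirely

def encode_label(buf: bytearray, n: int) -> None:
    # Zigzag-encode, then write as a varint (assumed Label wire format).
    z = (n << 1) if n >= 0 else ((-n << 1) - 1)
    while True:
        byte = z & 0x7F
        z >>= 7
        if z:
            buf.append(byte | 0x80)
        else:
            buf.append(byte)
            break

def encode_string(buf: bytearray, s: str) -> None:
    # STRING: length Label followed by UTF-8 bytes.
    data = s.encode("utf-8")
    encode_label(buf, len(data))
    buf.extend(data)

def encode_input_object(buf: bytearray, fields: dict, encode_value) -> None:
    # Self-describing Input object: a count Label, then alternating
    # field-name STRINGs and recursively encoded field values.
    present = [(k, v) for k, v in fields.items() if v is not ABSENT]
    encode_label(buf, len(present))
    for name, value in present:
        encode_string(buf, name)
        encode_value(buf, value)
```

Because each field carries its own name, the server can decode an Input object even when the client's schema version lacks fields the server knows about, which is what makes adding a nullable field non-breaking under this scheme.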
On input types, the spec says:
Can you describe at a high level how input would be represented? It's unclear to me how to encode inputs in a stable way while keeping the ability to evolve the schema (i.e. adding optional fields).
Thanks and great work on Argo!