
JSON API v1.0rc1, simplified #237

Merged
merged 8 commits into json-api:gh-pages Jul 5, 2014

Conversation

dgeb
Member

dgeb commented Jun 23, 2014

Note: see the revised format page to review the heart of this PR.

This PR simplifies the original PR #234 by removing two of its most controversial
features:

  • Heterogeneous (i.e. polymorphic) relationships - This feature adds a lot of
    complexity for implementers, since it requires that type and id always be
    supplied together. This led to awkward URL slugs that merged type:id.
    Furthermore, based on my personal experience and that of other implementers,
    polymorphism in APIs tends to be an anti-pattern that can almost always be
    solved more appropriately with a typed solution. I no longer think it
    warrants inclusion in this spec.
  • Non-UUID Client IDs - Allowing non-UUID IDs generated on the client to be
    passed to the server puts a heavy requirement on server implementers, who
    would need to maintain a map of client IDs to newly formed server IDs while
    processing requests. My recommendation is to encourage the use of UUIDs,
    which can be generated by the client and/or server, and to use the single
    id field.
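For illustration, a client-generated-UUID request body might look like this (the payload shape and field placement are my assumption, not spec text):

```python
import json
import uuid

# Hypothetical POST body: the client generates a UUID and supplies it in the
# single "id" field, so the server never has to maintain a map from temporary
# client IDs to newly formed server IDs.
post = {
    "posts": {
        "id": str(uuid.uuid4()),
        "title": "My awesome post title",
    }
}
payload = json.dumps(post)
```

Because the ID is already final when the request is sent, the same identifier can be used to address the resource immediately after creation.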

My strong preference is to discuss these two features in separate issues during
the RC phase, so that the rest of this PR may be merged sooner rather than later.

Below is the text of the original PR, modified to reflect the state of this PR.


This is a fairly thorough rewrite of the spec that formalizes its structure and
addresses many outstanding issues.

The goal of this rewrite is to prepare JSON API for 1.0 versioning with a
release candidate phase. We would like to shake out any breaking changes during
this RC phase to provide implementers the degree of confidence they deserve from
this spec.

Any and all feedback is welcome, either in this PR or in the specific issues that it addresses.

Changes include:

  • New introduction explains the basic requirements and goals of JSON API. It
    also broadly explains the new optional PATCH support and how "JSON API
    represents all of a domain's resources as a single JSON document that can act
    as the target for operations".
  • New "Conventions" section explains SHOULD, MUST, etc. keywords
  • New "Document Structure" section describes the JSON API media type
    (application/vnd.api+json). This media type is used for both request and
    response documents (except for PATCH requests/responses).
  • Introduces the option to key the primary resource by the generic data key.
  • Clarify different representations allowed for singular and plural resources.
  • BREAKING CHANGE: Singular resource objects SHOULD now be represented with
    JSON objects instead of arrays. This allows for symmetrical representations in
    request and response documents, as well as PUT/POST requests and PATCH
    operations. It also simplifies implementations which do not support batch
    operations (i.e. they can allow an object and not an array).
  • Define URLs for resources, collections of resources and resource
    relationships. Allow for alternative URL definitions to be specified in
    responses.
  • BREAKING CHANGE: Allow the baseline implementation of JSON API to operate via
    POST, PUT and DELETE alone (no PATCH required). This introduces brand new
    specs for updating resources via PUT and updating relationships via POST and
    DELETE requests to newly specified relationship URLs. It also specifies how
    resources can be created, updated and deleted in bulk (but only per type).
  • Introduce alternative JSON PATCH syntax for all operations. This builds off
    the current spec's approach to updating relationships. JSON PATCH bulk
    operations are discussed.
  • Introduce a "Filtering" section in "Fetching". Since we encourage keeping all
    resources accessible at the root level, it follows that root level
    filtering should be encouraged instead of filtering via nested routes.
    Furthermore, root level filtering is more flexible because it allows for more
    than one filter to be applied to a collection.
  • Introduce error objects, which are specialized resource objects that MAY be
    returned in a response to provide additional information about problems
    encountered while performing an operation.
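To visualize the singular-vs-plural breaking change listed above, here is a minimal sketch (field values invented):

```python
# A singular resource is now represented as a JSON object, while a plural
# resource remains an array. This keeps request and response documents
# symmetrical for PUT/POST and PATCH.
single = {"posts": {"id": "1", "title": "An example post"}}
plural = {"posts": [{"id": "1"}, {"id": "2"}]}
```

An implementation that does not support batch operations can then simply reject arrays and accept only the object form.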

dgeb added 5 commits June 9, 2014 14:58
* New introduction explains the basic requirements and goals of JSON API. It
  also broadly explains the new optional PATCH support and how "JSON API
  represents all of a domain's resources as a single JSON document that can act
  as the target for operations".

* New "Conventions" section explains SHOULD, MUST, etc. keywords

* New "Document Structure" section describes the JSON API media type
  (application/vnd.api+json). This media type is used for both request and
  response documents (except for PATCH requests/responses).

* Introduces the option to key the primary resource by the generic `data` key.
  This key SHOULD be used for variable type (i.e. heterogeneous) resource
  collections and MAY be used for constant type (i.e. homogeneous) resource
  collections.

* Clarify different representations allowed for singular and plural resources.

* BREAKING CHANGE: Singular resource objects SHOULD now be represented with
  JSON objects instead of arrays. This allows for symmetrical representations in
  request and response documents, as well as PUT/POST requests and PATCH
  operations. It also simplifies implementations which do not support batch
  operations (i.e. they can allow an object and not an array).

* Define URLs for resources, collections of resources and resource
  relationships. Allow for alternative URL definitions to be specified in
  responses.

* BREAKING CHANGE: Allow the baseline implementation of JSON API to operate via
  POST, PUT and DELETE alone (no PATCH required). This introduces brand new
  specs for updating resources via PUT and updating relationships via POST and
  DELETE requests to newly specified relationship URLs. It also specifies how
  resources can be created, updated and deleted in bulk (but only per type).

* Introduce alternative JSON Patch syntax for all operations. This builds off
  the current spec's approach to updating relationships. JSON Patch bulk
  operations are discussed.

* Introduce a "Filtering" section in "Fetching". Since we encourage keeping all
  resources accessible at the root level, it follows that root level
  filtering should be encouraged instead of filtering via nested routes.
  Furthermore, root level filtering is more flexible because it allows for more
  than one filter to be applied to a collection.

* Introduce a `clientid` key that can be used to correlate a resource on the
  client with a newly created resource on the server.

* Introduce error objects, which are specialized resource objects that MAY be
  returned in a response to provide additional information about problems
  encountered while performing an operation.
Instead of hiding `a` elements except on hover,
display the ¶ tag as :after content on :hover.
This keeps the elements always on the page and
allows the links to work in all scenarios.
State balance between efficiency, readability, flexibility and discoverability.
These two aspects of the v1rc1 proposal have proven to be the most
controversial. They've been removed for now with the intention that 
they will be considered separately.
@ahacking

I still think there are some issues with the reference model:

  1. I don't understand why all objects are not under typed collections in the root object and why a nested linked key is required.
  2. When PATCH is considered, point 1 makes it more complex to update a graph, as objects logically exist under many different keys, i.e. "posts" and "linked" / "posts" depending on the endpoint URL.
  3. PATCH doesn't refer to objects by their ID but instead by array index and this can't really work.
  4. Client-provided UUIDs vs. server-generated IDs seem to be a differentiating factor for API design:
  • You don't need typed arrays at the root level because the id space is global.
  • You can use PATCH compatible documents that don't rely upon the vagaries of array indexes.
  • You can remove the "linked" bag.
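Point 3 can be sketched quickly (data invented): an index-based JSON Pointer addresses a position, not a resource, so any reordering silently changes its target.

```python
# "/posts/0" targets whatever happens to sit at index 0. After the server
# re-sorts the array, the same pointer targets a different resource.
doc = {"posts": [{"id": "9", "title": "first"}, {"id": "2", "title": "second"}]}
target_before = doc["posts"][0]["id"]     # "9"
doc["posts"].sort(key=lambda p: p["id"])  # e.g. the server re-sorts
target_after = doc["posts"][0]["id"]      # now "2"
```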

Since I want the simplicity to express and serialize arbitrary object graphs with minimal complexity, I will probably not use this spec in its current form, as it doesn't provide a solid resource reference model for PATCH and instead attempts to satisfy too many competing requirements.

I've pretty much arrived at the following for my APIs:

{
    "posts": {
      "eab9b2af-73f0-4e13-b348-0bd4377fb06a": {
        "title": "My awesome post title",
        "body": "The body of the post"
      }
    },
    "comments": {
      "fae2fdf-23b6-4b9b-9657-b5586d29e6fa": {
        "comment": "a comment"
      },
      "958d840d-2597-47f9-b3ae-1b63d622db4f": {
        "comment": "Another comment"
      }
    },
    "meta": {
    },
    "errors": {
    }
}

But I am considering a flat collection because it aligns better with a single identity map approach:

{
    "data": {
      "eab9b2af-73f0-4e13-b348-0bd4377fb06a": {
        "type": "post",
        "title": "My awesome post title",
        "body": "The body of the post"
      },
      "fae2fdf-23b6-4b9b-9657-b5586d29e6fa": {
        "type": "comment",
        "comment": "a comment"
      },
      "958d840d-2597-47f9-b3ae-1b63d622db4f": {
        "type": "comment",
        "comment": "Another comment"
      }
    },
    "meta": {
    },
    "errors": {
    }
}

There is more that I would like to say on errors in a follow up.

@BRMatt

BRMatt commented Jun 23, 2014

  1. I don't understand why all objects are not under typed collections in the root object and why a nested linked key is required.

I seem to remember it's to allow links to other resources of the same type, while drawing a distinction between the requested resources and the linked resources.

E.g. if you request people 1 and 2, who are both friends with 3 and 4, then the payload needs to separate 3 and 4 to prevent the client from thinking they were the requested resources.
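A minimal sketch of that shape (key names follow the proposal's conventions; the exact layout is my illustration):

```python
# People 1 and 2 were requested, so they appear under the typed key; their
# friends 3 and 4 are only referenced, so they go under "linked" and the
# client won't mistake them for requested resources.
doc = {
    "people": [
        {"id": "1", "links": {"friends": ["3", "4"]}},
        {"id": "2", "links": {"friends": ["3", "4"]}},
    ],
    "linked": {
        "people": [{"id": "3"}, {"id": "4"}],
    },
}
```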

@ahacking

@BRMatt that's an important use case. Thanks for that. So the distinction is the matched set vs. additional linked resources.

Still, I think I will tackle this differently in my APIs using metadata, as I don't want to have to collect results into temporary arrays and prefer a streaming API design where I can just splat objects and maintain a simple UUID dictionary.

@dgeb
Member Author

dgeb commented Jun 23, 2014

The JSON API document format is optimized for transport. Operations aren't performed against the structure of this document, as explained in the URLs section:

Collections of resources SHOULD be treated as sets keyed by resource ID. The URL for an individual resource SHOULD be formed by appending the resource's ID to the collection URL.

Therefore, operations do not target collections as arrays and do not target members of collections by position. Your first proposal matches the structure used to determine URLs. However, this structure can not be used for transport unless client-side IDs are an absolute requirement, which goes further than we're willing to go with this spec.
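The URL-formation rule quoted above can be sketched as (helper name invented):

```python
# Collections are treated as sets keyed by resource ID; an individual
# resource's URL is the collection URL with the ID appended.
def resource_url(collection_url, resource_id):
    return f"{collection_url}/{resource_id}"
```

So a post with ID "1" in the collection at http://example.com/posts would live at http://example.com/posts/1, regardless of its position in any transport array.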

@ahacking

@dgeb Thanks. I think I understand what you're saying. In that case the spec needs to clearly define a reference model/document up front and only then define "renditions" of the reference model/document that are actually used for transport. Even then, I still don't think it's a valid use of the RFCs; it's a cunning sleight of hand, since we are using JSON Pointers against fictitious documents, which is equivalent to saying we define a transformation of JSON Pointers for use with our transport documents. So really we are not using valid JSON Pointers at all, because documents are never exchanged in the structure that the JSON Pointers actually reference.

I understand that you don't want to go as far as mandating UUIDs in the spec, but as per my first post above, this is a real differentiating factor in API design, to the point that there should be two specs; I'm not really accepting the imaginary document / JSON Pointer transform as a settled position despite the goals of JSON-API. If I play devil's advocate for a moment I can convincingly argue and demonstrate an approach that uses UUIDs which is also a lot simpler and far more in the spirit of the RFCs.

I appreciate the effort being put into this, but for now I will keep going with my own API approach, as I can't justify the conceptual and implementation baggage of the proposed JSON-API spec given it also takes me further away from the RFCs for JSON Pointer and JSON Patch. It means I either have to transform the document first, or I can't use a standards-compliant JSON Pointer resolver, or alternatively I must implement a non-standard JSON Pointer-like resolver to avoid transforming the document/response. I just don't see the point of it, since it stems from a design fault due to trade-offs that are not relevant or even sensible for a UUID-oriented API.

We have to remember that it is not just transport formats but also the processing rules required to implement and use a protocol which are important. The differences in processing and handling between client-assigned IDs and server-generated IDs make them completely different protocols to use and build applications with, even though at first glance it looks as if we are just discussing the value space of an opaque id field.

@dgeb
Member Author

dgeb commented Jun 25, 2014

I think I understand what you're saying. In that case the spec needs to clearly define a reference model/document up front and only then define "renditions" of the reference model/document that are actually used for transport.

The intro roughly outlines the reference document: "JSON Patch support is possible because, conceptually, JSON API represents all of a domain's resources as a single JSON document that can act as the target for operations. Resources are grouped at the top level of this document according to their type. Each resource can be identified at a unique path within this document."

It is further defined in the URLs section. Perhaps it should be spelled out even further.

I think it's a stretch to call it a "cunning sleight of hand" that the transport format is a modified form of the reference document. I consider the transport format to be much more pragmatic than cunning.

If I play devil's advocate for a moment I can convincingly argue and demonstrate an approach that uses UUIDs which is also a lot simpler and far more in the spirit of the RFCs.

I have no doubt that such a constraint would simplify this spec further and even allow for a transport format that better aligns with the reference document. I am fully onboard with the advantages of UUIDs. However, I don't think many people want to constrain JSON API to only work with UUIDs. I see this constraint / simplification as worthy of a separate spec.

@dgeb
Member Author

dgeb commented Jun 25, 2014

Just to follow up on the array vs. set issue: it's important to return resource collections as arrays so that they can be ordered. This allows JSON API to support sorting.
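A small sketch of the distinction (data invented): the reference document keys resources by ID, an unordered set, while the transport document uses an array that can carry a server-imposed sort order.

```python
# Reference form: a set keyed by ID. Transport form: an array sorted however
# the server chooses (here, by title).
reference = {
    "2": {"id": "2", "title": "beta"},
    "9": {"id": "9", "title": "alpha"},
}
transport = sorted(reference.values(), key=lambda r: r["title"])
```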

@ahacking

@dgeb I didn't intend to convey a negative connotation, just a clever approach that solves what JSON Pointer on its own can't. I still think it's cunning :) but since JSON Pointer is rather myopic and doesn't allow us to point the way we would like, there are few satisfactory options.

I read those same words you referenced in the spec but didn't grok them as a "reference document/model" vs. "transport document" deal. I think using the language "reference document" and "transport document" might help, and showing an example reference document with corresponding transport documents would make the concept clearer.

JSON Pointer is very limited, so there is little choice but to do what you're doing in JSON-API. What we probably need is a JSON Path (https://github.com/flitbit/json-path), but there is currently no RFC/draft for that AFAICT.

I just yesterday got burnt by the server ordering issue. That was not something I was expecting to deal with, but since JavaScript objects don't have a defined property ordering, using arrays was the right approach. Using the reference document trick to get around the JSON Pointer problem is a sensible solution and really the only way out. I also had to implement a non-standard JSON Pointer resolver, and tracking which objects are results vs. linked (but not simultaneously a result) was a subtle complication which required some additional bookkeeping in my serialisation logic.

I think a rationale for some of the subtle design choices in JSON-API is also important; a few good ones:

  • The use of arrays supports result ordering, vs. objects keyed by ID, which can't be ordered without carrying a position in each object; that would not just be ugly, it is also harder to process.
  • The use of a 'linked' container supports separation of "results" from any resources that may be linked from the results. This is important so that self-referencing collections can work.
  • Use of a "JSON Pointer compatible reference document" to allow use of the JSON Pointer and JSON Patch RFCs.

Very important design considerations but not overtly obvious unless you have been following the development of the JSON-API spec.

I'm actually pretty close to where JSON-API is now, my errors container is different and more suited to what I want (putting multiple validation error problems under a single "JSON Pointer").

@dgeb
Member Author

dgeb commented Jun 25, 2014

@ahacking I can certainly get behind adding those clarifications to the spec. I agree that the reasoning behind some of the design choices in the spec is not overtly obvious.

I'm actually pretty close to where JSON-API is now, my errors container is different and more suited to what I want (putting multiple validation error problems under a single "JSON Pointer").

This does seem awfully close. With my proposal, you could return an array of errors, each pointing to the same resource with links, and then specify each attribute in a path that varies per error. I'd be curious why that wouldn't meet your needs.

@dgeb
Member Author

dgeb commented Jun 26, 2014

In dgeb@f802141 I've expanded the FAQ to include the arrays vs sets question, as well as the need for a linked object.

In dgeb@49b1dd8 I've added a discussion of the "reference document" to the URLs section.

@ahacking Thanks for the pushback on this. I hope these clarifications are useful.

@ahacking

@dgeb Thanks, looks good, I like the changes. I just added a suggestion/comment for additional clarification.

@dgeb
Member Author

dgeb commented Jun 26, 2014

@ahacking - incorporated with a minor tweak. Thanks again!

@Matthias-Wagner

I've got two comments:

  1. To my knowledge, RFC 7231 states that PUT may not do a partial update of an existing resource (see http://tools.ietf.org/html/rfc7231#section-4.3.4), but the proposal suggests that PUT "should" apply a partial update. Perhaps an alternative would be to allow PATCH with a content type of either application/vnd.api+json (using the semantics currently given for PUT) or application/json-patch+json (using the semantics currently given for PATCH).
  2. Updating multiple resources doesn't specify what should happen if the IDs in the body do not exactly match the IDs given in the URL (e.g. IDs missing or additional IDs).

Thanks for any clarification and comments.

@dgeb
Member Author

dgeb commented Jun 26, 2014

@Matthias-Wagner

In my reading, 4.3.4 is open to the concept of a partial update:

The PUT method requests that the state of the target resource be
created or replaced with the state defined by the representation
enclosed in the request message payload. A successful PUT of a given
representation would suggest that a subsequent GET on that same
target resource will result in an equivalent representation being
sent in a 200 (OK) response.

Nowhere does it say that a resource's state must be replaced in its entirety or
that a server can't merge its own version of state with the requested update
(e.g. by appending updated_at or id fields). It just says that a "subsequent
GET on that same target resource will result in an equivalent representation
being sent". In my interpretation, this means that the state change(s) requested
must take effect for a request to be successful.


Updating multiple resources doesn't specify what should happen if the ids in the body do not match exactly the ids given in the URL (e.g. ids missing or additional ids).

This probably should be spelled out more explicitly. If the IDs in the URL are
unauthorized or not found, 403 or 404 would be appropriate, with an array of errors
indicating each problem. If there is just a mismatch, then perhaps 422? I'm open
to suggestions.

@Matthias-Wagner

4.3.4 also contains a paragraph:
"An origin server that allows PUT on a given target resource MUST send
a 400 (Bad Request) response to a PUT request that contains a
Content-Range header field (Section 4.2 of [RFC7233]), since the
payload is likely to be partial content that has been mistakenly PUT
as a full representation. Partial content updates are possible by
targeting a separately identified resource with state that overlaps a
portion of the larger resource, or by using a different method that
has been specifically defined for partial updates (for example, the
PATCH method defined in [RFC5789]
)."

Basically, both of these suggestions would be usable with API+JSON: either PATCH as I suggested above or, probably nicer, a PUT to an attribute-filtered URL, which luckily enough is already defined by API+JSON. 422 seems to be a good choice for an ID mismatch when updating resources.

@steveklabnik
Contributor

@dgeb, yes, it is very clear that PUT means 'create or replace,' not 'update.' @Matthias-Wagner is correct. This has been made even more clear in httpbis.

@dgeb
Member Author

dgeb commented Jun 26, 2014

@steveklabnik but what is the definition of "replacement"?

Here are a couple scenarios which push at this definition:

  1. Imagine a user resource that has name, email and admin fields. Let's say that a user can update their own name and email but can not update the admin field. In fact, that field is not even available to them as a "non-admin". Can the user PUT their name and email? Such a request will not replace admin, which is part of the same resource.
  2. Imagine a post resource that has a title and a created_at field. created_at is set by the server, visible to users, but can not be updated by them. A PUT that includes created_at should be rejected, and yet this field is undeniably a member of the resource.

There are often aspects of a resource that can not be updated and the concept of an "entire resource" is not always clear. The utility of PUT is extremely constrained if it can not be applied in these common scenarios.
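Scenario 1 above can be sketched as a server-side merge that protects server-controlled fields (field names and the helper are invented for illustration):

```python
# The server applies the PUT body over stored state but never lets the
# request touch server-controlled fields such as "admin".
PROTECTED = {"admin"}

def apply_put(stored, body):
    merged = dict(stored)
    merged.update({k: v for k, v in body.items() if k not in PROTECTED})
    return merged

user = {"name": "Ann", "email": "ann@example.com", "admin": False}
result = apply_put(user, {"name": "Ann B.", "email": "ann.b@example.com"})
```

Whether this still counts as "replacement" in the RFC 7231 sense is exactly the question under debate here.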

I will review the RFCs and httpbis discussions re: PUT, but these are my initial concerns.

@steveklabnik
Contributor

http://tools.ietf.org/html/rfc7231#section-4.3.4

The PUT method requests that the state of the target resource be
created or replaced with the state defined by the representation
enclosed in the request message payload.

and

The fundamental difference between the POST and PUT methods is
highlighted by the different intent for the enclosed representation.
The target resource in a POST request is intended to handle the
enclosed representation according to the resource's own semantics,
whereas the enclosed representation in a PUT request is defined as
replacing the state of the target resource. Hence, the intent of PUT
is idempotent and visible to intermediaries, even though the exact
effect is only known by the origin server.

etc.

Yes, those problems are why PUT is almost never what you want. POST or PATCH are more appropriate ways to update in these cases. As you say:

The utility of PUT is extremely constrained if it can not be applied in these common scenarios.

PUT was originally envisioned as "upload this file to this URL", before CGI was even a really huge thing. The issue at hand is that PUT is supposed to be idempotent, and updates rarely are idempotent.
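The idempotency point can be sketched as follows (functions invented): replacing state twice gives the same result as replacing it once, while an increment-style update does not.

```python
# Full replacement is idempotent; an increment-style update is not.
def put_replace(state):
    return {"count": 5}

def post_increment(state):
    return {"count": state["count"] + 1}

state = {"count": 0}
```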

@wycats
Contributor

wycats commented Jun 26, 2014

FWIW: POST for update doesn't seem that crazy to me semantically, but it's certainly not what people are used to.

@wycats
Contributor

wycats commented Jun 26, 2014

@steveklabnik In the "pre-CGI world", what about this example:

  • I PUT to /articles/hello.txt
  • The server updates the mtime of the file
  • The server changes the Last-Modified in subsequent requests

Does this violate the semantics of PUT? If so, I would claim that PUT's pedantic spec text does not reflect any plausible real-life use-case, and we can go with the real-world usage that has emerged around it.

@ahacking

I am thinking the get-out-of-jail card for the pedantic nature of PUT is already in front of us.

We know that httpbis says one can define a sub-resource that overlaps or subsets a larger resource, so PUT can be defined to exclude state which the server manages, or hide state which is not visible to the user. Taken to its logical conclusion, every user would have their own distinct sub-resource to handle differences in access controls (i.e. what can be viewed/created/updated/modified) and also different endpoints for PUT vs GET, and this would be logically consistent with the pedantics of PUT; something like /resource/id/user-id for GET and /resource/id/user-id/put for PUT. I am not saying to do this, but just illustrating it as an example of the reasoning process. This demonstrates quite correctly that each user has a unique relation to the server's state, and this also interestingly reflects the deep reality of the universe. *[footnote]

It would seem this user/client relation requirement is already satisfied by request headers when a distinct per-user/client Authorization header is included. To solve the semantics of the difference observed between a PUT and a GET, either a different URL endpoint is required, or a different Authorization header, possibly using the realm parameter, could be used.

I would be interested to hear others' takes on this, which could evaporate the PUT semantic issues for things like server-managed/read-only state such as updatedAt and createdAt, or where different resource representations are required due to server-managed state and access control.

[footnote]
For those interested in physics, Relational Quantum Mechanics elegantly solves the observers-and-systems problem, which has plagued physics for the past century, with an analogous arrangement. When you observe a system you have a distinct relation to it. For someone like myself who has been working on distributed systems for some time and dealing with consistency issues, it didn't seem that hard to avoid the "spooky action at a distance, faster than light communication" EPR-type paradoxes when an information-centric view is taken, one that relates the observer to the system rather than treating them as independent. I found it incredible that physicists have been stumped whilst computer scientists and software developers have been solving distributed-system consistency problems by COB Friday for some time. It really was a shock to me that around a century of thinking hadn't got past "B observes A, C observes A, but B and C resolving each other's state is a paradox". HTTP really IS trying to tackle some deep problems.

@ahacking

ahacking commented Jul 2, 2014

@vdmgolub I've implemented both approaches, one with a generic linked object container and the other with typed linked data collections. My comment is not that sorting into type collections is not possible; it certainly is possible, and I am forced to do it that way in the current spec. It is, however, just plain WASTEFUL for a transport format, and the spec should not make it a requirement to do it that way.

The generic container requires less bookkeeping, as you simply defer writing the linked objects for the result set and don't need to build per-type collections for the entire object graph upfront. You can just dump all objects for the results and then use a recursive strategy to dump all linked objects. I keep a serialized flag on each object so I never output an object twice, and that is all the bookkeeping required. In essence, the result set objects are visited twice, and each linked object is visited just once.
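The strategy described above might be sketched as follows (names invented; links_of stands in for whatever yields an object's linked objects, and a "seen" set stands in for the serialized flag):

```python
# Emit the result set, then recursively emit linked objects, skipping
# anything already seen so no object is emitted twice. Results are visited
# twice (once to emit, once to walk links); linked objects just once.
def serialize(results, links_of):
    seen = {obj["id"] for obj in results}
    out = {"data": list(results), "linked": []}

    def visit(obj):
        for linked in links_of(obj):
            if linked["id"] not in seen:
                seen.add(linked["id"])
                out["linked"].append(linked)
                visit(linked)

    for obj in results:
        visit(obj)
    return out
```

The seen-set also makes cycles safe: a linked object that points back at a result is never re-emitted.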

Approaching a similar space efficiency with typed collections would require a first pass over the result set to output the results collection, whilst setting a serialization pass state on each object to 1 (1st pass) to avoid visiting nodes more than once. This initial pass would recursively visit all linked objects to build a collection of known types to serialize (just the types, not the actual objects). Then, for each type, visit each node in the graph recursively, setting its pass flag to the pass number and dumping it to JSON if it matches the type being serialized. The point is that you have to visit each node in the object graph at least 1+N times, where N is the number of linked types. This is less time efficient and more complex to implement than the generic collection approach.

The alternative typed-collection approach is to build temporary typed collections (i.e. sorting into collections) with a single pass over the entire graph and then iterate over each typed collection. This reduces the graph-visiting time complexity to 2 passes, but has a much larger space and time overhead of managing the per-type collections. It still doesn't match the time efficiency of the generic collection approach except in the pathological case where there are no linked objects.

Given an object graph as input to a serializer, any typed-collection serialization approach is less time and space efficient, and also more complex to implement, than a generic collection approach.

IMO the spec needs to allow for a generic linked object collection, because I'm not going to jump through hoops in my code, nor put unnecessary CPU and memory load on my servers and introduce response latency, just to keep conformity with the current spec when a generic collection works very well.

@dgeb
Member Author

dgeb commented Jul 2, 2014

@ahacking I can see that a generic linked collection of related resources is consistent with a generic, non-typed data collection of primary resources. In fact, maybe the use of the two should be paired together (i.e. when data is used for the primary resource, then linked SHOULD be a non-typed collection as well).

TBH, I struggled with justifying keeping data as an option for the primary key after removing the explicit language about polymorphic (i.e. heterogeneous) collections. However, I think that your use case is a good justification and your proposal for linked is consistent.

@dgeb
Member Author

dgeb commented Jul 2, 2014

I should reiterate that in all of these cases, resources should not be repeated. If a resource is both a primary and linked resource, it should only be included in the primary collection. And a resource should never be duplicated in either collection.

@ahacking

ahacking commented Jul 2, 2014

@dgeb, I agree that if a generic data collection is used then linked should be a generic collection too. I also agree that resources should not be repeated; however, you could have a case where a single resource on a server has different viewpoints available at different endpoints, and this should be allowed.

I had some questions about polymorphic types using typed collections; on one hand you may want to use a base-type collection but still have the resource's type set to its specific type. The collection a resource belongs to when it is part of the results collection really depends on what URI was requested, but for linked resources it seems it should be specific to the actual type. This is a point of consternation which I don't have with generic collections.

I also had some concerns with link templates, given they don't map onto the transport document structure. I can see cases for both transport and reference document mappings, but I think the intent of the template RFC is that clients don't need special knowledge to build URIs, and currently they do need special knowledge. To be clear, the key paths should be 'posts.links.comments' and not require special interpretation so that the templates may be used with RFC compliant implementations.

@gr0uch
Contributor

gr0uch commented Jul 3, 2014

@ahacking,

To be clear the key paths should be 'posts.links.comments' and not require special interpretation so that the templates may be used with RFC compliant implementations.

RFC 6570 does not define any constraint on the variable names, other than the operator characters. The naming convention is specific to JSON API, and I think [collection].[attribute] works fine.

@ahacking

ahacking commented Jul 3, 2014

@daliwali, the whole point of the RFC is to allow clients to build URIs in a mechanical way. I am suggesting that the keys under which the templates reside should follow the structure of the transport document precisely, given the intent of link templates. Also, the variables used within templates should similarly follow the transport structure precisely.

As it is, there is little point to link templates, since using them requires the client to already know the reference document structure (which is different from the transport document), and if you know that then you don't need the complexity of link templates.

So again I say: don't put stuff in the spec that is window dressing. If the intent is hypermedia and self-documenting APIs, then defer to what JSON Schema is doing and keep it out of JSON API.

@steveklabnik
Contributor

I am going to accept this pull request with the understanding that there are still some unresolved questions. It is 99.99% good, but rather than continue to argue about it here, let's talk about them afterwards.

@dgeb , you have done amazing, wonderful, hard work over the past weeks, and I cannot thank you enough.

steveklabnik added a commit that referenced this pull request Jul 5, 2014
steveklabnik merged commit 6dc7bc7 into json-api:gh-pages Jul 5, 2014
@dgeb
Member Author

dgeb commented Jul 5, 2014

@steveklabnik thank you so much!

And thank you to everyone who helped refine this PR with questions, suggestions and criticism. I agree with Steve that there are still some unresolved questions, which I'd now like to track and (hopefully) resolve in separate issues.

Everyone - please feel free to carry over any discussion from this PR or #234 in new, more focused issues. Thanks!

dgeb mentioned this pull request Feb 24, 2015