Clarify spec wrt readOnly and writeOnly in referenced schemas #1622
Another instance of 'relevancy', but this time using different wording, crops up with regard to the use of the `xml` object in #1435. More clarity and consistency in these (and other?) instances would be helpful.
The @OAI/tsc agrees that the wording should be clarified to allow the above scenario. @tedepstein will create a PR with better words.
Note that we address this in JSON Schema: these keywords are relevant in APIs as a protocol-independent way to document the behavior of an entire resource. In HTTP, much of that is otherwise conveyed by which methods a resource supports.
@handrews, thanks. We have so far only agreed to clarify the spec, not to expand usage to root schemas. We could address root schemas later if there is a need, or if the TSC decides to formally align with a draft of JSON Schema that includes them.
Make it clear that `readOnly` and `writeOnly` are allowed in any Schema Object, but only effective when applied (directly or by reference) to a property subschema. Adds similar language to the `xml` property, addressing [Mike's comment](OAI#1622 (comment)). Fixes OAI#1622.
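For illustration (the schema below is hypothetical, not taken from the PR), the clarified rule means both placements are legal, but each takes effect only where the schema is applied to a property:

```yaml
components:
  schemas:
    Foo:
      type: object
      readOnly: true      # allowed; effective only where Foo is used as a property subschema
      properties:
        id:
          type: string
          readOnly: true  # applied directly to a property subschema: effective
```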
@darrelmiller @tedepstein Since we seem very likely to move to JSON Schema 2019-09 in OAS 3.1, this problem will go away, since (as noted above) the issue is now addressed in the JSON Schema spec. We haven't had anyone complain about how we specified it, so it seems to be working out OK.
@tedepstein @darrelmiller @webron the discussion in #2110 reminded me that in addition to covering the topic of these keywords in root schemas, JSON Schema is a bit more lax than OAS in terms of whether read-only and write-only values are to be included in requests and responses respectively. Here is the relevant part for both topics:
Note that since JSON Schema has more use cases than just APIs, it talks about an "owning authority" instead of a server; in the case of HTTP-based APIs, the owning authority is the HTTP server. In our experience, it is particularly important to allow read-only values to appear in requests (the GET-modify-PUT cycle being the usual reason). Do OAS users actually strip out such values in practice? This issue should be added to @philsturgeon's tracking list in #2099.
@tedepstein is this something you have time for, or shall I take a swing at it?
@philsturgeon, I can submit a PR, but I need to verify a couple of things. First, I assume this change should target 3.1, and the wording currently in the 3.1 spec. @handrews, I agree with you on this point:
I think it's beneficial for OpenAPI to specify uniform treatment of `readOnly` and `writeOnly`.
Points 1 and 2 build on OpenAPI 3.x and JSON Schema by specifying the expected behavior when a `readOnly` or `writeOnly` property is sent where it shouldn't be. Point 3 reinstates the OAS 3.0 semantics of `readOnly` and `writeOnly` in validation.
Thoughts...? Point 4 could also be an issue, because JSON Schema says that a `readOnly` value in a request or a `writeOnly` value in a response "MAY be ignored if sent to the owning authority, or MAY result in an error, at the authority's discretion." JSON Schema does not seem to give us the option to ignore these values entirely.
@tedepstein thanks for working through the cases here!

Regarding points 1 and 2, I think it would be a really good idea to explicitly call out the GET-modify-PUT cycle as a place where the SHOULD should (SHOULD?) be ignored. I've encountered junior engineers who think these requirements mean the fields need to be stripped out before sending a PUT request. In general, people don't always have a good feel for when an exception to a SHOULD is appropriate. I think this one is clear enough, and fundamental enough to API design, that it warrants a call-out (see the sketch at the end of this comment).

Regarding point 3, I view one of the goals of JSON Schema compatibility to be that validation can be handed off to a generic JSON Schema validator without knowledge of the API context. The validator might need to understand the OAS extension vocabulary, but understanding extensions is now within the JSON Schema specification. The validator cannot, however, know whether an instance is a "read" instance or a "write" instance, so there's no way for it to selectively disable the `readOnly` and `writeOnly` assertions.

Regarding point 4, in HTTP, using these in a root schema is pretty irrelevant: if there's a GET you can read; if there's a PUT or DELETE you can write. In other contexts, you don't have that extra information because you don't have as rich a transport protocol. I think it's fine but redundant for people to use root-schema `readOnly` and `writeOnly` in HTTP-based APIs.
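To make the GET-modify-PUT exception concrete, here is a minimal sketch using Python's `requests` library (the endpoint, resource, and field names are hypothetical):

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical API

# GET-modify-PUT cycle: fetch the resource, change one field, write it back.
resource = requests.get(f"{BASE_URL}/foo/123").json()  # includes the readOnly "id"
resource["name"] = "Ted (updated)"

# The client does not strip the readOnly "id" before the PUT;
# the server is expected to ignore or tolerate it.
requests.put(f"{BASE_URL}/foo/123", json=resource)
```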
OK, I can work that scenario into the revision.
This still has me a little concerned for a few reasons:
I'm not sure what to suggest. I'm kicking around some thoughts, and I'll post back soon.
@tedepstein I'll be interested to hear what you come up with. It might be worth getting clarity on the importance of being able to hand validation off to a totally vanilla (plus OAS extension module) JSON Schema validator. Because there is intentionally no way* to supply the validator with context to change its behavior, having validation behavior change based on something like "is this instance from a request or a response" is absolutely noncompliant. In general, validation is not aware of application context; things that need application-awareness must be signaled using annotations.

*the exception being the partial ability to control `format` validation behavior.
Prepare to be underwhelmed. :-)
This is the only practical solution I've come up with -- just accept that the full semantics of OAS `readOnly` and `writeOnly` are not expressible in pure JSON Schema, and document the incompatibility. Hopefully, there won't be many other cases like this in 3.1. But where we do have these incompatibilities, we should probably call them out explicitly in the 3.1 spec. And in 4.0, unless some better option presents itself, we can drop these special semantics and just go with standard JSON Schema.

@philsturgeon, how do you feel about this compromise? Do you see any other way around it?
This was the other idea I was batting around. Not a hard-coded, predefined notion of "request" or "response," but a user-defined context that is basically a runtime argument to the schema validator. Schemas could have certain properties, or other constraints, that are only defined in a given context (sketched below). If JSON Schema had that generic capability, we could use it to define "request" and "response" contexts, and specialize schema validations in that manner. But it sounds like you considered this, and you don't want to do it.

Barring that, some languages can model business domains at a level of abstraction above physical schemas, define context-specific variants, and validate against those variants (or generate physical schemas for them). RAPID-ML calls these variants realizations. But (full disclosure) the only implementation of RAPID-ML is in RepreZen API Studio. And it looks like alternative schemas won't be available in OpenAPI until at least a 4.0 release.
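A purely hypothetical sketch of what that might look like; no JSON Schema draft defines a `contexts` keyword, so this is illustration only:

```yaml
FooObject:
  type: object
  properties:
    id:
      type: string
  contexts:            # hypothetical keyword: constraints active only in a named context
    response:
      required: [id]   # "id" is required only when validating responses
```

A validator would then take the active context as a runtime argument, along the lines of `validate(instance, schema, context="response")`.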
I'm not sure I've really considered `readOnly` and `writeOnly` in the context of validation, but I should have. For me it's usually documentation and mocking: creating example response bodies when given a schema. Should it shove `created_at` into the request body, etc.? If you try and update an "id" property... agh. I don't want to create any technical differences from JSON Schema, but maybe we can provide advice for some types of tooling on how to handle these things instead? (See the example below.)
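For example, given a hypothetical schema, a mocking or documentation tool might derive different example bodies for requests and responses:

```yaml
# Hypothetical schema consumed by a mock server or docs generator:
Widget:
  type: object
  properties:
    id:         { type: string, readOnly: true }    # response-only
    secret_key: { type: string, writeOnly: true }   # request-only
    name:       { type: string }

# Derived example request body:  {"name": "Ted", "secret_key": "s3cr3t"}
# Derived example response body: {"id": "123", "name": "Ted"}
```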
Question: would it be a breaking change if we softened the "SHOULD NOT be sent" language to say "MAY be omitted"?
It doesn't really solve the problem of JSON Schema compatibility, but it does make it easier to pivot to the GET-modify-PUT scenario as a case where it might be better to leave the property in the request. I don't think this should be a breaking change. "SHOULD NOT be sent" and "MAY be omitted" both allow for cases where the property is present or absent. "SHOULD" just adds guidance about which way is preferred, while "MAY" is neutral.
It's less a case of creating technical differences, more just trying to cope with the technical difference we already have. OAS v3 makes `readOnly` and `writeOnly` affect validation, with results that depend on whether the instance is a request or a response. Practically speaking, you only hit this problem when all of the following are true:
So there are three corresponding solutions for API clients (including client libraries and API testing tools), servers (including mocks), code generators, example generators, and other tools:
In this particular corner case, I don't see a way to make validation fully compatible with JSON Schema without a breaking change to the spec. We've made a clear commitment to backward compatibility in minor spec releases. We've also set a goal of full JSON Schema compatibility, and we're very nearly 100% there, AFAICT. But IMO there's a little more wiggle room to compromise on JSON Schema compatibility than on backward compatibility. It does suck, I know. If we can't stand the thought of compromising JSON Schema compatibility, I can still see two options:
I'll venture a guess that Option 1 is a nonstarter. Option 2 might seem radical, but it goes back to my argument in issue #2019:
Case in point: changing the OAS semantics of `readOnly` and `writeOnly`.
Yeah, that's a nonstarter for me. I could bring it up with other JSON Schema folks, but to me this would be a huge application-specific warping of the consistent JSON Schema architecture that we've spent the last year and a half putting together. It would be a breaking change on our side, not just in terms of a keyword behaving differently, but in terms of changing the scope of what keywords and validators do in general.
I would be in favour of pointing people at the schema pre-processing solutions suggested by @tedepstein if/when this comes up (i.e. not in the spec itself).
@MikeRalphson @tedepstein I'm a bit confused on the pre-processing idea. I might be fine with it, but it's a little unclear where the "preprocessor" fits in and what the messaging is around this behavior. There are already at least two issues filed showing confusion over the use of `readOnly` and `writeOnly`.

I think it will help to clarify what counts as a "JSON Schema implementation" vs an "application" that sits on top of JSON Schema. If OAS 3.1 is going to claim JSON Schema compatibility without any asterisks, then it has to set the expectation that anything handed as written in the OAS document to an implementation (most notably a validator) will behave according to the JSON Schema spec. However, there are lots of things in the OAS ecosystem that should be considered "applications", including but not limited to:
If the code generators are aware of whether they are being generated for (or used in) the client vs the server, then they can do different things, and documentation renderers can generate notes about client vs server behavior. But we can't claim "compatibility" if the schema gets changed from what's written into something else before being handed off to JSON Schema. Putting sleight of hand in so that it looks like standard JSON Schema validation when it isn't would make things worse. In the "preprocessor" approach, what does the user see?
Oops, left junk in the end of that last one from edits; sorry about that, please read it on the web site. Adding this comment for email notification.
The main scenario I'm thinking about is runtime request and/or response message validation on the client and/or server. In any of these cases, we would ideally want the schemas embedded in or referenced from the OAS document to be pure JSON Schema, so a JSON Schema implementation can validate the messages. To pass this test, there are some prerequisites intrinsic to JSON Schema:
On top of these, the OpenAPI validation use case implies three additional criteria:

1. Validation honors OpenAPI's request/response-sensitive semantics for `readOnly` and `writeOnly`.
2. The schema handed to the validator is the schema as written in the OAS document.
3. The message validated is the message as actually sent.
The problem is with the first rule: OpenAPI has this special interaction between `readOnly`, `writeOnly`, `required`, and the request/response context.
The preprocessing ideas involve design-time or runtime adapters that address the JSON Schema incompatibility by compromising rule 2 or rule 3, above.

**Adapting Schema and Schema Bindings**

Assume an OpenAPI contract like this:

```yaml
openapi: "3.0.3"
info:
  version: 1.0.0
  title: Example for OAI/OpenAPI-Specification issue 1622.
paths:
  /foo:
    post:
      requestBody:
        content:
          "application/json":
            schema:
              $ref: "#/components/schemas/FooObject"
      responses:
        201:
          description: Created Successfully
          content:
            "application/json":
              schema:
                $ref: "#/components/schemas/FooObject"
components:
  schemas:
    FooObject:
      type: object
      required:
        - id
      properties:
        id:
          type: string
          readOnly: true
        name:
          type: string
```

According to OpenAPI, `id` is `readOnly` and is in the `required` list, so the required constraint takes effect on the response only, and `id` SHOULD NOT be sent in a request. If the client sends this request:

```json
{
  "name": "Ted"
}
```

OpenAPI says it's valid, but JSON Schema says it's not.

Let's assume the API server implementation wants to do request validation. If it just hands that request and the embedded schema to a JSON Schema validator, it will fail validation, as expected by JSON Schema, but not in compliance with OpenAPI 3.x.

The server API implementation could, on its own or with the help of an OpenAPI tool or framework, adapt the OpenAPI contract to make validation work the way OpenAPI says it should. It could do this dynamically at runtime, or by use of a code generator at design time. An OpenAPI contract adapted at design time might look like this:

```yaml
openapi: "3.0.3"
info:
  version: 1.0.0
  title: Example for OAI/OpenAPI-Specification issue 1622.
paths:
  /foo:
    post:
      requestBody:
        content:
          "application/json":
            schema:
              $ref: "#/components/schemas/FooObject_REQUEST"
      responses:
        201:
          description: Created Successfully
          content:
            "application/json":
              schema:
                $ref: "#/components/schemas/FooObject_RESPONSE"
components:
  schemas:
    FooObject:
      type: object
      properties:
        id:
          type: string
          readOnly: true
        name:
          type: string
    FooObject_REQUEST:
      allOf:
        - $ref: '#/components/schemas/FooObject'
    FooObject_RESPONSE:
      allOf:
        - $ref: '#/components/schemas/FooObject'
      required:
        - id
```

So in this scenario, the API implementation invokes the standard JSON Schema validator with the request body (unmodified) and the `FooObject_REQUEST` schema. Similarly, a client wanting to do response validation could use static code generation or runtime adaptation to generate the above schema, and validate the response against `FooObject_RESPONSE`.

**Adapting Request and Response Messages**

Another approach is to modify the request or response message to inject required property values. In the above scenario, the request message:

```json
{
  "name": "Ted"
}
```

could be adapted to this:

```json
{
  "id": "PLACEHOLDER_VALUE",
  "name": "Ted"
}
```

The server implementation would invoke a standard JSON Schema validator with the original schema and the modified request body. It could use hand-written code to do this adaptation, or could use an OpenAPI-aware runtime library with that behavior built-in.
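A minimal sketch of that runtime adaptation, using Python's `jsonschema` package (the placeholder-injection helper is hypothetical and only handles top-level properties):

```python
import copy

from jsonschema import Draft7Validator

# The FooObject schema from the example above, as a Python dict.
FOO_OBJECT = {
    "type": "object",
    "required": ["id"],
    "properties": {
        "id": {"type": "string", "readOnly": True},
        "name": {"type": "string"},
    },
}

def inject_read_only_placeholders(schema: dict, body: dict) -> dict:
    """Copy the request body, adding placeholder values for required
    readOnly properties so a vanilla validator will accept it."""
    adapted = copy.deepcopy(body)
    props = schema.get("properties", {})
    for name in schema.get("required", []):
        if props.get(name, {}).get("readOnly") and name not in adapted:
            adapted[name] = "PLACEHOLDER_VALUE"
    return adapted

request_body = {"name": "Ted"}
validator = Draft7Validator(FOO_OBJECT)

print(validator.is_valid(request_body))  # False: "id" is required but absent
print(validator.is_valid(inject_read_only_placeholders(FOO_OBJECT, request_body)))  # True
```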
The above preprocessor/adapter ideas fail that test (the expectation, stated above, that anything handed as written in the OAS document to a JSON Schema implementation behaves according to the JSON Schema spec).
Yeah. Thus my earlier assessment: this kind of sucks. We can create an article somewhere with these adaptation/preprocessing ideas, along with other workarounds and guidelines. But these things are just Band-Aids over a JSON Schema incompatibility. They are not a way for us to claim 100% compatibility.
@handrews @tedepstein @darrelmiller in the last TSC call I confused two issues and said I was writing up something for this issue, but I am not. I confused this with #2017, which I am going to take a stab at now. Henry, I think this might fall in your lap, and/or need more discussion on Thursday.
@darrelmiller this needs a TSC decision on whether we're going to break SemVer or whether we're going to break JSON Schema compatibility. Those are the two options; everything else is just details about how to document it. At the last call where this was discussed, the question was raised whether any tooling, particularly any validator, even handles this correctly at all right now. In practice, it's only a breaking change if someone somewhere actually relies on the current behavior in code. @philsturgeon there's nothing I can do here without that decision.
This reply is you handling it, so: great! 🥳
This has been addressed in 3.1-rc0 by removing `readOnly`/`writeOnly` from the spec and dropping semantic versioning.
@handrews @philsturgeon @darrelmiller
@ChaitanyaBabar this is a closed issue and Henry Andrews doesn't owe you anything; let's not pester contributors for answers, especially so long after their involvement has passed. Please feel free to create a new issue with brand new context, and limit how many people you ping directly, as this affects everyone's inboxes. Thanks for understanding!
The current spec says that `readOnly` and `writeOnly` are "relevant only for Schema `properties` definitions."

I have encountered at least two parsers, one new, one widely used, that interpret this in the most literal sense, meaning "discard `readOnly` and `writeOnly` if they occur in a top-level schema definition." The parsers either have their own logic that does this, or they parse into an object graph that only represents `readOnly` and `writeOnly` on property definitions.

With this interpretation, reusable schemas can't be intrinsically `readOnly` or `writeOnly` by definition. The following won't work:
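The original snippet is not reproduced here, but a hypothetical example of the pattern in question (an intrinsically read-only reusable schema) would be:

```yaml
components:
  schemas:
    Timestamp:
      type: string
      format: date-time
      readOnly: true    # discarded by the parsers described above
    Foo:
      type: object
      properties:
        created_at:
          $ref: '#/components/schemas/Timestamp'   # intended to make created_at read-only
```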
I would interpret "relevant" to mean that the `readOnly` and `writeOnly` annotations are only effective in the context of a property subschema, not that they are disallowed, safe to ignore, or safe to discard when declared in other contexts.

I think my interpretation is probably consistent with the original intent, but I wasn't part of that discussion. The word "relevant" is ambiguous, and the spec only uses that word in the `readOnly` and `writeOnly` descriptions.

I think the simple answer is to clarify the meaning of the spec. The current spec for `readOnly` starts like this:

> Relevant only for Schema "properties" definitions. Declares the property as "read only".

I'd propose changing it to language that makes clear these keywords are allowed in any Schema Object, but only effective when applied (directly or by reference) to a property subschema.
A similar change is proposed for `writeOnly`. Happy to open a PR with this change if the TSC agrees.