
What are the OSLC 2.0 / 3.0 Compatibility Requirements #10

Open
jamsden opened this issue Dec 3, 2018 · 20 comments

jamsden commented Dec 3, 2018

Are clients and servers expected to support both versions independently, with no compatibility implied between them? This appears to be the current compatibility approach and guidance.

What is the recommendation for OSLC 2.0 capabilities not yet included in OSLC 3.0: avoid them, or implement them on a 3.0 server? Do we even know whether that's possible? Use a 2.0/3.0 hybrid server? Will vendors be willing to implement that? Is it practical?

Given a recommendation, what are the possible incompatibilities that might need to be discovered and addressed in the current OSLC 3.0 specifications?

@jamsden jamsden self-assigned this Dec 3, 2018
@jamsden jamsden added the major label Dec 3, 2018
jamsden commented Dec 3, 2018

From: martinpain - For a v3 server and a v2 client, using capabilities that are in both versions: either the server chooses to implement both versions, or use an intermediate proxy/gateway server to convert between the versions. I expect server implementors won't want to implement both, so I'd suggest it's important for us to implement the conversion server and distribute it through Eclipse Lyo so it's easily available.

For a v2 server and a v3 client, using capabilities that are in both versions: either the client chooses to implement both versions, or use an intermediate proxy/gateway server to convert between the versions. As above, I would not expect clients to want to implement both.

So far I have only thought about discovery. If we want to benefit from LDP at all, I think we need to break backwards compatibility. I think creating conversion code that can either be downloaded and run, or incorporated into implementors' own products, is VERY important to enable uptake of v3 (and to preserve the implementation effort in v2).

As for individual domains, I would suggest making them backwards-compatible. But I do not know most of the domains well enough to know how much of a burden that would be on implementors of v3.

As for capabilities that are not being included in the current v3 work: the domain that I have worked with (Automation) is one of these, and in this particular case the v2 spec is silent (or implicit rather than explicit) about some of the parts that rely on core v2 discovery (that is, that rely on service providers & services). For using this capability from a v3 server, I think you will at least need an oslc:Service (defined in v2 and not in v3), but discovery of that (on a v3 server) can be via an LDPC (as long as the conversion code/server can correctly map that onto v2 catalogs & service providers).
However, I would expect implementors of these domains are more likely to either just implement v2 or implement both versions (using whatever compatibility/interoperability notes we produce), as their target consumers are more likely to be v2-only.

So, I suggest we break backwards compatibility for discovery, but provide conversion code (including as a download that can run as a standalone server) to convert both ways between v2 and v3 (where possible). But try to keep backwards compatibility in individual domains. And produce notes on how to be natively compatible with v2 and v3 (hopefully without detecting the client's desired version). And produce notes on how to link to domains/capabilities that are only defined in v2.
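For the discovery case described above, the conversion layer could amount to little more than re-shaping one structure into another: mapping the members of an OSLC 3.0 LDP container onto an OSLC 2.0-style ServiceProviderCatalog. A minimal, hypothetical sketch; the input/output layout and all URIs here are assumptions for illustration, not anything the specs define:

```python
# Hypothetical v3 -> v2 discovery conversion: project the members of an
# OSLC 3.0 LDP container onto an OSLC 2.0-style ServiceProviderCatalog.
# Property names follow the OSLC 2.0 vocabulary; the dict layout is an
# illustrative stand-in for a real RDF serialization.

def ldp_container_to_catalog(container_uri, members):
    """members: list of (uri, title) pairs discovered from an LDPC."""
    return {
        "rdf:about": container_uri,
        "rdf:type": "oslc:ServiceProviderCatalog",
        "oslc:serviceProvider": [
            {
                "rdf:about": uri,
                "rdf:type": "oslc:ServiceProvider",
                "dcterms:title": title,
            }
            for uri, title in members
        ],
    }

catalog = ldp_container_to_catalog(
    "https://example.org/ldp/providers",  # hypothetical container URI
    [("https://example.org/ldp/providers/1", "Change Management")],
)
```

The consequence Martin notes later in the thread applies here: this mapping only works if everything a v2 catalog requires (titles, provider URIs) is also required by the v3 discovery mechanism.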

jamsden commented Dec 3, 2018

Martin, thanks for the thoughtful analysis. I completely agree with all your points and have already started working on the protocol converter - either through a proxy server, through a new client API for 3.0 that deals with 2.0 interoperability, or both.

The only issue is that until we have attempted to develop such a converter (no matter how), we will not know what the barriers are for converting between 2.0 and 3.0 common capabilities. That is, there may be information missing in the 3.0 specifications that would be necessary for full interoperability across the versions. The risk is sending the 3.0 specifications to public review without knowing these gaps.

The 3.0 Compatibility note should cover all the information needed to create this protocol converter.

jamsden commented Dec 3, 2018

I have updated https://wiki.oasis-open.org/oslc-core/Discovery#preview with a motivation for OSLC 2.0 and 3.0 interoperability and a table of OSLC capabilities and how they are addressed in the different versions. From this brief analysis, it appears the primary issues that need to be explored are:

  • Discovery compatibility
  • Ability to implement existing OSLC 2.0 capabilities (Actions, Automation, Query, Tracked Resource Set) on an OSLC Core 3.0 server
  • Resource Shapes in support of POST and PUT constraints

If anyone is willing to take on any of these items, let Martin or me know.

jamsden commented Dec 3, 2018

From: martinpain - The table on the Discovery wiki page looks accurate to me.

I don't expect the resource shape vs. W3C RDF Data Shapes question to be a problem, as clients can't rely on servers to provide the shapes anyway.

jamsden commented Dec 3, 2018

Worst case is that clients doing POST and PUT without a constraining Resource Shape might expect more 4xx responses. For resource creation, this shouldn't be a big problem because clients could have access to a creationDialog. For update, clients would need the large preview to be useful.

Do we have experience with OSLC 2.0 resource shapes and client/server interactions that would inform the need for resource shapes for OSLC 3.0? Is this something that can be reasonably deferred at no great loss to existing or new client/server implementations and interactions?

jamsden commented Dec 3, 2018

From: martinpain - Personally I can only speak from the experience of our implementation, but we didn't use resource shapes.

jamsden commented Dec 3, 2018

From: ndjc - OSLC 2.0 resource shapes are used by IBM's Jazz Reporting Service and Rational Engineering Lifecycle Manager. The transition path from OSLC 2.0 resource shapes to W3C Data Shapes is not yet clear.

jamsden commented Dec 3, 2018

Is there a reason that the RDFS or OWL schemas couldn't be used to introspect the vocabularies, and so support queries and views reflectively?

jamsden commented Dec 3, 2018

From: ndjc - With the Open World Assumption, and no Unique Name Assumption, RDFS and OWL cannot express constraints of the form that query builders and views might need; rather, such schemas infer extra triples. This is the entire reason behind the W3C Data Shapes initiative.
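To illustrate ndjc's point: a cardinality constraint is a closed-world check, which is exactly what RDFS/OWL semantics do not provide - under OWL, two values of a functional property lead to an inference that they denote the same individual rather than a validation error. A minimal sketch of the kind of check a resource shape expresses; all triple names here are illustrative:

```python
# Minimal closed-world cardinality check of the kind a resource shape
# expresses but RDFS/OWL cannot: OWL would *infer* extra triples from
# conflicting data rather than reject it. All names are illustrative.

def check_exactly_one(triples, subject, prop):
    """Return True iff `subject` has exactly one value for `prop`."""
    values = [o for s, p, o in triples if s == subject and p == prop]
    return len(values) == 1

data = [
    ("ex:bug1", "dcterms:title", "Crash on save"),
    ("ex:bug1", "dcterms:title", "Crash on save (duplicate)"),
]

# A shape-style validator flags a violation here; OWL reasoning would not.
assert not check_exactly_one(data, "ex:bug1", "dcterms:title")
```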

jamsden commented Dec 3, 2018

From: ndjc - The OSLC 2.0 Specification includes this statement:

Rules for new versions of OSLC specifications
When specifying a new version of an OSLC specification the rule is this:
A new version of an OSLC specification is not allowed to introduce changes that will break old clients.

Given that strong statement, I believe that OSLC Core 3.0 needs to be upward compatible with Core 2.0. Without compatibility, there will be a very heavy burden to both clients and servers to upgrade, probably outweighing the advantages, leading to very limited uptake for the new version.

jamsden commented Dec 3, 2018

From: martinpain - That depends on whether we can take the [very] loose interpretation that providing a protocol converter means we aren't breaking it...

jamsden commented Dec 3, 2018

I'll try to summarize where we are on this issue, and where we might want to consider going. We can queue this topic for discussion next meeting, and I have an agenda item with the OSLC-SC to get their input and feedback.

First, let's list the goals of OSLC Core 3.0:
1. Support interoperability and integration of client and server lifecycle applications
2. Reduce the implementation burden for clients and servers
3. Provide a stable, long-lived platform for developing OSLC domain specifications
4. ...

Compatibility requirements that support those goals:
1. An OSLC 2.0 client can access an OSLC 3.0 server without change
2. An OSLC 3.0 client can have limited, discoverable access to an OSLC 2.0 server
That is, OSLC 2.0 and 3.0 clients should interoperate with OSLC 2.0 and 3.0 servers where the discovered capabilities overlap.

Implications of these compatibility requirements:
1. All Core 2.0 capabilities must be in scope (query, paging, resource shapes, TRS, Actions)
2. Anything that is a MAY or SHOULD in Core 2.0 is a candidate for inclusion in 3.0
3. Any MAY or SHOULD OSLC Core 2.0 capability that is not included in, or does not overlap with, an OSLC Core 3.0 capability can be left out of OSLC Core 3.0, and clients and servers can continue to use the open-services.net specification of the capability. In some cases, OSLC Core 3.0 may wish to include MAY and SHOULD references to relevant sections in OSLC Core 2.0.
4. Any MAY or SHOULD OSLC Core 2.0 capability that is included in, or does overlap with, the corresponding OSLC Core 3.0 capability SHOULD be supported by OSLC Core 3.0 in a backward-compatible way.
4.1. OSLC Core 3.0 can extend or provide alternative implementations that complement, or perhaps in the future could lead to deprecation of, OSLC Core 2.0 features.
4.2. We could refine the list of MAY and SHOULD conformance criteria by surveying what current clients and servers actually use, in case compatibility with 2.0 would lead to undesirable technical debt in Core 3.0.

jamsden commented Dec 3, 2018

High level summary of potential changes to Core 3.0 in order to provide compatibility with 2.0:

1. Authentication - Minor changes: ensure HTTP challenge/response is sufficient for OAuth2; other OAuthConfiguration URIs would be covered under discovery, below.

2. Resource Discovery and Representations - Significant change: provide two discovery mechanisms in Core 3.0, one that supports more static discovery using Service Provider Catalog compatible with Core 2.0 and another that supports more dynamic, incremental discovery based on LDP and link headers. OSLC 2.0 vocabulary and resource definitions would need to be supported, and specific RDF/XML resource representations (that clients might be relying on for parsing) would need to be supported.

3. Resource Operations (Create, Read, Update, Partial Update, Delete) - Some minor updates to address If-Match header on update, partial update and paging.

4. Delegated UI (for creation and selection) - Minor changes: just ensure the Dialog resource is consistent with v2.0. Discovery changes are handled above. The windowName protocol for dialog interaction with the parent window isn't included in 3.0, and the postMessage format is a little different.

4.1 Preview - Minor changes: besides those already addressed by Discovery, Preview.initialHeight is not included in OSLC 3.0 and the resize messages are a little different.

5. Query Service - Minor change: Core 3.0 may want to include a MAY conformance criterion referencing the OSLC 2.0 Query Service, as this is something that clients and servers currently use. Other query services such as SPARQL may also be provided.

6. Resource Paging - Should reference OSLC Core 2.0 paging, as current 2.0 clients will be using it. This is a candidate for deprecation in the future if LDP supports paging.

7. Resource Constraints (Resource Shapes) - Minor change to use ldp:constrainedBy and say that servers MAY use ResourceShapes to define the constraints.

8. Tracked Resource Set - No change: Not an issue for OSLC Core 3.0 as clients and servers can continue to use OSLC 2.0 TRS.

9. Attachments - No change: Not defined in OSLC 2.0 core, so there are no compatibility issues.

10. Actions - No change: Not an issue for OSLC Core 3.0 as clients and servers can continue to use OSLC 2.0 Actions.

11. Automation - No change: Not an issue for OSLC Core 3.0 as this is a domain specification and clients and servers can continue to use OSLC 2.0 Automation.

12. Error responses - no change

13. Vocabulary - Some changes; some resources, including Discussion and Comment, are not defined in 3.0.
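For item 3, the If-Match handling is plain conditional-request HTTP: read the resource, keep its ETag, and send the ETag back on PUT so the server can reject stale updates with 412 Precondition Failed. A sketch using Python's standard library; the resource URI and Turtle body are illustrative, and no request is actually sent here:

```python
# Conditional update sketch: a client PUTs a resource with If-Match set
# to the ETag it received on GET, so the server can detect mid-air
# collisions and respond 412 when the representation has changed.
import urllib.request

def build_update_request(resource_uri, body, etag):
    """Construct (but do not send) a conditional PUT request."""
    req = urllib.request.Request(resource_uri, data=body, method="PUT")
    req.add_header("Content-Type", "text/turtle")
    req.add_header("If-Match", etag)  # server replies 412 if the ETag is stale
    return req

req = build_update_request(
    "https://example.org/resources/42",  # hypothetical resource URI
    b"<> a <http://open-services.net/ns/cm#ChangeRequest> .",
    '"abc123"',
)
```

Sending the request with `urllib.request.urlopen(req)` would raise `HTTPError` on a 412, which is the signal to re-GET and retry the update.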

jamsden commented Dec 3, 2018

From: ndjc - 12. Vocabulary - the proposed 3.0 core vocabulary omits some items used by 2.0. Since the vocabulary namespace itself (http://open-services.net/ns/core#) is not versioned, we need to keep all the 2.0 terms, and make only additions or other compatible changes. We can, if appropriate, add vs:termStatus values to indicate 'unstable' or 'archaic' terms. Note that the vs vocabulary is defined on a page, http://www.w3.org/2003/06/sw-vocab-status/note, that makes no guarantee of stability:
Status of This Document
This is a pre-draft from Dan Brickley, attempting to capture something of the thinking behind the vocab-status-ns work. Not yet reviewed by Leigh and Libby, and changing at random.

jamsden commented Dec 3, 2018

Nick, is the core vocabulary document derived from the Resource Shapes describing the classes and properties rendered in the other specifications using ReSpec?

If so, then your vocabulary requirement is that the Resource Shapes must be upward compatible - they can add classes and properties, but not remove them. Correct?

jamsden commented Dec 3, 2018

From: ndjc - The vocabulary and shapes documents are independent - neither is derived from the other. ReSpec forms the human-readable tables in the HTML specs from the shapes, and forms a human-readable HTML representation of the vocabulary, as two separate transforms.

Although there is some overlap between the shapes and the vocabulary (particularly in the names and descriptions of the RDF terms), there are some key differences that make it difficult to derive one from the other. In simple cases, the shapes are richer, and one could generate a vocabulary from a shape with the addition of a small amount of extra info about the vocabulary itself.

However, a vocabulary term could be used in two or more different contexts, where the description of the term is somewhat generic, while the shape describes a specific property that may appear in one or more classes - that is, the shape dcterms:description of a property may be more specific to a particular context than the rdfs:comment on the vocabulary term.

Finally, there are some additional properties that one can put on RDF terms in the vocabulary (vs:term_status, the inverse label and traceability properties defined in http://open-services.net/wiki/core/Vocabulary-Annotation-Vocabulary/, etc.)

One way to look at it is that the vocabulary document defines the meaning of each individual term (the vocabulary), while the shapes define the way things are put together (the grammar).

But your conclusion is still correct - the shapes must be upward compatible. It is incomplete, though: the vocabulary must also be upward compatible.

jamsden commented Dec 3, 2018

From: martinpain - Service provider catalog:
I think this is one of the areas that can require extra effort on the part of server implementors (is that what you meant by "technical debt"?), but I doubt we can do much about that, as it is a critical piece of backwards compatibility. All I can think of doing is making sure that Eclipse Lyo (and preferably a protocol-converter-style server too) can produce an entire catalog from the other kind of discovery, so a server can implement the newer approach and, as part of its compliance with OSLC, include the library or converter to provide the catalog form. The burden on server implementors then becomes a question of including the appropriate library or converter process - they don't have to implement it themselves if they don't want to. The consequence is that we have to make sure that all information that is required in the service provider catalog is also required in the newer discovery mechanism.

Query capability:
It makes sense to reference this. In terms of Core, I don't believe this is a MUST; making it a MUST would introduce a lot of extra effort for server implementors. I know at least one domain specification (Automation) makes query a MUST on at least one type of its resources (but at least one implementation doesn't implement it fully). I know that's a problem for that spec, not for Core directly, but I'm mentioning it here as it's on the topic of compatibility and implementation burden.

Resource shapes:
In my opinion, I expect that we can remove some properties from resource shapes without causing backwards-compatibility problems. If a property is zero-or-more or zero-or-one at v2, then I expect we can remove it without a problem - its absence from the shape doesn't prevent it from being used either by clients or servers. I expect we can neither add nor remove properties with other cardinalities. Also, adding properties should be limited in the same way - we can only give new properties cardinalities of zero-or-one or zero-or-more.
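Martin's rule above can be stated mechanically: a shape change is backward compatible only if every removed or added property was, or is, optional. A hypothetical checker; the dict-of-occurs representation is an assumption for illustration, not a Lyo or OSLC API, though the occurs values follow the OSLC 2.0 names:

```python
# Sketch of the shape-evolution compatibility rule: properties with
# optional cardinality (zero-or-one, zero-or-many) may be removed or
# added; properties with any other cardinality may not change.

OPTIONAL = {"oslc:Zero-or-one", "oslc:Zero-or-many"}

def shape_change_is_compatible(old_props, new_props):
    """old_props/new_props: dicts mapping property name -> occurs value."""
    removed = set(old_props) - set(new_props)
    added = set(new_props) - set(old_props)
    return (all(old_props[p] in OPTIONAL for p in removed)
            and all(new_props[p] in OPTIONAL for p in added))

v2_shape = {
    "dcterms:title": "oslc:Exactly-one",
    "dcterms:description": "oslc:Zero-or-one",
}
v3_shape = {"dcterms:title": "oslc:Exactly-one"}  # optional property dropped

assert shape_change_is_compatible(v2_shape, v3_shape)       # safe change
assert not shape_change_is_compatible(v2_shape, {})          # drops a mandatory property
```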

jamsden commented Dec 3, 2018

Some TC members met on June 4 to discuss compatibility requirements and formulate a proposal for addressing those requirements. That proposal is described in this Wiki page: https://wiki.oasis-open.org/oslc-core/V2Compatibility#preview.

The TC intends to hold a vote on this proposal at the next TC meeting, currently scheduled for June 11. TC members are strongly encouraged to review this proposal, and to add any comments or content directly in the Wiki to clarify or provide any missing information. This will ensure we are prepared to hold this vote and understand the full implications of compatibility.

jamsden commented Dec 3, 2018

From: martinpain - Just to clarify, does the proposal consist solely of the first paragraph and the first bullet point under the "Proposal..." header? (The rest of the text that follows isn't worded as a proposal.)

jamsden commented Dec 3, 2018

From: msarabura - I've added a new header, "Implications for specifications", to clarify. I hope Jim doesn't mind; it is otherwise unchanged.

The implications for Resource Operations are not clear to me. Is the intention to make 3.0 "Compatible" with 2.0?
