
Why FAIR*? #8

Open
AlasdairGray opened this issue Nov 29, 2017 · 5 comments

Comments

@AlasdairGray

What is the connection between this ontology and the FAIR data principles?

@stain

stain commented Nov 29, 2017

I agree there is a danger of FAIRification buzzword bingo – but the ontology is a nice addition to the SPAR collection in that it also covers reviews.

So the question is: how do we make reviews FAIR? It's fairly safe to say that most of them are not today – they are generally neither findable, accessible, nor interoperable, and certainly not reusable. This ontology might be needed in order to change that.

https://linkedresearch.org/calls proposes using Linked Data Notifications for such replies.

The Web Annotation Model has motivations such as oa:assessing and oa:questioning that could also be relevant.
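To make the idea concrete, here is a minimal sketch of a review modelled as a Web Annotation with the oa:assessing motivation. All the example.org IRIs are hypothetical placeholders, not part of any existing system:

```turtle
@prefix oa:  <http://www.w3.org/ns/oa#> .
@prefix dct: <http://purl.org/dc/terms/> .

# Hypothetical IRIs for illustration only.
<https://example.org/review/42> a oa:Annotation ;
    oa:motivatedBy oa:assessing ;                      # this annotation assesses its target
    oa:hasBody <https://example.org/review/42/text> ;  # the review text itself
    oa:hasTarget <https://example.org/articles/123> ;  # the reviewed article
    dct:creator <https://example.org/people/reviewer-1> .
```

The same pattern with oa:questioning could express reviewer questions rather than an overall assessment.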

Which principles?

OK, let's try to relate this to the actual FAIR Guiding Principles.

F1. (meta)data are assigned a globally unique and persistent identifier

This is linked to the repository question – where do reviews go and how are they retrieved? That is not specified in the ontology, but in usage patterns and examples.

One problem: Reviews might live in more than one place, although yes, they are generally in some tracking system like EasyChair.

Some journals like F1000Research and PeerJ assign DOIs to each review, e.g. https://doi.org/10.5256/f1000research.13348.r25610 or https://doi.org/10.7287/peerj-cs.132v0.2/reviews/3

F2. data are described with rich metadata (defined by R1 below)

I guess this is where the FAIR* Reviews ontology comes in, although a combination of schema.org, DCTerms, OA and PROV would probably do as well, if that combination was well documented.

F3. metadata clearly and explicitly include the identifier of the data it describes

I think in this case the metadata needs the identifiers of both the review and the reviewed document. This is not clear from the ontology – further guidance is needed.
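This double identification could be sketched as follows, using cito:reviews from the SPAR CiTO ontology to link the review to the reviewed work. The review IRI and the article DOI here are hypothetical examples:

```turtle
@prefix cito: <http://purl.org/spar/cito/> .
@prefix dct:  <http://purl.org/dc/terms/> .

# Hypothetical IRIs: the review's own identifier plus a link to the reviewed work.
<https://example.org/review/42>
    dct:identifier "https://example.org/review/42" ;
    cito:reviews <https://doi.org/10.1234/example-article> .
```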

F4. (meta)data are registered or indexed in a searchable resource

This would need some kind of repository of reviews – per-venue is not sufficient. For example, I would want to collate all my reviews, while someone else wants to find all reviews of a particular article. It probably should be a FAIR principle that metadata can live in multiple locations.

A1. (meta)data are retrievable by their identifier using a standardized communications protocol
A1.1 the protocol is open, free, and universally implementable

As we all know, this is a solved problem: let's just use HTTP/HTTPS with permalinks like https://w3id.org.

A1.2 the protocol allows for an authentication and authorization procedure, where necessary

This would be important if we want to describe non-open reviews – e.g. where it is only publicly visible that you HAVE done a review, but not what it says (perhaps not even which article it is of). Those with access should still be able to see it (e.g. members of the programme committee).

Perhaps stating whether a review is open or not would be important metadata.
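An openness flag like this could be expressed with dct:accessRights, which points to a rights statement resource. Everything below is a hypothetical sketch, not a proposal from the ontology:

```turtle
@prefix dct:  <http://purl.org/dc/terms/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Hypothetical sketch: an open review and a closed one.
<https://example.org/review/42>
    dct:accessRights <https://example.org/rights/open-review> .
<https://example.org/review/43>
    dct:accessRights <https://example.org/rights/committee-only> .

<https://example.org/rights/open-review>
    rdfs:label "Open review: full text publicly readable" .
<https://example.org/rights/committee-only>
    rdfs:label "Closed review: existence public, text restricted to the programme committee" .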

A2. metadata are accessible, even when the data are no longer available

Separation of concerns – it must be possible to separate the annotations using the FAIRReview ontology from the review text itself. This just means that the ontology can be used in RDF formats other than RDFa.

I1. (meta)data use a formal, accessible, shared, and broadly applicable language for knowledge representation.

Should it be RDFa, or JSON-LD as with schema.org? I think a recommendation is needed for this to be usable – "any RDF anywhere" is too vague. This is not solved by the ontology itself.

I2. (meta)data use vocabularies that follow FAIR principles

So the vocabularies themselves must be FAIR… hence the FAIR* Reviews ontology being open etc. I think this one is ticked already.

I3. (meta)data include qualified references to other (meta)data

Not sure… is this satisfied by using the SPAR ontologies? It is not clear which links you would follow.

R1. meta(data) are richly described with a plurality of accurate and relevant attributes

Well, we'll have to review the attributes of the ontology… not quite sure about this one yet.

R1.1. (meta)data are released with a clear and accessible data usage license

Not defined now, but very important - what's the license of the metadata vs license of the review text vs license of the reviewed manuscript?
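One way to keep those licenses distinct is to give the metadata and the review text separate IRIs, each with its own dct:license. The IRIs are hypothetical; the license choices are illustrative, not recommendations:

```turtle
@prefix dct: <http://purl.org/dc/terms/> .

# Hypothetical sketch: metadata and review text licensed separately.
<https://example.org/review/42/metadata>
    dct:license <https://creativecommons.org/publicdomain/zero/1.0/> .  # e.g. CC0 for metadata
<https://example.org/review/42/text>
    dct:license <https://creativecommons.org/licenses/by/4.0/> .        # e.g. CC BY for the text
```

The reviewed manuscript would carry its own license statement, outside the review's metadata.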

R1.2. (meta)data are associated with detailed provenance

Not clear from ontology how this should be expressed. Further guidance needed.
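For comparison, here is how the provenance of a review might look in plain PROV-O, with all example.org IRIs hypothetical:

```turtle
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

# Hypothetical sketch of review provenance in PROV-O.
<https://example.org/review/42> a prov:Entity ;
    prov:wasGeneratedBy <https://example.org/review/42/activity> ;
    prov:wasAttributedTo <https://example.org/people/reviewer-1> .

<https://example.org/review/42/activity> a prov:Activity ;
    prov:endedAtTime "2017-11-29T12:00:00Z"^^xsd:dateTime ;
    prov:wasAssociatedWith <https://example.org/people/reviewer-1> .
```

Guidance from the ontology would still be needed on which of these (or equivalent) properties to use.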

R1.3. (meta)data meet domain-relevant community standards

Well, this one will have to be judged by the "community". :)

@stain

stain commented Nov 29, 2017

BTW; I have a couple of Open Reviews listed on https://www.research.manchester.ac.uk/portal/en/researchers/stian-soilandreyes(0b55a0bb-452c-455a-8af6-fe7cc4094c83)/activities.html (hopefully ultimately linking you to my gists) - making them loosely Accessible, but not Findable (e.g. "reply" to a publication/DOI).

It would be interesting to see how these could be marked up (ignoring for the moment that GitHub Gists don't really support linked-data annotations in Markdown) – your current ontology approach seems to assume that the annotations are created as part of a fixed publication-reviewing workflow like EasyChair's, and not for independent or "self-opened" reviews, or continual open reviews like in F1000Research.

As for Reusable – consider articles that are initially rejected and then submitted elsewhere in a revised version. That would be a case for review-to-review citations, which generally do not happen today.

@idafensp

@AlasdairGray I would love to answer your question, but I think @stain analysis does that better than I could :) The vocabulary by itself does not guarantee that the generated reviews align with the FAIR data principles. Additional infrastructure for hosting them as linked data is necessary, following those principles. We are currently working on that. We wanted to release a 0.1 version so as to generate discussion and get feedback.

@AlasdairGray
Author

In that case, I think the name of the ontology is misleading. It would be far better to call it an academic review ontology, or something of that sort, and leave the FAIR story to your surrounding infrastructure.

@idafensp

@stain thanks for your discussion of the model, it is highly interesting. As you have stated throughout your comments, the ontology by itself cannot guarantee a review's FAIRness (no vocabulary can, actually). We are working on a system for that.

> Agree there is danger of FAIRification buzzword bingo

Yes, that is true. We discussed the naming quite a lot, but even though the ontology itself does not "FAIRify" a review, it is part of the solution. We might change the name in the future.

> BTW; I have a couple of Open Reviews listed on

We were aware of them :). We found them when looking for OpenCitations reviews.

> and not for independent or "self-opened" reviews

Not sure what you mean by this. One of the goals we have is to support self-publication of reviews, in which the author and the entity requesting the review are the same individual.

> It would be interesting to see how these can be marked up

Interested in joining our beta testers team? :)

fairreviews pushed a commit that referenced this issue Jan 17, 2018